Time-Series and Transformer-Based Modeling of Quantum Hardware Behavior, Calibration Drift, and Noise Analysis
Superconducting quantum processors undergo continuous decoherence-time (T₁, T₂) drift and gate-fidelity degradation between calibration cycles. This project trains recurrent and Transformer architectures on multi-qubit telemetry to forecast calibration metrics, detect anomalous drift, and schedule proactive recalibration, reducing wasted compute time on miscalibrated devices.
Configure the qubit signal parameters below, generate a synthetic T₁ coherence-time series, then run the forecaster. If the local Flask API server is running (python -m src.server), the LSTM model is queried directly; otherwise an exponential-smoothing fallback runs entirely in your browser.
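The in-browser fallback itself is JavaScript, but the idea translates directly. Below is a minimal Python sketch of Holt's (double) exponential smoothing with trend extrapolation; the function name and smoothing constants are illustrative, since the demo's exact fallback is not specified here.

```python
def exp_smoothing_forecast(series, horizon=8, alpha=0.3, beta=0.1):
    """Holt's linear (double) exponential smoothing.

    alpha/beta are assumed values, not taken from the demo code.
    Returns a `horizon`-step-ahead forecast by extrapolating the
    final smoothed level and trend.
    """
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # Extrapolate h steps ahead from the last observation
    return [level + (h + 1) * trend for h in range(horizon)]
```

On a perfectly linear series the smoother locks onto the trend exactly, which makes it a reasonable zero-dependency stand-in when the LSTM endpoint is unreachable.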
Each notebook is a self-contained end-to-end experiment covering data exploration, model training, evaluation, and uncertainty quantification. Run them locally with jupyter lab or view the pre-rendered HTML exports below.
Deep tutorial on VanillaRNN, LSTM & GRU for qubit drift forecasting. Derives all gate equations, ACF/PSD stationarity analysis, AdamW + CosineAnnealing training, PR curves & threshold selection, MC-Dropout Bayesian uncertainty, conformal prediction intervals, and practical deployment applications.
Comprehensive Transformer tutorial: derives multi-head self-attention, sinusoidal positional encoding, Pre-LN vs Post-LN stability, GELU activation, attention pattern visualisation, unsupervised anomaly detection with encoder-decoder AE, early-warning classification, and cross-qubit transfer learning applications.
Unified 5-qubit pipeline: cross-qubit T₁ correlation heatmaps, macro-averaged MAPE evaluation, Youden J-statistic threshold optimisation, cost-model recalibration policy simulation, bootstrap confidence intervals, paired Wilcoxon signed-rank tests with Bonferroni correction, and adaptive scheduling deployment guide.
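The conformal prediction intervals mentioned in the notebooks follow a simple recipe: take absolute residuals on a held-out calibration set and use their finite-sample-corrected quantile as a symmetric interval half-width. A minimal split-conformal sketch (the notebooks' exact procedure may differ, and the function name is illustrative):

```python
import numpy as np

def split_conformal_interval(residuals, y_pred, alpha=0.1):
    """Split-conformal interval around a point forecast.

    `residuals` must come from a calibration set disjoint from the
    training data; `alpha` is the target miscoverage rate.
    """
    n = len(residuals)
    # Finite-sample correction: ceil((n+1)(1-alpha))/n instead of 1-alpha
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(np.abs(residuals), q_level)
    return y_pred - q, y_pred + q
```

With exchangeable calibration residuals this guarantees at least 1 − α coverage regardless of the underlying forecaster, which is why it pairs well with black-box models like the LSTM.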
All models share the same dual-output interface: forward(x) → (forecast_horizon, drift_logit), enabling simultaneous regression (MSE) and binary classification (BCE) with a combined loss α·MSE + (1−α)·BCE.
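Numerically, the combined objective works out as follows. This is a NumPy sketch of the loss math only (the repository presumably computes it on PyTorch tensors; the helper name is illustrative, and α = 0.7 matches the training-stage description):

```python
import numpy as np

def combined_loss(forecast, target, drift_logit, drift_label, alpha=0.7):
    """alpha * MSE(forecast, target) + (1 - alpha) * BCE(drift_logit, label)."""
    mse = np.mean((forecast - target) ** 2)
    p = 1.0 / (1.0 + np.exp(-drift_logit))  # sigmoid on the drift logit
    eps = 1e-12                             # guard against log(0)
    bce = -np.mean(drift_label * np.log(p + eps)
                   + (1 - drift_label) * np.log(1 - p + eps))
    return alpha * mse + (1 - alpha) * bce
```

Weighting the two terms with a single α lets one scalar trade forecast accuracy against drift-detection sensitivity without changing the model heads.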
Single-layer Elman RNN baseline. Captures short-range temporal dependencies. Useful as an ablation-study reference against gated architectures.
2-layer stacked LSTM with 0.2 dropout. The gated cell state mitigates vanishing gradients, enabling longer-range drift trend modeling over 48-step histories.
2-layer GRU variant of the LSTM. Fewer parameters with comparable performance on short-horizon calibration drift, offering a favorable accuracy/compute tradeoff.
Pre-LayerNorm encoder-only Transformer with sinusoidal positional encodings. Multi-head attention (4 heads) captures non-local temporal correlations across the full input window.
Encoder-decoder (autoencoder) trained exclusively on stable windows. High reconstruction error at inference time flags anomalous or rapidly drifting segments without requiring drift labels.
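Because the autoencoder only ever sees stable windows, flagging drift reduces to thresholding its reconstruction error. A minimal sketch, assuming a mean + k·σ rule calibrated on stable-window errors (k = 3 and the function name are illustrative choices, not taken from the repo):

```python
import numpy as np

def anomaly_flags(errors_stable, errors_new, k=3.0):
    """Flag windows whose reconstruction error exceeds the
    mean + k*std of errors observed on stable training windows.

    errors_* are per-window reconstruction MSEs from the autoencoder.
    """
    thresh = errors_stable.mean() + k * errors_stable.std()
    return errors_new > thresh, thresh
```

This is what makes the detector label-free: the threshold comes entirely from stable data, so no drift annotations are needed at training time.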
The complete four-stage pipeline from raw telemetry to actionable recalibration decisions.
Synthetic multi-qubit telemetry at 0.5 h resolution (5 qubits × 200 steps). Features: T₁, T₂, 1Q/2Q gate fidelity, readout error, cross-resonance phase, gate error per Clifford.
Sliding-window datasets (seq_len=24–48, horizon=8). Min-Max normalization per qubit. Temporal 80/20 train-test split to prevent data leakage.
AdamW optimizer + CosineAnnealingLR. Combined loss α=0.7 MSE + 0.3 BCE. Metrics: MAE, RMSE, MAPE (regression), Precision/Recall/F1/AUC (classification).
MC-Dropout sampling + conformal prediction intervals. Flask REST API for live inference. Recalibration trigger simulation shows ~30% reduction in wasted cycles vs. fixed-interval scheduling.
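The windowing and leakage-free split from stage 2 can be sketched as follows; function names are illustrative (the repo's dataset code may differ), but the defaults mirror the stated seq_len=24, horizon=8, and temporal 80/20 split.

```python
import numpy as np

def make_windows(series, seq_len=24, horizon=8):
    """Slide over one qubit's (already min-max-scaled) telemetry and
    emit (history, future) pairs for the forecaster."""
    X, Y = [], []
    for i in range(len(series) - seq_len - horizon + 1):
        X.append(series[i:i + seq_len])
        Y.append(series[i + seq_len:i + seq_len + horizon])
    return np.array(X), np.array(Y)

def temporal_split(X, Y, train_frac=0.8):
    """Chronological split: no shuffling, so every test window starts
    after the last training window (prevents temporal leakage)."""
    n_train = int(len(X) * train_frac)
    return (X[:n_train], Y[:n_train]), (X[n_train:], Y[n_train:])

series = np.linspace(0.0, 1.0, 200)  # stand-in for 200 steps of scaled T1
X, Y = make_windows(series)
train, test = temporal_split(X, Y)
```

A random split here would let the model memorize futures that overlap its training histories, which is exactly the leakage the chronological split avoids.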
Run the full pipeline in five commands.
# 1. Clone & install
git clone https://github.com/mohuyn/Quantum-Drift-Forecasting.git
cd Quantum-Drift-Forecasting
pip install -r requirements.txt
# 2. Train (default: LSTM, 60 epochs, seq_len=24, horizon=8)
python -m src.train --model lstm --epochs 60
# 3. Train all models
for MODEL in rnn lstm gru transformer; do
  python -m src.train --model "$MODEL" --epochs 60
done
# 4. Start the inference API
python -m src.server --port 5000
# 5. Open the demo
open index.html # or just drag into a browser