Research Demo · PyTorch · Flask · Transformer Time-Series Analysis

Quantum Drift Forecasting

Time-Series and Transformer-Based Modeling of Quantum Hardware Behavior, Calibration Drift, and Noise

Superconducting quantum processors undergo continuous decoherence-time (T₁, T₂) drift and gate-fidelity degradation between calibration cycles. This project trains recurrent and Transformer architectures on multi-qubit telemetry to forecast calibration metrics, detect anomalous drift, and schedule proactive recalibration, reducing wasted compute time on miscalibrated devices.

Interactive Drift Simulation

Configure the qubit signal parameters below, generate a synthetic T₁ coherence-time series, then run the forecaster. If the local Flask API server is running (python -m src.server), the LSTM model is queried directly — otherwise an exponential-smoothing fallback runs entirely in your browser.
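
The in-browser fallback itself is JavaScript, but its logic is ordinary simple exponential smoothing. A minimal Python sketch of that fallback logic (the smoothing factor alpha=0.3 and the flat horizon extension are illustrative assumptions, not the demo's exact values):

import numpy as np

def ses_forecast(series, horizon=8, alpha=0.3):
    # Simple exponential smoothing: level = alpha*y + (1 - alpha)*level.
    # Basic (non-trend) SES extends the last smoothed level flat over the horizon.
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return np.full(horizon, level)

# e.g. a 60-step synthetic T1 series -> 8-step fallback forecast
t1 = 90 + np.cumsum(np.random.normal(0, 0.5, 60))
print(ses_forecast(t1, horizon=8))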

Signal Parameters

[Interactive widget: three parameter sliders (ranges 30–120, 0.5–10.0, 0.000–0.250), a 60-step series with an 8-step forecast, and a live drift-status indicator (initially STABLE, drift 0.0%).]

T₁ Coherence Time Forecast

Jupyter Notebook Reports

Each notebook is a self-contained end-to-end experiment covering data exploration, model training, evaluation, and uncertainty quantification. Run them locally with jupyter lab or view the pre-rendered HTML exports below.

📊 RNN Drift Forecasting

Deep tutorial on VanillaRNN, LSTM, and GRU for qubit drift forecasting. Derives the gate equations and covers ACF/PSD stationarity analysis, AdamW + CosineAnnealing training, PR curves and threshold selection, MC-Dropout Bayesian uncertainty, conformal prediction intervals, and practical deployment (a conformal-interval sketch follows this card).

LSTM · GRU · MC-Dropout · Conformal · ACF/ADF · PR Curves
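
The conformal step in this notebook is, in outline, split conformal prediction. A minimal sketch, assuming a held-out calibration set and symmetric absolute-residual scores (the 90% coverage default is illustrative):

import numpy as np

def split_conformal_interval(y_cal, yhat_cal, yhat_test, alpha=0.1):
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - yhat_cal)
    n = len(scores)
    # Finite-sample-corrected quantile for (1 - alpha) coverage.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, q_level)
    return yhat_test - q, yhat_test + q  # lower, upper interval bounds
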
🔬 Transformer Calibration

Comprehensive Transformer tutorial: derives multi-head self-attention and sinusoidal positional encoding, then covers Pre-LN vs. Post-LN stability, GELU activation, attention-pattern visualization, unsupervised anomaly detection with an encoder-decoder autoencoder, early-warning classification, and cross-qubit transfer learning.

Transformer · Self-Attention · Anomaly Detection · Transfer Learning · Pos. Encoding · Early Warning
⚛️ Combined Multi-Qubit Pipeline

Unified 5-qubit pipeline: cross-qubit T₁ correlation heatmaps, macro-averaged MAPE evaluation, Youden J-statistic threshold optimization, cost-model recalibration policy simulation, bootstrap confidence intervals, paired Wilcoxon signed-rank tests with Bonferroni correction, and an adaptive-scheduling deployment guide (a Youden-threshold sketch follows this card).

Multi-Qubit · Cost Model · Bootstrap CI · Wilcoxon Test · Recalibration · Deployment
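
The Youden J step picks the drift-alarm threshold that maximizes J = TPR − FPR. A minimal sketch with scikit-learn (function name is illustrative):

import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, drift_scores):
    # J = TPR - FPR: the ROC point furthest above the chance diagonal.
    fpr, tpr, thresholds = roc_curve(y_true, drift_scores)
    return thresholds[np.argmax(tpr - fpr)]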

Model Architectures

All models share the same dual-output interface: forward(x) → (forecast_horizon, drift_logit), enabling simultaneous regression (MSE) and binary classification (BCE) with a combined loss α·MSE + (1−α)·BCE.
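
A minimal sketch of that objective (alpha = 0.7 follows the training setup described below; the helper name is illustrative, and drift labels are assumed to be float tensors in {0, 1}):

import torch.nn.functional as F

def combined_loss(forecast, drift_logit, target_seq, drift_label, alpha=0.7):
    # alpha * MSE on the multi-step forecast head
    # + (1 - alpha) * BCE on the raw (pre-sigmoid) drift logit.
    mse = F.mse_loss(forecast, target_seq)
    bce = F.binary_cross_entropy_with_logits(drift_logit, drift_label)
    return alpha * mse + (1 - alpha) * bce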

RNN

VanillaRNN

Single-layer Elman RNN baseline. Captures short-range temporal dependencies and serves as an ablation-study reference against the gated architectures. A minimal interface sketch follows the spec list below.

  • Hidden dim: 64
  • Horizon: 8 steps
  • Parameters: ~18 K
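
A sketch of how this spec maps onto the dual-output interface (module and head names are illustrative, not the repo's exact code):

import torch.nn as nn

class VanillaRNNSketch(nn.Module):
    def __init__(self, n_features=1, hidden=64, horizon=8):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.forecast_head = nn.Linear(hidden, horizon)  # regression head
        self.drift_head = nn.Linear(hidden, 1)           # drift-logit head

    def forward(self, x):          # x: (batch, seq_len, n_features)
        _, h = self.rnn(x)         # h: (num_layers, batch, hidden)
        h = h[-1]                  # final hidden state of the last layer
        return self.forecast_head(h), self.drift_head(h).squeeze(-1)
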
LSTM

LSTMForecaster

2-layer stacked LSTM with 0.2 dropout. The gated cell state mitigates vanishing gradients, enabling longer-range drift trend modeling over 48-step histories.

  • Hidden dim: 128, 2 layers
  • Dropout: 0.2
  • Parameters: ~133 K
GRU

GRUForecaster

2-layer GRU variant of the LSTM. Fewer parameters with comparable performance on short-horizon calibration drift, offering a favorable accuracy/compute tradeoff.

  • Hidden dim: 128, 2 layers
  • Dropout: 0.2
  • Parameters: ~100 K
Transformer

TransformerForecaster

Pre-LayerNorm encoder-only Transformer with sinusoidal positional encodings. Multi-head attention (4 heads) captures non-local temporal correlations across the full input window. A positional-encoding sketch follows the spec list below.

  • d_model=128, 4 heads, 3 layers
  • FFN dim: 256, dropout: 0.1
  • Parameters: ~265 K
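
The sinusoidal encoding is the standard fixed sin/cos table added to the input projection. A minimal sketch (d_model = 128 matches the spec above):

import math
import torch

def sinusoidal_encoding(seq_len, d_model=128):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe  # (seq_len, d_model); broadcast over the batch dimension
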
AE

AnomalyDetector

Encoder-decoder autoencoder trained exclusively on stable windows. High reconstruction error at inference time flags anomalous or rapidly drifting segments without requiring drift labels. A thresholding sketch follows the spec list below.

  • Hidden dim: 64, 2 layers
  • Loss: MSE reconstruction
  • Unsupervised (no labels)
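
The flagging rule is a threshold on reconstruction error. A minimal sketch, assuming the threshold is a high quantile of errors on stable training windows (the 99th percentile is an illustrative choice):

import numpy as np

def flag_drift(errors_stable, errors_new, q=0.99):
    # Threshold = high quantile of reconstruction error on stable windows;
    # anything above it at inference time is flagged as anomalous drift.
    threshold = np.quantile(errors_stable, q)
    return errors_new > threshold, threshold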

Research Summary

The complete four-stage pipeline from raw telemetry to actionable recalibration decisions.

01

Data Acquisition

Synthetic multi-qubit telemetry at 0.5 h resolution (5 qubits × 200 steps). Features: T₁, T₂, 1Q/2Q gate fidelity, readout error, cross-resonance phase, gate error per Clifford.
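
The actual generator lives in the repo; below is an illustrative stand-in for one qubit's T₁ channel (the decay constant, noise scale, and jump size are assumptions, not the project's parameters):

import numpy as np

def synthetic_t1(n_steps=200, t1_base=90.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps)                      # one step = 0.5 h
    trend = t1_base * np.exp(-t / 2000.0)       # slow inter-calibration drift
    noise = rng.normal(0.0, 0.8, n_steps)       # measurement fluctuation
    jump = -5.0 * (t > rng.integers(100, 180))  # one abrupt drift event
    return trend + noise + jump

telemetry = np.stack([synthetic_t1(seed=q) for q in range(5)])  # 5 qubits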

02

Sequence Modeling

Sliding-window datasets (seq_len=24–48, horizon=8). Min-Max normalization per qubit. Temporal 80/20 train-test split to prevent data leakage.
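
In outline (the helper name is illustrative; min-max statistics are taken from the training portion only, so the temporal split stays leak-free):

import numpy as np
import torch
from torch.utils.data import TensorDataset

def windows_for_qubit(series, seq_len=24, horizon=8, train_frac=0.8):
    cut = int(train_frac * len(series))
    lo, hi = series[:cut].min(), series[:cut].max()  # train-only min-max stats
    s = (series - lo) / (hi - lo)
    xs = [s[i:i + seq_len] for i in range(len(s) - seq_len - horizon + 1)]
    ys = [s[i + seq_len:i + seq_len + horizon] for i in range(len(xs))]
    x = torch.tensor(np.array(xs), dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(np.array(ys), dtype=torch.float32)
    k = cut - seq_len - horizon + 1   # windows whose targets end in the train span
    return TensorDataset(x[:k], y[:k]), TensorDataset(x[k:], y[k:])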

03

Training & Evaluation

AdamW optimizer with CosineAnnealingLR scheduling. Combined loss 0.7·MSE + 0.3·BCE (α = 0.7). Metrics: MAE, RMSE, and MAPE for regression; Precision, Recall, F1, and AUC for classification.
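
A minimal training loop matching that recipe (weight decay and base learning rate are illustrative defaults; the loader is assumed to yield (window, target_seq, drift_label) triples):

import torch
import torch.nn.functional as F

def train(model, loader, epochs=60, alpha=0.7, lr=1e-3):
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    for _ in range(epochs):
        for x, y_seq, y_drift in loader:
            forecast, logit = model(x)   # shared dual-output interface
            loss = (alpha * F.mse_loss(forecast, y_seq)
                    + (1 - alpha) * F.binary_cross_entropy_with_logits(logit, y_drift))
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()                     # cosine anneal once per epoch
    return model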

04

Uncertainty & Deployment

MC-Dropout sampling + conformal prediction intervals. Flask REST API for live inference. Recalibration trigger simulation shows ~30% reduction in wasted cycles vs. fixed-interval scheduling.
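
The MC-Dropout half of that step, in sketch form (50 samples is an illustrative count; dropout layers stay stochastic by keeping the model in train mode during sampling):

import torch

@torch.no_grad()
def mc_dropout_forecast(model, x, n_samples=50):
    model.train()   # keep dropout active at inference time
    samples = torch.stack([model(x)[0] for _ in range(n_samples)])
    model.eval()
    # Mean = point forecast; std = approximate epistemic uncertainty band.
    return samples.mean(dim=0), samples.std(dim=0)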

Quick Start

Run the full pipeline in five commands.

# 1. Clone & install
git clone https://github.com/mohuyn/Quantum-Drift-Forecasting.git
cd Quantum-Drift-Forecasting
pip install -r requirements.txt

# 2. Train (default: LSTM, 60 epochs, seq_len=24, horizon=8)
python -m src.train --model lstm --epochs 60

# 3. Train all models
for MODEL in rnn lstm gru transformer; do
    python -m src.train --model $MODEL --epochs 60
done

# 4. Start the inference API
python -m src.server --port 5000

# 5. Open the demo
open index.html   # or drag it into a browser
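
Once the server is up, it can also be queried directly. The route and payload below are assumptions for illustration; check src/server.py for the actual endpoint and field names.

import requests

# NOTE: endpoint path and JSON schema are assumed, not confirmed from the repo.
resp = requests.post(
    "http://localhost:5000/predict",
    json={"series": [92.1, 91.8, 91.5, 90.9]},  # recent T1 readings (µs)
    timeout=5,
)
print(resp.json())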