Research · Hybrid Quantum-Classical AI

Hybrid Quantum–Graph AI

A unified framework in which graphs serve as the common language connecting QAOA variational quantum optimization and Graph Convolutional Networks for real-data biomedical inference. The repository is organized around three core notebook tracks: an Adaptive Quantum GCN warm-start study, an Adaptive BioGCN biomedical benchmark, and a higher-performing ResidualClinicalGCN extension.

QAOA · GNN/GCN · MaxCut · Genomics · Clinical AI · PyTorch · Exact Statevector
Best CTG result: 98.8% (ResidualClinicalGCN held-out accuracy)
Quality retained: 99.95% (Adaptive Quantum GCN versus depth-2 classical QAOA)
Warm-start speedup: 35,070× (representative integrated hybrid notebook result)
Pathologic detection: 31/35 (with one false positive at the strongest operating point)

Contribution Snapshot

The landing page highlights the strongest quantitative claims from each notebook rather than older intermediate experiments.

Each notebook plays a different technical role. The optimization notebook defends graph-conditioned quantum warm-starting, the biomedical notebook provides the strongest clinical operating point, and the combined notebook shows that both branches fit the same graph-learning thesis in a single reproducible workflow.

Optimization branch

Adaptive Quantum GCN

Transcriptomically adapted graph learning drives near-parity depth-2 QAOA warm starts on held-out co-expression graphs while collapsing inference cost relative to classical search.

Held-out ratio: 0.868 vs 0.869 (adaptive versus classical)
Quality retained: 99.95% (of depth-2 classical quality)
Inference speedup: ~14,122× (median single-pass acceleration)
This is the repository's strongest optimization result: the adaptive model preserves essentially all of the classical benchmark quality while remaining fast enough to motivate warm-start deployment.
Biomedical branch

Adaptive BioGCN + ResidualClinicalGCN

The biomedical branch combines a reproducible benchmark tier with a stronger residual clinical extension, so the notebook supports both stability claims and best-accuracy claims on the CTG graph.

Best held-out accuracy: 98.8% (ResidualClinicalGCN)
ROC AUC: 0.978 (at the strongest operating point)
Benchmark robustness: 95.49% ± 0.97% (Adaptive BioGCN, fixed split)
At the strongest operating point, the residual model detects 31 / 35 pathologic exams with 1 false positive, while the Adaptive BioGCN benchmark preserves a stable fixed-split evaluation story.
Integrated branch

Integrated Hybrid Narrative

The combined notebook is the clearest thesis-level presentation: one graph-learning formalism supports quantum warm-starting, biomedical classification, robustness reporting, and deployment-facing interpretation in a single walkthrough.

Warm-start speedup: 35,070× (representative integrated result)
Benchmark accuracy: 96.71% (Adaptive BioGCN)
Integrated residual result: 96.5% (combined evaluation)
This notebook is the strongest top-down presentation asset for technical interviews because it ties the quantum and biomedical branches together rather than treating them as separate prototypes.

Figure Gallery

Selected biomedical plots are surfaced directly on the landing page so visitors can inspect the strongest evaluation, robustness, and threshold outputs without opening the full notebook first.

The repository currently exposes standalone PNG figure assets from the biomedical branch. These plots show held-out discrimination, fixed-split robustness, operating-point behavior, and cohort geometry so the landing page includes direct visual context in addition to the notebook and summary sections.

Held-out evaluation: 98.8% (best biomedical accuracy shown below)
Robustness: 95.49% ± 0.97% (fixed-split Adaptive BioGCN stability)
Operating point: 31/35 (pathologic exams detected with 1 false positive)

Interactive Graph Demo

Generate a random graph, then call the GNN to predict QAOA angles (γ, β) and the expected MaxCut value. This interface is an illustrative depth-1 deployment view; the strongest quantitative optimization results remain notebook-grounded.


Demo range: 3 to 10 nodes, which keeps the depth-1 QAOA prediction path fast and stable in both the browser and the API-backed mode.


What the prediction means

The demo samples a graph G = (V, E), builds a simple node-feature view from the graph structure, and then asks the model for a good depth-1 QAOA parameter pair (γ, β). In practical terms, the GNN is learning a shortcut from graph structure to quantum-circuit settings, so it can propose angles without rerunning a full classical search loop for every new graph.

The website currently fixes p = 1 so the interface stays fast, the optimization problem remains easy to interpret, and the frontend stays consistent with the current backend model, which predicts one (γ, β) pair rather than a longer sequence of angles.
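The structure-to-angles shortcut described above can be sketched in a few lines. This is a hand-rolled, one-layer GCN-style forward pass in NumPy with random, untrained weights; the function name, features, and dimensions are all illustrative stand-ins, not the repository's actual model.

```python
import numpy as np

def predict_angles(n, edges, seed=0):
    """Illustrative map from graph structure to a depth-1 (gamma, beta) pair:
    one GCN-style layer with random, untrained weights plus a mean readout."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    A_hat = A + np.eye(n)                       # adjacency with self-loops
    d = A_hat.sum(axis=1)
    norm = A_hat / np.sqrt(np.outer(d, d))      # symmetric GCN normalization
    X = np.stack([A.sum(axis=1), np.ones(n)], axis=1)        # degree + constant feature
    H = np.maximum(norm @ X @ rng.normal(size=(2, 8)), 0.0)  # graph convolution + ReLU
    gamma, beta = H.mean(axis=0) @ rng.normal(size=(8, 2))   # graph-level readout
    return float(gamma), float(beta)
```

With trained weights, a readout like this is what lets the model propose (γ, β) from graph structure alone, without a per-graph search loop.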

The value reported as Expected Cut is the probability-weighted average number of graph edges cut by the final depth-1 QAOA state for the displayed angles. When a reachable /predict endpoint is available, those angles are proposed by the trained GNN backend; otherwise the page computes a browser-side depth-1 QAOA baseline using exact state simulation together with a lightweight local refinement step.
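The browser-side baseline described above amounts to an exact depth-1 statevector evaluation. A self-contained NumPy sketch of that computation (the page's actual implementation may differ in details):

```python
import numpy as np

def qaoa_expected_cut(n, edges, gamma, beta):
    """Exact depth-1 QAOA statevector baseline for MaxCut.

    Prepares |+>^n, applies the diagonal cost phase exp(-i*gamma*C) and one
    RX(2*beta) mixer per qubit, then returns the probability-weighted
    average cut value of the resulting state.
    """
    dim = 2 ** n
    # Cut value C(z) for every computational basis state z.
    bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
    cuts = np.zeros(dim)
    for u, v in edges:
        cuts += bits[:, u] != bits[:, v]
    # Uniform superposition, then the cost phase (diagonal, so elementwise).
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
    state *= np.exp(-1j * gamma * cuts)
    # Apply the single-qubit mixer RX(2*beta) to every qubit in turn.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, np.moveaxis(psi, q, 0), axes=1), 0, q)
    probs = np.abs(psi.reshape(dim)) ** 2
    return float(probs @ cuts)
```

At γ = β = 0 the state stays uniform and the expected cut is |E|/2; good angles push it toward the true MaxCut value.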

MaxCut Objective
C(z) = Σ_{(u,v) ∈ E} (1 − z_u z_v) / 2

Interpretation: look at every edge in the graph one by one. If its two endpoints end up on opposite sides of the partition, that edge contributes 1. If they land on the same side, it contributes 0. Add those edge contributions together, and you get the cut value.
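The edge-by-edge reading above is a one-liner in code; here z is a ±1 side label per node, and the helper name is illustrative:

```python
def cut_value(edges, z):
    """Sum each edge's contribution (1 - z_u * z_v) / 2, where z holds a
    +1/-1 side label per node: 1 if the endpoints disagree, else 0."""
    return sum((1 - z[u] * z[v]) // 2 for u, v in edges)

# Triangle with node 1 alone on one side: edges (0, 1) and (1, 2) are cut.
# cut_value([(0, 1), (1, 2), (0, 2)], [+1, -1, +1]) -> 2
```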

Depth-1 QAOA State
|γ, β⟩ = U_B(β) U_C(γ) |+⟩^{⊗n}

Interpretation: start from a state that treats all possible graph partitions equally. Then apply one cost step controlled by γ, which biases the state toward better cuts, and one mixer step controlled by β, which redistributes probability across candidate solutions. That final quantum state is the one evaluated by the demo.

Quantity Being Maximized
⟨C⟩ = ⟨γ, β| Ĉ |γ, β⟩

Interpretation: this is the average cut score produced by the final QAOA state. The algorithm assigns probabilities to many possible graph partitions, computes each partition's cut value, and then takes the probability-weighted average. Better angles are the ones that push more probability onto high-cut solutions, which makes this expected value larger.

📓 Notebook Guide

Use this section as the single launch point for the three full notebook walkthroughs, each of which can be opened inline or in a separate tab.

⚛ Adaptive Quantum GCN Tutorial
📄 Research Summary · open research_paper.md

Approach

Graphs serve as the common representation for both branches. On the quantum side, the repository's user-facing Adaptive Quantum GCN predicts QAOA warm-start angles (γ, β) from graph structure, reducing dependence on repeated classical search. On the biomedical side, Adaptive BioGCN provides a stable benchmark tier while ResidualClinicalGCN captures the strongest held-out CTG operating point.

Key Results

Held-out QAOA ratio (adaptive vs classical): 0.868 vs 0.869
Held-out optimization quality retained: 99.95%
Median adaptive warm-start speedup: ~1.41 × 10^4×
Best CTG held-out accuracy: 98.8%
Best biomedical ROC AUC: 0.978
Adaptive BioGCN benchmark robustness: 95.49% ± 0.97%
Integrated notebook warm-start speedup: 35,070×

Scope & Limitations

  • The website demo is a depth-1 presentation layer; the strongest optimization results come from notebook-level depth-2 evaluations
  • QAOA simulation remains exact statevector and therefore scales exponentially; the reported studies stay in the small-graph regime
  • Biomedical evaluation is retrospective research output, not a clinically validated deployment system
  • The best biomedical result comes from the residual extension, while the benchmark model is retained for reproducibility and robustness analysis
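The exact-statevector caveat above can be made concrete: a simulator must hold one complex amplitude per basis state, so memory alone grows as 2^n. A back-of-envelope sketch (assuming complex128 amplitudes, not a profile of the repository's actual simulator):

```python
def statevector_bytes(n):
    """Memory for one exact complex128 statevector of n qubits:
    2**n amplitudes at 16 bytes each."""
    return (2 ** n) * 16

# 10 qubits need 16 KiB; 30 qubits already need 16 GiB, which is why the
# reported studies stay in the small-graph regime.
```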

Quick Start

pip install -r requirements.txt
python -m src.train \
  --dataset-size 20 --n 6 \
  --p 1 --epochs 10 \
  --model-path model.pt
python -m src.server