Adaptive Quantum GCN
Transcriptomically adapted graph learning drives near-parity depth-2 QAOA warm starts on held-out co-expression graphs while collapsing inference cost relative to classical search.
This repository presents a unified framework in which graphs serve as the common language connecting QAOA variational quantum optimization and Graph Convolutional Networks for real-data biomedical inference. It is organized around three core notebook tracks: an Adaptive Quantum GCN warm-start study, an Adaptive BioGCN biomedical benchmark, and a higher-performing ResidualClinicalGCN extension.
The landing page highlights the strongest quantitative claims from each notebook rather than older intermediate experiments.
Each notebook plays a different technical role. The optimization notebook defends graph-conditioned quantum warm-starting, the biomedical notebook provides the strongest clinical operating point, and the combined notebook shows that both branches fit the same graph-learning thesis in a single reproducible workflow.
The biomedical branch combines a reproducible benchmark tier with a stronger residual clinical extension, so the notebook supports both stability claims and best-accuracy claims on the CTG graph.
The combined notebook is the clearest thesis-level presentation: one graph-learning formalism supports quantum warm-starting, biomedical classification, robustness reporting, and deployment-facing interpretation in a single walkthrough.
Selected biomedical plots are surfaced directly on the landing page so visitors can inspect the strongest evaluation, robustness, and threshold outputs without opening the full notebook first.
The repository currently exposes standalone PNG figure assets from the biomedical branch. These plots show held-out discrimination, fixed-split robustness, operating-point behavior, and cohort geometry so the landing page includes direct visual context in addition to the notebook and summary sections.
The strongest biomedical operating point reaches 98.8% held-out accuracy with 0.978 ROC AUC, and this figure anchors that claim in the notebook's executed evaluation output.
The shared benchmark model remains stable on a fixed split at 95.49% ± 0.97% accuracy, supporting the repository's reproducibility story rather than relying on a single run.
The landing page now shows that the notebook evaluates sensitivity, false positives, and threshold choice explicitly, which is essential for a risk-sensitive screening framing.
The PCA audit demonstrates that predictions are evaluated against the broader exam geometry, making the clinical result easier to defend as more than a lucky split artifact.
Generate a random graph, then call the GNN to predict QAOA angles (γ, β) and the expected MaxCut value. This interface is an illustrative depth-1 deployment view; the strongest quantitative optimization results remain notebook-grounded.
Demo range: 3 to 10 nodes, which keeps the depth-1 QAOA prediction path fast and stable in both the browser and the API-backed mode.
Click Random Graph to generate a graph.
The demo samples a graph G = (V, E), builds a simple node-feature view from the graph structure, and then asks the model for a good depth-1 QAOA parameter pair (γ, β). In practical terms, the GNN is learning a shortcut from graph structure to quantum-circuit settings, so it can propose angles without rerunning a full classical search loop for every new graph.
The website currently fixes p = 1 so the interface stays fast, the optimization problem remains easy to interpret, and the frontend stays consistent with the current backend model, which predicts one (γ, β) pair rather than a longer sequence of angles.
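The demo's graph-to-angles flow can be sketched in a few lines. This is an illustrative reconstruction, not the repository's code: the Erdos-Renyi-style sampling, the degree-based node features, and the function names (`sample_graph`, `structural_features`) are all assumptions; in the real demo a trained GNN performs the final mapping to a (γ, β) pair.

```python
import random

def sample_graph(n, p_edge=0.5, seed=0):
    """Sample an undirected graph G = (V, E) as a list of node-index pairs.
    Erdos-Renyi-style sampling is an assumption, not the demo's exact scheme."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p_edge]

def structural_features(n, edges):
    """A simple node-feature view built from graph structure alone:
    normalized degree (an illustrative choice of feature)."""
    deg = [0.0] * n
    for i, j in edges:
        deg[i] += 1.0
        deg[j] += 1.0
    return [d / max(1, n - 1) for d in deg]

n = 6
edges = sample_graph(n)
feats = structural_features(n, edges)
# A trained GNN would map (edges, feats) to one depth-1 angle pair:
# gamma, beta = model(edges, feats)   # placeholder for the backend call
```

The point of the shortcut is visible in the shapes: the model consumes only structural inputs, so proposing angles for a new graph costs one forward pass instead of a classical search loop.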
The value reported as Expected Cut is the probability-weighted average number of graph edges cut by the final depth-1 QAOA state for the displayed angles. When a reachable /predict endpoint is available, those angles are proposed by the trained GNN backend; otherwise the page computes a browser-side depth-1 QAOA baseline using exact state simulation together with a lightweight local refinement step.
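The "lightweight local refinement step" in the fallback path can be sketched as greedy coordinate search over (γ, β): try small moves in each angle, keep any move that improves the objective, and shrink the step when nothing does. The step sizes, iteration budget, and stopping rule below are assumptions, not the page's exact implementation.

```python
def refine(f, gamma, beta, step=0.1, iters=50):
    """Greedy coordinate refinement of a (gamma, beta) pair.
    f(gamma, beta) is the score to maximize (e.g. expected cut)."""
    best = f(gamma, beta)
    for _ in range(iters):
        improved = False
        for dg, db in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = f(gamma + dg, beta + db)
            if cand > best:
                gamma, beta, best = gamma + dg, beta + db, cand
                improved = True
        if not improved:
            step *= 0.5          # nothing helped: look on a finer scale
            if step < 1e-3:
                break
    return gamma, beta, best
```

Because each probe is one exact depth-1 evaluation on a small graph, this stays fast enough to run in the browser.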
Interpretation: look at every edge in the graph one by one. If its two endpoints end up on opposite sides of the partition, that edge contributes 1. If they land on the same side, it contributes 0. Add those edge contributions together, and you get the cut value.
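The edge-by-edge counting described above is a one-line function. The triangle example is illustrative only:

```python
def cut_value(edges, assignment):
    """Cut value of a partition: each edge contributes 1 iff its endpoints
    land on opposite sides (assignment maps node index -> 0 or 1)."""
    return sum(1 for i, j in edges if assignment[i] != assignment[j])

# Triangle graph: isolating node 0 cuts 2 of the 3 edges.
edges = [(0, 1), (1, 2), (0, 2)]
print(cut_value(edges, [0, 1, 1]))  # -> 2
```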
Interpretation: start from a state that treats all possible graph partitions equally. Then apply one cost step controlled by γ, which biases the state toward better cuts, and one mixer step controlled by β, which redistributes probability across candidate solutions. That final quantum state is the one evaluated by the demo.
Interpretation: this is the average cut score produced by the final QAOA state. The algorithm assigns probabilities to many possible graph partitions, computes each partition's cut value, and then takes the probability-weighted average. Better angles are the ones that push more probability onto high-cut solutions, which makes this expected value larger.
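The three interpretations above can be sketched together as an exact depth-1 simulation, assuming the demo's general structure (uniform start, cost phase, single-qubit mixer). This is a reference sketch for small n, not the demo's actual simulator:

```python
import numpy as np

def expected_cut(n, edges, gamma, beta):
    """Exact depth-1 QAOA expected cut via statevector simulation (small n).
    Uniform superposition -> cost phase e^{-i*gamma*C} -> mixer e^{-i*beta*X}
    on each qubit -> probability-weighted average of per-bitstring cuts."""
    dim = 1 << n
    cuts = np.array([sum(((z >> i) & 1) != ((z >> j) & 1) for i, j in edges)
                     for z in range(dim)], dtype=float)
    state = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)
    state = state * np.exp(-1j * gamma * cuts)     # cost layer (diagonal)
    c, s = np.cos(beta), -1j * np.sin(beta)        # e^{-i*beta*X} matrix entries
    for q in range(n):                             # mixer on each qubit
        psi = state.reshape(dim >> (q + 1), 2, 1 << q)
        a = psi[:, 0, :].copy()
        psi[:, 0, :] = c * a + s * psi[:, 1, :]
        psi[:, 1, :] = s * a + c * psi[:, 1, :]
        state = psi.reshape(dim)
    return float(np.sum(np.abs(state) ** 2 * cuts))

triangle = [(0, 1), (1, 2), (0, 2)]
# At gamma = beta = 0 the state stays uniform, so the expected cut is the
# average over all partitions: |E| / 2 = 1.5 for the triangle.
print(expected_cut(3, triangle, 0.0, 0.0))
```

Sweeping (γ, β) over a grid and watching this value rise above |E|/2 is exactly what "better angles push more probability onto high-cut solutions" means.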
Use this section as the single launch point for the three full notebook walkthroughs, each opened inline or in a separate tab.
Graphs serve as the common representation for both branches. On the quantum side, the repository's user-facing Adaptive Quantum GCN predicts QAOA warm-start angles (γ, β) from graph structure, reducing dependence on repeated classical search. On the biomedical side, Adaptive BioGCN provides a stable benchmark tier while ResidualClinicalGCN captures the strongest held-out CTG operating point.
| Metric | Value |
|---|---|
| Held-out QAOA ratio (adaptive vs classical) | 0.868 vs 0.869 |
| Held-out optimization quality retained | 99.95% |
| Median adaptive warm-start speedup | ~1.41 × 10^4× |
| Best CTG held-out accuracy | 98.8% |
| Best biomedical ROC AUC | 0.978 |
| Adaptive BioGCN benchmark robustness | 95.49% ± 0.97% |
| Integrated notebook warm-start speedup | 35,070× |
```shell
pip install -r requirements.txt
python -m src.train \
  --dataset-size 20 --n 6 \
  --p 1 --epochs 10 \
  --model-path model.pt
python -m src.server
```