mirror of
https://github.com/saymrwulf/autoresearch-quantum.git
synced 2026-05-14 20:37:51 +00:00
Add OVERVIEW.md for each plan: thematic summaries of the [[4,2,2]] magic state pipeline
Each overview distills the plan's building blocks into a narrative centered on magic state creation as a tunable, optimizable process for Toffoli gate scalability.
This commit is contained in:
parent 18a7dc87be
commit a2d9120960
4 changed files with 151 additions and 0 deletions
35
notebooks/plan_a/OVERVIEW.md
Normal file
@@ -0,0 +1,35 @@
# Plan A — Sequential: Building the Optimiser From the Ground Up
## Overarching Theme
Plan A is the pedagogical backbone. It takes the learner from "what is a magic state?" to "a machine optimises its preparation automatically" in three sequential notebooks, each building on the last. The [[4,2,2]] code serves as a minimal but complete laboratory: small enough to understand every qubit, large enough to exhibit real error-detection and the quality-vs-cost tensions that make magic state distillation hard at scale.
## The Three Building Blocks
### Notebook 1: What Is an Encoded Magic State?
Establishes the full physical foundation. The T-state |T⟩ = (|0⟩ + e^{iπ/4}|1⟩)/√2 is the non-Clifford resource that breaks classical simulability (by the Gottesman-Knill theorem, Clifford-only circuits are efficiently simulable classically) and enables universal quantum computation. But a bare qubit is fragile — no cloning, no majority vote, and measurement destroys the state.
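These expectation values can be checked directly in NumPy (a standalone sketch, independent of the notebook code): for |T⟩, ⟨X⟩ = ⟨Y⟩ = 1/√2 and ⟨Z⟩ = 0.

```python
import numpy as np

# |T> = (|0> + e^{i*pi/4}|1>) / sqrt(2): the non-Clifford resource state
T = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def expval(psi, op):
    """Expectation value <psi|op|psi> (real part)."""
    return np.real(psi.conj() @ op @ psi)

print(expval(T, X))  # ≈ 0.7071 (1/sqrt(2))
print(expval(T, Y))  # ≈ 0.7071 (1/sqrt(2))
print(expval(T, Z))  # ≈ 0.0
```

These single-qubit numbers are exactly what the logical-operator expectations ⟨X_L⟩ and ⟨Y_L⟩ should reproduce after encoding, which is what the magic witness later checks.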
The [[4,2,2]] code spreads the T-state across 4 physical qubits so that no single qubit carries the information alone. Stabiliser measurements (XXXX, ZZZZ) act as quantum checksums: eigenvalue +1 means "no error detected." Every single-qubit Pauli error is caught. Ancilla-based syndrome extraction reads the checksums without collapsing the encoded state, and postselection discards flagged shots.
**Key insight:** Three seed styles (h_p, ry_rz, u_magic) and two encoder styles (cx_chain, cz_compiled) all produce the same logical state. The choice is pure engineering — which transpiles to fewer noisy gates on your hardware.
### Notebook 2: How Do You Know If It Worked?
Introduces the measurement and scoring apparatus. Under IBM Brisbane noise, the ideal W=1.0 and 100% acceptance degrade. The magic witness formula W = magic_factor × spectator_factor distills three logical-operator expectations into a single quality number. But quality alone is not enough — you need to account for the shots lost to postselection and the circuit resources consumed.
The scoring formula score = quality × acceptance / cost captures this three-way tension. Different scoring functions (weighted acceptance cost vs. factory throughput) rank configurations differently depending on whether you optimise per-state quality or production-line yield. Dominant failure modes (postselection collapse, witness erosion, cost explosion) classify the biggest weakness of each configuration.
**Key insight:** The score is not a single metric but a *ratio* that forces trade-offs. Stricter verification improves quality but crashes acceptance. More complex circuits reduce noise sensitivity but inflate cost. The scoring formula surfaces whichever factor is the current bottleneck.
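A toy version of that ratio makes the tension concrete (function and numbers are illustrative, not the harness's actual scorer):

```python
def score(quality: float, acceptance: float, cost: float) -> float:
    """Toy scorer: per-state quality, weighted by yield, per unit circuit cost."""
    return quality * acceptance / cost

# Stricter verification raises quality but crashes acceptance; the ratio decides.
lenient = score(quality=0.70, acceptance=0.90, cost=8.0)   # 0.07875
strict  = score(quality=0.85, acceptance=0.40, cost=10.0)  # 0.034
print(lenient > strict)  # True
```

Despite lower per-state quality, the lenient configuration wins here because it keeps far more shots — exactly the bottleneck-surfacing behaviour described above.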
### Notebook 3: The Ratchet Learns For You
Closes the loop with automated search. The incumbent-challenger model is monotonic: the best-so-far configuration never gets worse. NeighborWalk changes one parameter at a time (systematic, blind to interactions). RandomCombo mutates multiple parameters (discovers synergies). LessonGuided uses fix/avoid rules from previous rungs to bias search toward promising regions.
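A minimal sketch of the incumbent-challenger loop, assuming a hypothetical `evaluate` callable and the seed/encoder axes named above (everything else is illustrative):

```python
import random

# Two of the real axes from the notebooks; the rest of the space is omitted.
SPACE = {
    "seed_style": ["h_p", "ry_rz", "u_magic"],
    "encoder": ["cx_chain", "cz_compiled"],
}

def neighbor_walk(incumbent):
    """NeighborWalk: mutate exactly one parameter of the incumbent."""
    cfg = dict(incumbent)
    axis = random.choice(list(SPACE))
    cfg[axis] = random.choice([v for v in SPACE[axis] if v != cfg[axis]])
    return cfg

def ratchet(evaluate, incumbent, steps=20, margin=0.01):
    """Promote a challenger only if it beats the incumbent by `margin`.
    The best-so-far configuration never gets worse (monotonic)."""
    best = evaluate(incumbent)
    for _ in range(steps):
        challenger = neighbor_walk(incumbent)
        s = evaluate(challenger)
        if s > best + margin:
            incumbent, best = challenger, s
    return incumbent, best
```

RandomCombo and LessonGuided differ only in how challengers are generated (multi-axis mutation, rule-biased sampling); the promotion logic stays the same.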
Cross-rung propagation transfers the winner and accumulated lessons forward, and search space narrowing prunes values that consistently hurt. Transfer evaluation across different backend noise profiles ensures the ratchet learned general principles, not hardware-specific quirks.
**Key insight:** The ratchet compresses hours of manual parameter exploration into minutes of automated search. Each rung produces human-readable lessons and machine-readable rules that make future exploration more efficient — a self-improving loop.
## The Arc
Plan A's progression mirrors the research pipeline itself: understand the physics (what are we building?), build the instrumentation (how do we measure success?), then automate the search (let the machine find the best settings). By the end, the learner has seen every number the harness produces and knows exactly what it means.
31
notebooks/plan_b/OVERVIEW.md
Normal file
@@ -0,0 +1,31 @@
# Plan B — Spiral: Three Passes Through the Same Machine
## Overarching Theme
Plan B is built on a single pedagogical bet: you learn a complex system best by seeing it three times at increasing depth. The [[4,2,2]] encoded magic state pipeline — from T-state preparation through error detection to automated ratchet optimisation — is presented as a single notebook with three concentric passes. Each pass covers the *entire* system, but what was a black box in Pass 1 becomes transparent machinery in Pass 2 and a tool you drive in Pass 3.
## The Three Building Blocks
### Pass 1: The 5-Minute Demo — *Run the machine, see it work, get curious.*
The learner loads a config, runs a ratchet step, and sees a winner emerge with a score and a lesson narrative. No explanation — just output. A bar chart shows score variation across experiments. The point: something interesting is happening, and every number on screen is a question waiting to be asked.
**What you see:** A JSON step result, a winning margin, a lesson string, a score landscape bar chart. None of it makes sense yet — and that's the design.
### Pass 2: Opening the Black Box — *Build understanding from the ground up, but always connecting back.*
Now we rewind. The T-state is built from scratch on the Bloch sphere. The [[4,2,2]] encoding spreads it across 4 qubits. Stabilisers act as checksums. Syndrome extraction via ancillas enables postselection without destroying the state. Noise from IBM Brisbane degrades acceptance and witness. The magic witness formula and scoring function are computed by hand.
Every number from Pass 1 is re-encountered with full physical meaning: "that bar chart score of 0.058 came from quality 0.73 × acceptance 0.77 / cost 9.54." The challengers from the ratchet step are shown as single-parameter mutations of the incumbent. Promotion requires beating the incumbent by a margin.
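Re-deriving that number from the quoted components (the on-screen 0.058 matches up to display rounding):

```python
quality, acceptance, cost = 0.73, 0.77, 9.54  # Pass 1 numbers quoted above
score = quality * acceptance / cost
print(f"{score:.4f}")  # ≈ 0.0589
```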
**Key insight:** Pass 2 doesn't introduce new material — it reveals the structure that was always there in the Pass 1 output. The learner's own curiosity from Pass 1 drives engagement.
### Pass 3: Making It Your Own — *Modify parameters, compare scoring functions, design experiments.*
The learner now drives: narrowing the search space, comparing WAC (weighted acceptance cost) vs. factory-throughput scoring, running multi-step rungs with patience, visualising exploration trajectories, and testing transfer across backends. Code challenges ask the learner to compute cumulative best scores, create search rules, and design custom experiments that compete against the ratchet's winner.
**Key insight:** The same system that was opaque in Pass 1 and explained in Pass 2 is now a tool the learner can bend to their own questions. The spiral completes: each pass through the same material reveals structure that was invisible before.
## The Arc
The spiral structure mirrors how the ratchet itself works — each step builds on what came before, ratcheting toward deeper understanding. Pass 1 creates questions. Pass 2 answers them. Pass 3 shows that the answers are levers you can pull. The magic state preparation pipeline is both the subject and the metaphor.
39
notebooks/plan_c/OVERVIEW.md
Normal file
@@ -0,0 +1,39 @@
# Plan C — Parallel Tracks: Choose Your Own Depth
## Overarching Theme
Plan C decomposes the [[4,2,2]] encoded magic state pipeline into three independent tracks — physics, engineering, and search — connected by a shared interactive dashboard. The learner chooses their entry point based on what they already know or what they most want to understand. Each track goes deep into one dimension of the problem; the dashboard lets them see how changes in one dimension affect the others.
This structure reflects a real truth about magic state preparation for Toffoli scalability: the physics (can the code protect the state?), the engineering (can the circuit survive hardware noise?), and the optimisation (can we find the best settings automatically?) are separable concerns that interact through a single scoring function.
## The Four Building Blocks
### Dashboard: The Control Room
An interactive widget-based interface where the learner adjusts parameters (seed style, encoder, verification, postselection, optimisation level, shots) and immediately sees the effect on four panels: circuit diagram, measurement histogram, quality metrics, and cost stats. The "Compare" button overlays multiple runs for side-by-side analysis.
**Role:** The dashboard is the empirical workbench. Each track sends the learner back here with specific "Dashboard Exercises" that make abstract concepts concrete.
### Track A: Physics — *The quantum error-detecting code*
Pure quantum mechanics, no optimisation. Covers the Eastin-Knill theorem (why you need magic states), the T-state on the Bloch sphere, three equivalent preparations, the [[4,2,2]] stabiliser code (XXXX, ZZZZ), logical operators (X_L, Y_L, Z_spectator), the encoding circuit, complete error detection (12/12 single-qubit errors), and the magic witness formula W with its sharp sensitivity peak.
**Key insight:** The witness formula's sharp peak at ⟨X_L⟩ = ⟨Y_L⟩ = 1/√2 means even moderate noise produces a noticeable drop — this sensitivity is what makes the witness a good diagnostic and what makes optimisation worthwhile.
### Track B: Engineering — *Noise, transpilation, and cost*
The bridge between ideal theory and noisy hardware. Covers noise models (IBM Brisbane's gate errors, readout errors, decoherence), transpilation at different optimisation levels (logical-to-physical gate mapping, SWAP insertion), the cost model (2Q gates dominate at 10–100× the error rate of 1Q gates), acceptance rate under noise, noisy fidelity via density matrix, failure mode classification, and the full scoring formula with its three-way tension.
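A minimal sketch of such a weighted gate-count cost (the 10× weight is illustrative, taken from the low end of the quoted range; the harness's actual cost model may differ):

```python
def circuit_cost(n_1q: int, n_2q: int, w_2q: float = 10.0) -> float:
    """Weighted gate count: 2Q gates carry ~10-100x the error of 1Q gates."""
    return n_1q + w_2q * n_2q

# A transpiler pass might trade a few extra 1Q gates for fewer 2Q gates:
print(circuit_cost(n_1q=20, n_2q=8))  # 100.0
print(circuit_cost(n_1q=26, n_2q=6))  # 86.0
```

Under this weighting the second circuit is cheaper despite having more gates in total — which is why raw gate count is a poor proxy for noise exposure.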
Introduces factory throughput as an alternative scorer that penalises circuit cost more heavily — the right choice when you're running a magic state production pipeline rather than optimising individual states.
**Key insight:** Higher transpiler optimisation levels generally reduce gate count, but the effect is non-monotonic — aggressive routing can place operations on noisier qubit connections. The "best" level is an empirical question, not a theoretical one.
### Track C: Search — *Optimisation and the ratchet*
The automation layer. Covers the parameter space (6 dimensions, ~324 combinations), the incumbent-challenger model (monotonic guarantee), NeighborWalk (single-axis, systematic), RandomCombo (multi-axis, discovers interactions), evaluation and promotion (cheap margin threshold), full rungs with patience, lesson extraction (fix/avoid rules with confidence), LessonGuided search (rule-biased challenger generation), search space narrowing, cross-rung propagation, and transfer evaluation.
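The combinatorics can be sketched with `itertools.product`. Only the seed and encoder values come from the notebooks; the remaining axes and their sizes here are illustrative choices whose product matches the quoted ~324:

```python
from itertools import product
from math import prod

# Hypothetical 6-axis space: 3 * 2 * 2 * 3 * 3 * 3 = 324 combinations.
SPACE = {
    "seed_style": ["h_p", "ry_rz", "u_magic"],   # 3 (from the notebooks)
    "encoder": ["cx_chain", "cz_compiled"],      # 2 (from the notebooks)
    "verification": ["off", "on"],               # 2 (illustrative)
    "postselection": ["off", "soft", "strict"],  # 3 (illustrative)
    "opt_level": [1, 2, 3],                      # 3 (illustrative)
    "shots": [1024, 2048, 4096],                 # 3 (illustrative)
}

print(prod(len(v) for v in SPACE.values()))  # 324
configs = [dict(zip(SPACE, values)) for values in product(*SPACE.values())]
```

At 324 points the space is technically enumerable, but each evaluation costs real (noisy) shots — which is exactly why margin-gated search beats exhaustive sweeps.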
**Key insight:** The three search strategies form a progression: NeighborWalk identifies which individual parameter matters most, RandomCombo finds multi-parameter synergies, and LessonGuided focuses future search using accumulated evidence. Together they explore the space efficiently without exhaustive enumeration.
## The Arc
Plan C trusts the learner to navigate. A physicist can start at Track A and skip to the dashboard to see the numbers they just derived change under noise. An engineer can start at Track B and understand cost before caring about stabiliser algebra. A computer scientist can start at Track C and see the optimisation loop before understanding what's being optimised. The dashboard unifies all three perspectives into a single interactive view — the same view the ratchet uses internally to evaluate experiments.
46
notebooks/plan_d/OVERVIEW.md
Normal file
@@ -0,0 +1,46 @@
# Plan D — Hypothesis-Driven: Precision-Engineering the Magic State Factory
## Overarching Theme
The Toffoli gate — the workhorse of fault-tolerant quantum arithmetic — consumes magic states. Every Toffoli decomposition burns multiple |T⟩ states via gate teleportation. At scale, this creates a **supply-chain bottleneck**: a useful quantum algorithm may need millions of high-fidelity magic states, each of which must be prepared, encoded, verified, and distilled before consumption.
Plan D puts the **preparation stage** of that pipeline under a microscope using the experimental method: hypothesis → claims → proof.
## The Three Building Blocks
### Experiment 1: Protection — *Can we even build the product?*
Proves the [[4,2,2]] code can encode |T⟩ with W=1.0, detect all 12 single-qubit errors, and postselect cleanly. This is the **existence proof**: the factory blueprint works in principle.
- Magic witness W = 1.0 (perfect preservation)
- Both stabilisers (XXXX, ZZZZ) at +1
- 12/12 single-qubit Pauli errors detected
- 100% acceptance on ideal simulator
### Experiment 2: Noise — *How does the factory perform under real conditions?*
Under IBM Brisbane noise, quality and yield both drop. But critically, the score varies 2–5× across parameter choices (transpiler level alone). This proves that **minor knob-turns in the preparation circuit have outsized effects on output quality** — the creation process is sensitive enough that optimisation is both necessary and worthwhile.
- Noise reduces W below 1.0 and acceptance below 100%
- The scoring formula score = quality × acceptance / cost captures the three-way trade-off
- Parameter sweep reveals significant score variation across optimisation levels
### Experiment 3: Optimisation — *Can we automate the tuning?*
A ratchet optimizer searches the 6+ dimensional parameter space (seed style, encoder, verification, postselection, transpiler settings), monotonically improving and extracting fix/avoid rules. The winning configuration transfers to unseen backends — meaning it learned **general principles of magic state preparation**, not noise-specific hacks.
- Ratchet improves monotonically (incumbent never gets worse)
- Actionable lessons extracted (fix/avoid rules with confidence scores)
- Winning configuration beats the manual default
- Configuration transfers to different noise contexts
## Why This Matters for Toffoli Scalability
The Toffoli consumption problem is ultimately a **throughput × fidelity** problem. If each magic state arriving at the Toffoli teleportation step is slightly noisier than needed, you either:
- Need more rounds of distillation (exponential overhead), or
- Accept lower gate fidelity (computation fails)
By showing that small adjustments to the preparation circuit — encoder style, verification strategy, transpiler level — produce 2–5× score differences, Plan D demonstrates that **the bottleneck is addressable at the source**. You don't just distill harder; you prepare smarter. The ratchet automates finding those smarter settings, and the fact that its lessons transfer means you can pre-optimize before ever touching real hardware.
**In short:** Plan D proves that magic state creation is a tunable, optimizable process — not a fixed-cost overhead — and that's the lever for making Toffoli-heavy computation scale.