{ "cells": [ { "cell_type": "markdown", "id": "751fe8cc", "metadata": {}, "source": [ "# Notebook 3: The Ratchet Learns For You\n", "\n", "**Plan A \u2014 Automated Search**\n", "\n", "You now know what an encoded magic state is (Notebook 1) and how to measure its quality (Notebook 2). This notebook shows how the **autoresearch ratchet** automatically explores the parameter space to find the best circuit configuration.\n", "\n", "**What you will learn:**\n", "1. The incumbent-challenger optimization model\n", "2. How challengers are generated (neighbor walk, random combo, lesson-guided)\n", "3. How the ratchet selects winners and extracts lessons\n", "4. Cross-rung propagation and search space narrowing" ] }, { "cell_type": "code", "execution_count": 1, "id": "3f9b56a6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "All imports successful.\n" ] } ], "source": [ "%matplotlib inline\n", "import sys, warnings, tempfile\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from math import sqrt\n", "\n", "from autoresearch_quantum.models import (\n", " ExperimentSpec, RungConfig, EvaluationMetrics,\n", " QualityWeights, CostWeights, ScoreConfig, SearchSpaceConfig,\n", " TierPolicyConfig, HardwareConfig, LessonFeedback, SearchRule,\n", ")\n", "from autoresearch_quantum.execution.local import LocalCheapExecutor\n", "from autoresearch_quantum.search.challengers import (\n", " generate_neighbor_challengers, mutation_summary, GeneratedChallenger,\n", ")\n", "from autoresearch_quantum.search.strategies import (\n", " NeighborWalk, RandomCombo, LessonGuided, CompositeGenerator,\n", " default_composite, StrategyWeight,\n", ")\n", "from autoresearch_quantum.ratchet.runner import AutoresearchHarness\n", "from autoresearch_quantum.persistence.store import ResearchStore\n", "from autoresearch_quantum.config import load_rung_config\n", "from autoresearch_quantum.lessons.extractor 
import extract_rung_lesson\n", "from autoresearch_quantum.lessons.feedback import (\n", " extract_search_rules, narrow_search_space, build_lesson_feedback,\n", ")\n", "from autoresearch_quantum.execution.transfer import TransferEvaluator\n", "\n", "print(\"All imports successful.\")" ] }, { "cell_type": "code", "execution_count": 2, "id": "7cb035ce", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Learning tracker active.\n" ] } ], "source": [ "from autoresearch_quantum.teaching import LearningTracker\n", "from autoresearch_quantum.teaching.assess import quiz, predict_choice, reflect, order, checkpoint_summary\n", "tracker = LearningTracker(\"plan_a_03\")\n", "print(\"Learning tracker active.\")" ] }, { "cell_type": "markdown", "id": "5fefc4e4", "metadata": {}, "source": [ "---\n", "## 1. The Incumbent-Challenger Model\n", "\n", "The ratchet keeps a **best-so-far** configuration called the **incumbent**. Each step:\n", "\n", "1. Generate **challengers** \u2014 new configurations that differ from the incumbent in one or more parameters\n", "2. Evaluate each challenger on the cheap tier (noisy simulator)\n", "3. If any challenger beats the incumbent by a margin, it becomes the new incumbent\n", "4. Repeat until patience runs out\n", "\n", "This is a form of **local search** \u2014 like hill climbing in parameter space." 
] }, { "cell_type": "code", "execution_count": 3, "id": "e563c118", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Bootstrap incumbent:\n", " seed_style: h_p\n", " encoder_style: cx_chain\n", " verification: both\n", " postselection: all_measured\n", " optimization_level: 2\n", " target_backend: fake_brisbane\n", "\n", "Search space dimensions:\n", " seed_style: ['h_p', 'ry_rz', 'u_magic']\n", " encoder_style: ['cx_chain', 'cz_compiled']\n", " verification: ['both', 'z_only', 'x_only']\n", " postselection: ['all_measured', 'z_only', 'none']\n", " ancilla_strategy: ['dedicated_pair', 'reused_single']\n", " optimization_level: [1, 2, 3]\n", "\n", "Max challengers per step: 8\n" ] } ], "source": [ "# Load the rung1 configuration\n", "rung_config = load_rung_config(\"../../configs/rungs/rung1.yaml\")\n", "\n", "# The bootstrap incumbent\n", "incumbent_spec = rung_config.bootstrap_incumbent\n", "print(\"Bootstrap incumbent:\")\n", "print(f\" seed_style: {incumbent_spec.seed_style}\")\n", "print(f\" encoder_style: {incumbent_spec.encoder_style}\")\n", "print(f\" verification: {incumbent_spec.verification}\")\n", "print(f\" postselection: {incumbent_spec.postselection}\")\n", "print(f\" optimization_level: {incumbent_spec.optimization_level}\")\n", "print(f\" target_backend: {incumbent_spec.target_backend}\")\n", "print(f\"\\nSearch space dimensions:\")\n", "for dim, values in rung_config.search_space.dimensions.items():\n", " print(f\" {dim}: {values}\")\n", "print(f\"\\nMax challengers per step: {rung_config.search_space.max_challengers_per_step}\")" ] }, { "cell_type": "markdown", "id": "d4044fc8", "metadata": {}, "source": [ "### The ratchet guarantee\n", "\n", "The key property: the incumbent **never gets worse**. A challenger must demonstrably beat the incumbent to replace it." 
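, "\n", "As a minimal sketch of that acceptance rule \u2014 the margin value here is hypothetical; the real threshold comes from the rung's score configuration \u2014 promotion is just a strict comparison with a safety margin:\n", "\n", "```python\n", "def challenger_wins(incumbent_score, challenger_score, margin=0.002):\n", "    # Hypothetical margin; ties and tiny gains never displace the incumbent\n", "    return challenger_score > incumbent_score + margin\n", "\n", "print(challenger_wins(0.90, 0.95))   # clear improvement\n", "print(challenger_wins(0.90, 0.901))  # within the margin\n", "print(challenger_wins(0.90, 0.89))   # regression\n", "```\n", "\n", "The margin guards against promoting a challenger on simulator shot noise alone: only a clear win moves the ratchet."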
] }, { "cell_type": "code", "execution_count": 4, "id": "1f48aa77", "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "47c3d5bbac3d4fd4ab7c8c57b4432b17", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HTML(value='