mirror of
https://github.com/saymrwulf/NTT-learning.git
synced 2026-05-14 20:47:53 +00:00
Align Jupyter lifecycle and add schedule visuals
This commit is contained in:
parent
edf1b69cf5
commit
13dd499166
18 changed files with 860 additions and 174 deletions
1 .gitignore (vendored)
@@ -1,6 +1,7 @@
.venv/
__pycache__/
*.py[cod]
.logs/
.run/
.cache/
.ipython/
43 README.md

@@ -6,12 +6,16 @@ The project is built around one supported learner route:

 1. `notebooks/START_HERE.ipynb`
 2. `notebooks/COURSE_BLUEPRINT.ipynb`
-3. `notebooks/foundations/01_convolution_to_toy_ntt/lecture.ipynb`
-4. `notebooks/foundations/01_convolution_to_toy_ntt/lab.ipynb`
-5. `notebooks/foundations/01_convolution_to_toy_ntt/problems.ipynb`
-6. `notebooks/foundations/01_convolution_to_toy_ntt/studio.ipynb`
+3. Each bundle in `Lecture -> Lab -> Problems -> Studio` order:
+   - `notebooks/foundations/01_convolution_to_toy_ntt/`
+   - `notebooks/foundations/02_negative_wrapped_ntt/`
+   - `notebooks/butterfly_mechanics/03_fast_forward_ct/`
+   - `notebooks/butterfly_mechanics/04_fast_inverse_gs/`
+   - `notebooks/kyber_mapping/05_kyber_ntt_and_base_multiplication/`
+   - `notebooks/professional/06_debugging_ntt_failures/`
+4. `notebooks/COURSE_COMPLETE.ipynb`

-The current build intentionally starts with convolution, negacyclic folding, a tiny toy NTT, and butterfly mechanics before any Kyber-specific implementation details.
+The current build covers convolution, negacyclic folding, direct `NTTψ` / `INTTψ`, fast CT/GS butterfly schedules, ordering and scaling, Kyber modulus reality, base multiplication, and debugging fingerprints.

 ## Notebook Contract
@@ -30,27 +34,30 @@ Route notebooks stay pure route notebooks. They contain `META` and `MANDATORY` c

 ## Local Operations

-All repo-local operations live in `scripts/`:
+The lifecycle source of truth is `scripts/app.sh`:

-- `scripts/bootstrap.sh`
-- `scripts/start.sh`
-- `scripts/stop.sh`
-- `scripts/restart.sh`
-- `scripts/status.sh`
-- `scripts/reset-state.sh`
-- `scripts/validate.sh`
+- `scripts/app.sh bootstrap`
+- `scripts/app.sh start`
+- `scripts/app.sh start --foreground`
+- `scripts/app.sh stop`
+- `scripts/app.sh restart`
+- `scripts/app.sh status`
+- `scripts/app.sh logs -f`
+
+Compatibility wrappers remain in `scripts/bootstrap.sh`, `scripts/start.sh`, `scripts/stop.sh`, `scripts/restart.sh`, `scripts/status.sh`, `scripts/reset-state.sh`, and `scripts/validate.sh`.

 Typical first run:

 ```bash
-scripts/bootstrap.sh
-scripts/validate.sh
-scripts/start.sh
+bash scripts/app.sh bootstrap
+bash scripts/app.sh validate
+bash scripts/app.sh start --no-open
 ```

 ## Notes

 - The repo uses a local `.venv`.
 - Jupyter state is isolated inside the repo with `.jupyter_config`, `.jupyter_data`, `.jupyter_runtime`, `.ipython`, `.cache`, and `.logs`.
 - The Jupyter kernel is installed into the venv with `--sys-prefix`, not `--user`.
 - Validation is designed to work with the standard library first, so structure and notebook execution checks can run before richer notebook tooling is installed.
-- JupyterLab is declared in `pyproject.toml` and installed by `scripts/bootstrap.sh`.
+- JupyterLab and ipykernel are declared in `pyproject.toml` and installed by `scripts/app.sh bootstrap`.
@@ -24,6 +24,20 @@
  },
  "source": "## MANDATORY | difficulty 3 | CT Is A Schedule For Reusing Work\n\nThe point of the CT butterfly is not to invent a new transform.\nThe point is to compute the same transform by reusing shared bracket terms instead of recomputing everything from scratch.\n"
 },
+{
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+  "pedagogy": {
+   "role": "mandatory",
+   "difficulty": 3,
+   "kind": "demo",
+   "title": "See The Schedule Geometry Before The Numbers"
+  }
+ },
+ "outputs": [],
+ "source": "# MANDATORY | difficulty 3 | See The Schedule Geometry Before The Numbers\n\nfrom IPython.display import display\n\nfrom ntt_learning.visuals import plot_stage_pairing_map, plot_stage_schedule\n\ndisplay(plot_stage_schedule(8, title=\"CT schedule skeleton for n=8\"))\ndisplay(plot_stage_pairing_map(8, 2, title=\"Stage 1 pairings for n=8\"))\ndisplay(plot_stage_pairing_map(8, 4, title=\"Stage 2 pairings for n=8\"))\ndisplay(plot_stage_pairing_map(8, 8, title=\"Stage 3 pairings for n=8\"))\n"
+},
 {
  "cell_type": "code",
  "execution_count": null,
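The pairing geometry the new demo cell plots can be sketched without matplotlib. The `stage_pairings` function below is a hypothetical stand-in for `ntt_learning.toy_ntt.stage_pairings` (not shown in this diff), assuming the usual CT layout the notebook describes: inside each block, indices half a block apart talk to each other.

```python
# Hypothetical sketch of stage_pairings, assuming the standard CT layout:
# within each block of size block_size, index i pairs with i + block_size // 2.

def stage_pairings(length, block_size):
    half = block_size // 2
    pairs = []
    for block_start in range(0, length, block_size):
        for offset in range(half):
            # Each pair is one butterfly: two indices half a block apart.
            pairs.append((block_start + offset, block_start + offset + half))
    return pairs

for block_size in (2, 4, 8):
    print(block_size, stage_pairings(8, block_size))
```

Every stage touches all `n` indices exactly once, in `n/2` pairs, which is exactly the reuse the markdown cell is pointing at.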
@@ -24,6 +24,20 @@
  },
  "source": "## MANDATORY | difficulty 3 | Kyber Is Not \u201cJust Generic Negacyclic NTT With Big Numbers\u201d\n\nThe key arithmetic reality is:\n\n- Kyber v3 has `n = 256`\n- `q = 3329`\n- `256` divides `3328`\n- `512` does **not** divide `3328`\n\nThat means a primitive `256`-th root exists, but a primitive `512`-th root does not.\nSo the clean full-length `\u03c8` story from the toy negative-wrapped transform does not lift over unchanged.\n"
 },
+{
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+  "pedagogy": {
+   "role": "mandatory",
+   "difficulty": 3,
+   "kind": "demo",
+   "title": "See The Stage Skeleton For n=16 And n=256"
+  }
+ },
+ "outputs": [],
+ "source": "# MANDATORY | difficulty 3 | See The Stage Skeleton For n=16 And n=256\n\nfrom IPython.display import display\n\nfrom ntt_learning.visuals import plot_stage_pairing_map, plot_stage_schedule\n\ndisplay(plot_stage_schedule(16, title=\"Readable schedule skeleton for n=16\"))\ndisplay(plot_stage_pairing_map(16, 2, title=\"Stage 1 geometry for n=16\"))\ndisplay(plot_stage_pairing_map(16, 16, title=\"Final stage geometry for n=16\"))\ndisplay(plot_stage_schedule(256, title=\"Kyber-scale stage skeleton for n=256\"))\n"
+},
 {
  "cell_type": "code",
  "execution_count": null,
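The divisibility facts the Kyber cell states can be checked directly. The search for `zeta` below is an illustrative sketch (any element of multiplicative order exactly 256 will do), not a claim about which root the Kyber specification fixes.

```python
# Check the arithmetic reality from the cell above for Kyber v3: q = 3329.
q = 3329

# The multiplicative group mod q has order q - 1 = 3328 = 2**8 * 13.
assert (q - 1) % 256 == 0   # primitive 256-th roots of unity exist mod q
assert (q - 1) % 512 != 0   # ...but no primitive 512-th root exists

# Find some element of order exactly 256: z**256 == 1 but z**128 != 1.
zeta = next(z for z in range(2, q) if pow(z, 256, q) == 1 and pow(z, 128, q) != 1)

# Its 128-th power is a square root of 1 other than 1, hence -1 mod q.
print(zeta, pow(zeta, 128, q))
```

Because `zeta**128 == -1 (mod q)`, a 256-th root only supports the half-length negacyclic structure, which is why the full-length `ψ` trick from the toy transform does not lift over unchanged.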
@@ -43,6 +43,7 @@ NOTEBOOK_SEQUENCE = [
 ]

 REQUIRED_SCRIPT_NAMES = [
+    "app.sh",
     "bootstrap.sh",
     "start.sh",
     "stop.sh",
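The README's note that validation is stdlib-first suggests a check like the following. This is a hedged sketch, not the repo's actual test code; `missing_scripts` is a hypothetical helper, and the list reproduces only the script names visible in this hunk (the real `REQUIRED_SCRIPT_NAMES` continues past the truncation).

```python
# Hypothetical stdlib-only structure check, mirroring the names visible
# in the REQUIRED_SCRIPT_NAMES hunk above (the real list is longer).
from pathlib import Path

REQUIRED_SCRIPT_NAMES = ["app.sh", "bootstrap.sh", "start.sh", "stop.sh"]

def missing_scripts(repo_root):
    """Return the required script names absent from repo_root/scripts."""
    scripts = Path(repo_root) / "scripts"
    return [name for name in REQUIRED_SCRIPT_NAMES if not (scripts / name).is_file()]
```

Run against a checkout, an empty return value means the lifecycle scripts the README promises are all present.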
@@ -22,6 +22,7 @@ from .toy_ntt import (
     ntt_psi_matrix,
     pairwise_product_grid,
     pointwise_multiply,
+    stage_pairings,
     wraparound_contributions,
 )
@@ -380,6 +381,120 @@ def plot_butterfly_network(trace: TransformTrace, title: str | None = None):
    return fig


def plot_stage_pairing_map(
    length: int,
    block_size: int,
    *,
    title: str | None = None,
):
    """Plot which indices talk to each other in one butterfly stage."""
    if title is None:
        title = f"Stage Pairing Map (n={length}, block={block_size})"

    pairs = stage_pairings(length, block_size)
    fig, ax = plt.subplots(figsize=(10, max(4.2, length * 0.55)))
    ax.set_title(title, fontsize=14, fontweight="bold")
    ax.axis("off")

    for index in range(length):
        ax.text(
            0,
            -index,
            f"{index}",
            ha="center",
            va="center",
            fontsize=10,
            family="monospace",
            bbox={"boxstyle": "round,pad=0.25", "facecolor": "#edf6f9", "edgecolor": "#264653"},
        )

    colors = ["#264653", "#2a9d8f", "#e76f51", "#8d99ae", "#c1121f", "#3a86ff"]
    for pair_index, (left, right) in enumerate(pairs):
        color = colors[pair_index % len(colors)]
        ax.plot([0.6, 4.2], [-left, -right], color=color, linewidth=2.4)
        ax.text(
            5.4,
            -((left + right) / 2),
            f"{left} <-> {right}",
            ha="center",
            va="center",
            fontsize=9,
            family="monospace",
            bbox={"boxstyle": "round,pad=0.22", "facecolor": "#ffffff", "edgecolor": color},
        )

    ax.text(0, 1.0, "indices", ha="center", va="center", fontsize=11, fontweight="bold")
    ax.text(5.4, 1.0, "pairs", ha="center", va="center", fontsize=11, fontweight="bold")
    ax.set_xlim(-1.0, 6.8)
    ax.set_ylim(-length + 0.2, 1.8)
    fig.tight_layout()
    return fig


def plot_stage_schedule(length: int, title: str | None = None):
    """Plot the full stage schedule for a power-of-two transform length."""
    if length <= 0 or length & (length - 1):
        raise ValueError("plot_stage_schedule requires a power-of-two length")
    if title is None:
        title = f"Butterfly Stage Schedule For n={length}"

    stages = []
    block_size = 2
    stage_index = 1
    while block_size <= length:
        stages.append(
            {
                "stage": stage_index,
                "block_size": block_size,
                "pair_distance": block_size // 2,
                "pair_count": len(stage_pairings(length, block_size)),
            }
        )
        block_size *= 2
        stage_index += 1

    fig, ax = plt.subplots(figsize=(11, max(4.5, len(stages) * 0.95)))
    ax.set_title(title, fontsize=14, fontweight="bold")
    ax.axis("off")

    headers = ["stage", "block", "distance", "pairs", "what happens"]
    x_positions = [0, 2.0, 4.0, 6.0, 9.2]
    for x, header in zip(x_positions, headers):
        ax.text(x, 1.0, header, ha="center", va="center", fontsize=11, fontweight="bold")

    for row_index, row in enumerate(stages):
        y = -row_index
        explanation = f"indices {row['pair_distance']} apart talk inside blocks of {row['block_size']}"
        values = [
            str(row["stage"]),
            str(row["block_size"]),
            str(row["pair_distance"]),
            str(row["pair_count"]),
            explanation,
        ]
        for x, value in zip(x_positions, values):
            ax.text(
                x,
                y,
                value,
                ha="center",
                va="center",
                fontsize=10,
                family="monospace",
                bbox={
                    "boxstyle": "round,pad=0.24",
                    "facecolor": "#edf6f9" if x < 8 else "#ffffff",
                    "edgecolor": "#264653",
                    "linewidth": 1.0,
                },
            )

    ax.set_xlim(-1.0, 12.3)
    ax.set_ylim(-len(stages) + 0.2, 1.7)
    fig.tight_layout()
    return fig


def plot_stage(stage: TransformStage, title: str | None = None):
    """Plot one explicit butterfly stage with input and output rows."""
    if title is None:
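The table `plot_stage_schedule` draws can be cross-checked in plain text. The sketch below assumes the same pairing rule as the plotting code (pair distance is half the block size, and every stage has `n/2` pairs); it is an illustration, not part of `ntt_learning.visuals`.

```python
# Text-mode version of the stage schedule table, assuming the pairing
# rule used by plot_stage_schedule: distance = block_size // 2, and
# every stage of a length-n transform contains n // 2 pairs.
import math

def stage_schedule(length):
    rows = []
    for stage in range(1, int(math.log2(length)) + 1):
        block_size = 2 ** stage
        rows.append((stage, block_size, block_size // 2, length // 2))
    return rows

for stage, block, distance, pairs in stage_schedule(8):
    print(f"stage {stage}: block={block} distance={distance} pairs={pairs}")
```

For `n = 8` this yields three stages, matching the three pairing maps the demo cells display.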
@@ -9,6 +9,7 @@ description = "Local-first notebooks for learning the Number Theoretic Transform
 readme = "README.md"
 requires-python = ">=3.12"
 dependencies = [
+    "ipykernel>=6.29",
     "jupyterlab>=4.2,<5",
     "ipywidgets>=8.1,<9",
     "matplotlib>=3.9,<4",
@@ -21,4 +22,3 @@ dev = [

 [tool.setuptools]
 packages = ["ntt_learning"]
625 scripts/app.sh (new executable file)

@@ -0,0 +1,625 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
VENV_DIR="$PROJECT_ROOT/.venv"
PYTHON="$VENV_DIR/bin/python"
JUPYTER="$VENV_DIR/bin/jupyter"
START_NOTEBOOK="notebooks/START_HERE.ipynb"

LOG_DIR="$PROJECT_ROOT/.logs"
RUN_DIR="$PROJECT_ROOT/.run"
PID_FILE="$LOG_DIR/jupyter.pid"
LEGACY_PID_FILE="$RUN_DIR/jupyter.pid"
LOG_FILE="$LOG_DIR/jupyterlab.log"

JUPYTER_CONFIG_DIR="$PROJECT_ROOT/.jupyter_config"
JUPYTER_DATA_DIR="$PROJECT_ROOT/.jupyter_data"
JUPYTER_RUNTIME_DIR="$PROJECT_ROOT/.jupyter_runtime"
IPYTHONDIR="$PROJECT_ROOT/.ipython"
MPLCONFIGDIR="$PROJECT_ROOT/.cache/matplotlib"

usage() {
  cat <<'EOF'
Usage:
  bash scripts/app.sh bootstrap
  bash scripts/app.sh start [--foreground] [--no-open] [--port PORT]
  bash scripts/app.sh stop
  bash scripts/app.sh restart [start args...]
  bash scripts/app.sh status
  bash scripts/app.sh logs [-f]

Optional compatibility commands:
  bash scripts/app.sh validate
  bash scripts/app.sh reset-state
EOF
}

ensure_dirs() {
  mkdir -p \
    "$LOG_DIR" \
    "$RUN_DIR" \
    "$JUPYTER_CONFIG_DIR" \
    "$JUPYTER_DATA_DIR" \
    "$JUPYTER_RUNTIME_DIR" \
    "$IPYTHONDIR" \
    "$MPLCONFIGDIR"
}

set_jupyter_env() {
  export JUPYTER_CONFIG_DIR
  export JUPYTER_DATA_DIR
  export JUPYTER_RUNTIME_DIR
  export IPYTHONDIR
  export MPLCONFIGDIR
}
pid_alive() {
  local pid="${1:-}"
  [[ -n "$pid" ]] || return 1
  kill -0 "$pid" 2>/dev/null
}

clear_pid_files() {
  rm -f "$PID_FILE" "$LEGACY_PID_FILE"
}

write_pid_files() {
  local pid="$1"
  ensure_dirs
  printf '%s\n' "$pid" > "$PID_FILE"
  printf '%s\n' "$pid" > "$LEGACY_PID_FILE"
}

get_running_pid() {
  local candidate pid
  for candidate in "$PID_FILE" "$LEGACY_PID_FILE"; do
    [[ -f "$candidate" ]] || continue
    pid="$(<"$candidate")"
    if pid_alive "$pid"; then
      printf '%s\n' "$pid"
      return 0
    fi
    if runtime_json_alive_for_pid "$pid"; then
      printf '%s\n' "$pid"
      return 0
    fi
    rm -f "$candidate"
  done
  return 1
}

jupyter_installed() {
  [[ -x "$PYTHON" ]] || return 1
  "$PYTHON" -c "import jupyterlab, ipykernel" >/dev/null 2>&1
}

cleanup_stale_runtime() {
  [[ -x "$PYTHON" ]] || return 0
  ensure_dirs
  "$PYTHON" - "$JUPYTER_RUNTIME_DIR" <<'PY'
import json
import os
import sys
from pathlib import Path

runtime_dir = Path(sys.argv[1])
for path in runtime_dir.glob("jpserver-*.json"):
    try:
        payload = json.loads(path.read_text(encoding="utf-8"))
        pid = int(payload.get("pid", -1))
        os.kill(pid, 0)
    except (FileNotFoundError, ProcessLookupError, ValueError, json.JSONDecodeError, TypeError):
        path.unlink(missing_ok=True)
        path.with_name(f"{path.stem}-open.html").unlink(missing_ok=True)
    except PermissionError:
        pass
PY
}

latest_active_runtime_json() {
  [[ -x "$PYTHON" ]] || return 1
  "$PYTHON" - "$JUPYTER_RUNTIME_DIR" <<'PY'
import json
import os
import sys
from pathlib import Path

runtime_dir = Path(sys.argv[1])
paths = sorted(
    runtime_dir.glob("jpserver-*.json"),
    key=lambda candidate: candidate.stat().st_mtime,
    reverse=True,
)

for path in paths:
    try:
        payload = json.loads(path.read_text(encoding="utf-8"))
        pid = int(payload.get("pid", -1))
        os.kill(pid, 0)
    except (ProcessLookupError, ValueError, json.JSONDecodeError, TypeError, FileNotFoundError):
        continue
    except PermissionError:
        pass
    print(path)
    sys.exit(0)

sys.exit(1)
PY
}

runtime_json_for_pid() {
  local pid="${1:-}"
  local candidate
  [[ -n "$pid" ]] || return 1

  candidate="$JUPYTER_RUNTIME_DIR/jpserver-$pid.json"
  if [[ -f "$candidate" ]]; then
    printf '%s\n' "$candidate"
    return 0
  fi

  latest_active_runtime_json
}
runtime_json_alive_for_pid() {
  local pid="${1:-}"
  local runtime_json api_url

  [[ -n "$pid" ]] || return 1
  runtime_json="$JUPYTER_RUNTIME_DIR/jpserver-$pid.json"
  [[ -f "$runtime_json" ]] || return 1

  if ! command -v curl >/dev/null 2>&1; then
    return 0
  fi

  api_url="$(runtime_json_to_api_url "$runtime_json")" || return 1
  curl --silent --show-error --max-time 2 "$api_url" >/dev/null 2>&1
}

runtime_json_to_access_url() {
  local runtime_json="$1"
  "$PYTHON" -c '
import json
import sys

with open(sys.argv[1], encoding="utf-8") as handle:
    payload = json.load(handle)

url = payload["url"].rstrip("/")
token = payload.get("token", "")
target = f"{url}/lab/tree/notebooks/START_HERE.ipynb"
print(f"{target}?token={token}" if token else target)
' "$runtime_json"
}

runtime_json_to_api_url() {
  local runtime_json="$1"
  "$PYTHON" -c '
import json
import sys

with open(sys.argv[1], encoding="utf-8") as handle:
    payload = json.load(handle)

url = payload["url"].rstrip("/")
token = payload.get("token", "")
target = f"{url}/api"
print(f"{target}?token={token}" if token else target)
' "$runtime_json"
}

runtime_json_to_port() {
  local runtime_json="$1"
  "$PYTHON" -c '
import json
import sys

with open(sys.argv[1], encoding="utf-8") as handle:
    payload = json.load(handle)

print(payload.get("port", ""))
' "$runtime_json"
}

server_access_url() {
  local runtime_json
  if [[ $# -gt 0 ]]; then
    runtime_json="$(runtime_json_for_pid "$1")" || return 1
  else
    runtime_json="$(latest_active_runtime_json)" || return 1
  fi
  runtime_json_to_access_url "$runtime_json"
}

server_port() {
  local runtime_json
  if [[ $# -gt 0 ]]; then
    runtime_json="$(runtime_json_for_pid "$1")" || return 1
  else
    runtime_json="$(latest_active_runtime_json)" || return 1
  fi
  runtime_json_to_port "$runtime_json"
}

server_ready() {
  local pid="$1"
  local runtime_json api_url

  runtime_json="$(runtime_json_for_pid "$pid")" || return 1
  if ! command -v curl >/dev/null 2>&1; then
    return 0
  fi

  api_url="$(runtime_json_to_api_url "$runtime_json")" || return 1
  curl --silent --show-error --max-time 2 "$api_url" >/dev/null 2>&1
}

find_project_jupyter_pids() {
  local process_table
  process_table="$(ps -axo pid=,command= 2>/dev/null || true)"
  [[ -n "$process_table" ]] || return 0

  printf '%s\n' "$process_table" | awk -v root="$PROJECT_ROOT" 'index($0, "jupyter") && (index($0, "--ServerApp.root_dir=" root) || index($0, "--ServerApp.root_dir " root) || index($0, "--notebook-dir=" root "/notebooks") || index($0, "--notebook-dir " root "/notebooks")) { print $1 }'
}

orphan_pids() {
  local tracked_pid="${1:-}"
  find_project_jupyter_pids | awk -v tracked="$tracked_pid" '$1 != tracked'
}

find_free_port() {
  local requested_port="${1:-}"
  local port

  if [[ -n "$requested_port" ]]; then
    if lsof -i :"$requested_port" >/dev/null 2>&1; then
      echo "Requested port $requested_port is already in use." >&2
      return 1
    fi
    printf '%s\n' "$requested_port"
    return 0
  fi

  port=8888
  while lsof -i :"$port" >/dev/null 2>&1; do
    port=$((port + 1))
    if (( port > 8899 )); then
      echo "No free port in range 8888-8899." >&2
      return 1
    fi
  done

  printf '%s\n' "$port"
}
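The `find_free_port` policy above (take the requested port if free, otherwise walk a small range) can be expressed in a few lines of Python. This is a cross-check sketch using socket binding rather than `lsof`, not code from the repo; note that binding is a stronger test than `lsof`, since it also catches ports the kernel reserves.

```python
# Sketch of the find_free_port policy using socket binding instead of lsof:
# return the first port in [start, end] that we can actually bind on loopback.
import socket

def find_free_port(start=8888, end=8899):
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                continue  # port is in use, try the next one
            return port
    raise RuntimeError(f"No free port in range {start}-{end}.")
```

Unlike the bash version, a bind probe does not depend on `lsof` being installed, which matters on minimal systems.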

open_url() {
  local url="$1"

  if command -v open >/dev/null 2>&1; then
    open "$url" >/dev/null 2>&1 || true
  elif command -v xdg-open >/dev/null 2>&1; then
    xdg-open "$url" >/dev/null 2>&1 || true
  fi
}

require_bootstrap() {
  if [[ ! -x "$PYTHON" ]]; then
    echo "Missing .venv. Run: bash scripts/app.sh bootstrap" >&2
    exit 1
  fi
  if ! jupyter_installed; then
    echo "JupyterLab/ipykernel are not available. Run: bash scripts/app.sh bootstrap" >&2
    exit 1
  fi
}

cmd_bootstrap() {
  local py_cmd="${PYTHON_BIN:-}"
  if [[ -z "$py_cmd" ]]; then
    for candidate in python3.12 python3.11 python3; do
      if command -v "$candidate" >/dev/null 2>&1; then
        py_cmd="$candidate"
        break
      fi
    done
  fi

  if [[ -z "$py_cmd" ]]; then
    echo "Python 3.11+ not found." >&2
    exit 1
  fi

  if [[ ! -x "$PYTHON" ]]; then
    "$py_cmd" -m venv "$VENV_DIR"
  fi

  ensure_dirs
  set_jupyter_env

  "$PYTHON" -m pip install --disable-pip-version-check --no-build-isolation -e ".[dev]"

  "$PYTHON" -m ipykernel install \
    --sys-prefix \
    --name "ntt-learning" \
    --display-name "NTT Learning" \
    --env IPYTHONDIR "$IPYTHONDIR" \
    --env MPLCONFIGDIR "$MPLCONFIGDIR"

  echo "Bootstrap complete."
}

cmd_start() {
  local open_browser=true
  local foreground=false
  local requested_port=""
  local port pid url runtime_url

  while [[ $# -gt 0 ]]; do
    case "$1" in
      --no-open)
        open_browser=false
        shift
        ;;
      --foreground)
        foreground=true
        shift
        ;;
      --port)
        [[ $# -ge 2 ]] || { echo "--port requires a value." >&2; exit 1; }
        requested_port="$2"
        shift 2
        ;;
      *)
        echo "Unknown start option: $1" >&2
        exit 1
        ;;
    esac
  done

  require_bootstrap
  ensure_dirs
  set_jupyter_env
  cleanup_stale_runtime

  if pid="$(get_running_pid 2>/dev/null)"; then
    echo "JupyterLab is already running with pid $pid."
    if url="$(server_access_url "$pid" 2>/dev/null)"; then
      echo "$url"
      if $open_browser; then
        open_url "$url"
      fi
    fi
    exit 0
  fi

  port="$(find_free_port "$requested_port")"

  if $foreground; then
    echo "Starting JupyterLab in foreground on port $port."
    echo "Jupyter will print the tokenized URL in this terminal."
    exec env \
      JUPYTER_CONFIG_DIR="$JUPYTER_CONFIG_DIR" \
      JUPYTER_DATA_DIR="$JUPYTER_DATA_DIR" \
      JUPYTER_RUNTIME_DIR="$JUPYTER_RUNTIME_DIR" \
      IPYTHONDIR="$IPYTHONDIR" \
      MPLCONFIGDIR="$MPLCONFIGDIR" \
      "$JUPYTER" lab \
        --no-browser \
        --ip=127.0.0.1 \
        --port="$port" \
        --ServerApp.root_dir="$PROJECT_ROOT"
  fi

  : > "$LOG_FILE"
  nohup env \
    JUPYTER_CONFIG_DIR="$JUPYTER_CONFIG_DIR" \
    JUPYTER_DATA_DIR="$JUPYTER_DATA_DIR" \
    JUPYTER_RUNTIME_DIR="$JUPYTER_RUNTIME_DIR" \
    IPYTHONDIR="$IPYTHONDIR" \
    MPLCONFIGDIR="$MPLCONFIGDIR" \
    "$JUPYTER" lab \
      --no-browser \
      --ip=127.0.0.1 \
      --port="$port" \
      --ServerApp.root_dir="$PROJECT_ROOT" \
    >"$LOG_FILE" 2>&1 &

  pid=$!
  write_pid_files "$pid"

  url=""
  for _ in $(seq 1 40); do
    if ! pid_alive "$pid"; then
      echo "JupyterLab exited during startup. Recent log output:" >&2
      tail -n 20 "$LOG_FILE" >&2 || true
      clear_pid_files
      exit 1
    fi

    cleanup_stale_runtime
    if server_ready "$pid"; then
      url="$(server_access_url "$pid" 2>/dev/null || true)"
      break
    fi
    sleep 0.5
  done

  if [[ -z "$url" ]]; then
    if runtime_url="$(server_access_url "$pid" 2>/dev/null)"; then
      url="$runtime_url"
    else
      echo "JupyterLab started with pid $pid, but the runtime URL was not detected. Check $LOG_FILE." >&2
      exit 1
    fi
  fi

  echo "JupyterLab started."
  echo "pid=$pid"
  echo "port=$(server_port "$pid" 2>/dev/null || printf '%s' "$port")"
  echo "access_url=$url"
  echo "log_file=$LOG_FILE"

  if $open_browser; then
    open_url "$url"
  fi
}

cmd_stop() {
  local pid orphans

  cleanup_stale_runtime

  if pid="$(get_running_pid 2>/dev/null)"; then
    kill -TERM "$pid" 2>/dev/null || true
    for _ in $(seq 1 20); do
      if ! pid_alive "$pid"; then
        break
      fi
      sleep 0.5
    done
    if pid_alive "$pid"; then
      kill -KILL "$pid" 2>/dev/null || true
    fi
    clear_pid_files
    cleanup_stale_runtime
    echo "JupyterLab stopped."
    return 0
  fi

  clear_pid_files
  orphans="$(orphan_pids | tr '\n' ' ' | xargs || true)"
  if [[ -n "$orphans" ]]; then
    echo "No managed PID file, but orphan Jupyter processes were detected: $orphans"
  else
    echo "JupyterLab is not running."
  fi
}

cmd_restart() {
  cmd_stop
  sleep 1
  cmd_start "$@"
}

cmd_status() {
  local running_pid="" access_url="" port="" orphans=""

  ensure_dirs
  cleanup_stale_runtime

  echo "repo_root=$PROJECT_ROOT"

  if [[ -x "$PYTHON" ]]; then
    echo "venv=present"
  else
    echo "venv=missing"
  fi

  if jupyter_installed; then
    echo "jupyterlab=installed"
  else
    echo "jupyterlab=missing"
  fi

  if running_pid="$(get_running_pid 2>/dev/null)"; then
    echo "server=running"
    echo "pid=$running_pid"
    if port="$(server_port "$running_pid" 2>/dev/null)"; then
      echo "port=$port"
    fi
    if access_url="$(server_access_url "$running_pid" 2>/dev/null)"; then
      echo "access_url=$access_url"
    fi
  else
    echo "server=stopped"
  fi

  orphans="$(orphan_pids "${running_pid:-}" | tr '\n' ' ' | xargs || true)"
  if [[ -n "$orphans" ]]; then
    echo "orphan_pids=$orphans"
  else
    echo "orphan_pids=none"
  fi

  echo "pid_file=$PID_FILE"
  echo "log_file=$LOG_FILE"
  echo "notebooks_dir=$PROJECT_ROOT/notebooks"
}

cmd_logs() {
  local follow=false

  while [[ $# -gt 0 ]]; do
    case "$1" in
      -f|--follow)
        follow=true
        shift
        ;;
      *)
        echo "Unknown logs option: $1" >&2
        exit 1
        ;;
    esac
  done

  ensure_dirs
  if [[ ! -f "$LOG_FILE" ]]; then
    echo "No Jupyter log file found."
    return 0
  fi

  if $follow; then
    exec tail -n 100 -f "$LOG_FILE"
  fi

  tail -n 100 "$LOG_FILE"
}

cmd_validate() {
  require_bootstrap
  cd "$PROJECT_ROOT"
  "$PYTHON" -m unittest discover -s tests -t .
}

cmd_reset_state() {
  cmd_stop >/dev/null 2>&1 || true
  ensure_dirs
  clear_pid_files
  rm -f "$LOG_FILE"
  rm -f "$JUPYTER_RUNTIME_DIR"/jpserver-*.json "$JUPYTER_RUNTIME_DIR"/jpserver-*-open.html
  rm -rf "$JUPYTER_DATA_DIR/lab/workspaces"/*
  rm -rf "$MPLCONFIGDIR"/*
  find "$PROJECT_ROOT/notebooks" -type d -name .ipynb_checkpoints -prune -exec rm -rf {} +
  echo "Repo-local runtime state reset."
}

main() {
  local command="${1:-}"
  if [[ -z "$command" ]]; then
    usage
    exit 1
  fi
  shift || true

  case "$command" in
    bootstrap) cmd_bootstrap "$@" ;;
    start) cmd_start "$@" ;;
    stop) cmd_stop "$@" ;;
    restart) cmd_restart "$@" ;;
    status) cmd_status "$@" ;;
    logs) cmd_logs "$@" ;;
    validate) cmd_validate "$@" ;;
    reset-state) cmd_reset_state "$@" ;;
    -h|--help|help) usage ;;
    *)
      echo "Unknown command: $command" >&2
      usage >&2
      exit 1
      ;;
  esac
}

main "$@"
@@ -1,17 +1,5 @@
 #!/usr/bin/env bash
 set -euo pipefail

-REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
-PYTHON_BIN="${PYTHON_BIN:-python3}"
-
-cd "$REPO_ROOT"
-
-if [[ ! -x ".venv/bin/python" ]]; then
-  "$PYTHON_BIN" -m venv .venv
-fi
-
-".venv/bin/python" -m pip install --upgrade pip
-".venv/bin/python" -m pip install -e ".[dev]"
-
-echo "Bootstrap complete."
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+exec bash "$SCRIPT_DIR/app.sh" bootstrap "$@"
@@ -3,11 +3,13 @@ set -euo pipefail

 REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
 VENV_PY="$REPO_ROOT/.venv/bin/python"
-PID_FILE="$REPO_ROOT/.run/jupyter.pid"
-LOG_FILE="$REPO_ROOT/.run/jupyter.log"
+PID_FILE="$REPO_ROOT/.logs/jupyter.pid"
+LEGACY_PID_FILE="$REPO_ROOT/.run/jupyter.pid"
+LOG_FILE="$REPO_ROOT/.logs/jupyterlab.log"

 ensure_runtime_dirs() {
   mkdir -p \
     "$REPO_ROOT/.logs" \
     "$REPO_ROOT/.run" \
     "$REPO_ROOT/.cache/matplotlib" \
     "$REPO_ROOT/.ipython" \
@@ -28,12 +30,14 @@ jupyter_installed() {
 }

 is_running() {
-  if [[ ! -f "$PID_FILE" ]]; then
+  local pid
+  if [[ -f "$PID_FILE" ]]; then
+    pid="$(<"$PID_FILE")"
+  elif [[ -f "$LEGACY_PID_FILE" ]]; then
+    pid="$(<"$LEGACY_PID_FILE")"
+  else
     return 1
   fi
-
-  local pid
-  pid="$(<"$PID_FILE")"
   [[ -n "$pid" ]] || return 1
   kill -0 "$pid" 2>/dev/null
 }
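The `kill -0` liveness check that both `is_running` and `pid_alive` rely on has a direct stdlib equivalent, shown here as an illustration (not repo code). Signal 0 delivers nothing; it only asks the kernel whether the process exists and whether we may signal it.

```python
# Stdlib equivalent of bash's `kill -0 "$pid"` liveness probe.
import os

def pid_alive(pid):
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check, no delivery
    except ProcessLookupError:
        return False     # no such process
    except PermissionError:
        return True      # process exists but belongs to another user
    return True
```

The `PermissionError` branch matters: a PID can be alive yet un-signalable, and treating that as "dead" would wrongly clear a live server's PID file.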
@@ -2,19 +2,4 @@
 set -euo pipefail

 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-source "$SCRIPT_DIR/common.sh"
-
-if is_running; then
-  "$SCRIPT_DIR/stop.sh"
-fi
-
-ensure_runtime_dirs
-
-rm -f "$PID_FILE" "$LOG_FILE"
-rm -rf "$REPO_ROOT/.jupyter_runtime"/*
-rm -rf "$REPO_ROOT/.jupyter_data/lab/workspaces"/*
-rm -rf "$REPO_ROOT/.cache/matplotlib"/*
-find "$REPO_ROOT/notebooks" -type d -name .ipynb_checkpoints -prune -exec rm -rf {} +
-
-echo "Repo-local runtime state reset."
+exec bash "$SCRIPT_DIR/app.sh" reset-state "$@"
@@ -2,7 +2,4 @@
 set -euo pipefail

 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-
-"$SCRIPT_DIR/stop.sh"
-"$SCRIPT_DIR/start.sh"
+exec bash "$SCRIPT_DIR/app.sh" restart "$@"
@@ -2,55 +2,4 @@
 set -euo pipefail

 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-source "$SCRIPT_DIR/common.sh"
-
-require_venv
-ensure_runtime_dirs
-
-if ! jupyter_installed; then
-  echo "jupyterlab is not installed in .venv. Run scripts/bootstrap.sh first."
-  exit 1
-fi
-
-if is_running; then
-  pid="$(<"$PID_FILE")"
-  echo "JupyterLab is already running with pid $pid."
-  if access_url="$(access_url_for_pid "$pid" 2>/dev/null)"; then
-    echo "$access_url"
-  fi
-  exit 0
-fi
-
-PORT="${PORT:-8888}"
-
-nohup env \
-  JUPYTER_CONFIG_DIR="$REPO_ROOT/.jupyter_config" \
-  JUPYTER_DATA_DIR="$REPO_ROOT/.jupyter_data" \
-  JUPYTER_RUNTIME_DIR="$REPO_ROOT/.jupyter_runtime" \
-  IPYTHONDIR="$REPO_ROOT/.ipython" \
-  MPLCONFIGDIR="$REPO_ROOT/.cache/matplotlib" \
-  "$VENV_PY" -m jupyter lab \
-    --no-browser \
-    --ip=127.0.0.1 \
-    --port="$PORT" \
-    --ServerApp.root_dir="$REPO_ROOT" \
-    >"$LOG_FILE" 2>&1 &
-
-echo $! > "$PID_FILE"
-pid="$(<"$PID_FILE")"
-
-for _ in $(seq 1 20); do
-  if ! is_running; then
-    echo "JupyterLab failed to start. Check $LOG_FILE."
-    exit 1
-  fi
-
-  if access_url="$(access_url_for_pid "$pid" 2>/dev/null)"; then
-    echo "JupyterLab started at $access_url"
-    exit 0
-  fi
-
-  sleep 0.5
-done
-
-echo "JupyterLab started with pid $pid, but the runtime URL was not detected yet. Check $LOG_FILE."
+exec bash "$SCRIPT_DIR/app.sh" start "$@"
|||
|
|
@@ -2,33 +2,4 @@
 set -euo pipefail
 
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-source "$SCRIPT_DIR/common.sh"
-
-ensure_runtime_dirs
-
-echo "repo_root=$REPO_ROOT"
-
-if [[ -x "$VENV_PY" ]]; then
-  echo "venv=present"
-else
-  echo "venv=missing"
-fi
-
-if [[ -x "$VENV_PY" ]] && jupyter_installed; then
-  echo "jupyterlab=installed"
-else
-  echo "jupyterlab=missing"
-fi
-
-if is_running; then
-  pid="$(<"$PID_FILE")"
-  echo "server=running"
-  echo "pid=$pid"
-  if access_url="$(access_url_for_pid "$pid" 2>/dev/null)"; then
-    echo "access_url=$access_url"
-  fi
-else
-  echo "server=stopped"
-fi
-
-echo "notebooks_dir=$REPO_ROOT/notebooks"
+exec bash "$SCRIPT_DIR/app.sh" status "$@"
@@ -2,20 +2,4 @@
 set -euo pipefail
 
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-source "$SCRIPT_DIR/common.sh"
-
-if ! [[ -f "$PID_FILE" ]]; then
-  echo "No JupyterLab pid file found."
-  exit 0
-fi
-
-if is_running; then
-  kill -TERM "$(<"$PID_FILE")"
-  rm -f "$PID_FILE"
-  echo "JupyterLab stopped."
-  exit 0
-fi
-
-rm -f "$PID_FILE"
-echo "Stale pid file removed."
+exec bash "$SCRIPT_DIR/app.sh" stop "$@"
@@ -2,10 +2,4 @@
 set -euo pipefail
 
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-source "$SCRIPT_DIR/common.sh"
-
-require_venv
-
-cd "$REPO_ROOT"
-"$VENV_PY" -m unittest discover -s tests -t .
+exec bash "$SCRIPT_DIR/app.sh" validate "$@"
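Every wrapper script in this commit now delegates to `scripts/app.sh`, which the diff itself does not show. As a rough sketch of what such a dispatcher might look like, assuming a single `case` on the first argument (the function name `app_dispatch` is hypothetical, not the repo's actual implementation):

```shell
# Hypothetical sketch of a subcommand dispatcher like scripts/app.sh.
# The real app.sh is not shown in this diff; this only illustrates the
# routing shape the thin wrappers (start.sh, stop.sh, ...) exec into.
app_dispatch() {
  local cmd="${1:-status}"
  shift || true
  case "$cmd" in
    start|stop|restart|status|reset-state|validate)
      # The real dispatcher would invoke the matching handler here.
      echo "dispatching: $cmd"
      ;;
    *)
      echo "unknown command: $cmd" >&2
      return 2
      ;;
  esac
}

app_dispatch restart
```

One practical upside of this shape is that the per-command wrappers stay as one-line `exec` shims, so argument forwarding via `"$@"` behaves identically whichever entry point the learner uses.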
@@ -15,16 +15,21 @@ class RepoOpsTests(unittest.TestCase):
         mode = script_path.stat().st_mode
         self.assertTrue(mode & stat.S_IXUSR, f"{script_name} is not executable")
 
-    def test_status_script_reports_repo_state(self) -> None:
-        completed = subprocess.run(
-            ["bash", "scripts/status.sh"],
-            cwd=REPO_ROOT,
-            check=True,
-            capture_output=True,
-            text=True,
-        )
-        self.assertIn("repo_root=", completed.stdout)
-        self.assertIn("notebooks_dir=", completed.stdout)
+    def test_status_commands_report_repo_state(self) -> None:
+        for command in (
+            ["bash", "scripts/app.sh", "status"],
+            ["bash", "scripts/status.sh"],
+        ):
+            with self.subTest(command=" ".join(command)):
+                completed = subprocess.run(
+                    command,
+                    cwd=REPO_ROOT,
+                    check=True,
+                    capture_output=True,
+                    text=True,
+                )
+                self.assertIn("repo_root=", completed.stdout)
+                self.assertIn("notebooks_dir=", completed.stdout)
 
 
 if __name__ == "__main__":
@@ -1306,6 +1306,22 @@ def build_bundle_03() -> None:
             The point is to compute the same transform by reusing shared bracket terms instead of recomputing everything from scratch.
             """,
         ),
+        code(
+            "mandatory",
+            3,
+            "demo",
+            "See The Schedule Geometry Before The Numbers",
+            """
+            from IPython.display import display
+
+            from ntt_learning.visuals import plot_stage_pairing_map, plot_stage_schedule
+
+            display(plot_stage_schedule(8, title="CT schedule skeleton for n=8"))
+            display(plot_stage_pairing_map(8, 2, title="Stage 1 pairings for n=8"))
+            display(plot_stage_pairing_map(8, 4, title="Stage 2 pairings for n=8"))
+            display(plot_stage_pairing_map(8, 8, title="Stage 3 pairings for n=8"))
+            """,
+        ),
         code(
             "mandatory",
             3,
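The pairing maps added to bundle 03 draw which pairs of lanes each butterfly stage touches. As a rough sketch of that geometry under one common Cooley-Tukey convention (the helper `stage_pairs` is hypothetical and the repo's `plot_stage_pairing_map` may index differently): lanes fall into blocks of size `span`, and each butterfly pairs index `i` with `i + span // 2` inside its block.

```python
# Hypothetical sketch of per-stage butterfly pairings, assuming blocks of
# size `span` with each butterfly pairing i and i + span // 2. This is one
# common CT layout, not necessarily the repo's exact convention.
def stage_pairs(n: int, span: int) -> list[tuple[int, int]]:
    half = span // 2
    pairs = []
    for block in range(0, n, span):          # walk the blocks of this stage
        for i in range(block, block + half): # pair each lane with its partner
            pairs.append((i, i + half))
    return pairs

print(stage_pairs(8, 2))  # stage 1: adjacent lanes paired
print(stage_pairs(8, 8))  # final stage: lane i paired with lane i + 4
```

Note how every stage produces exactly `n // 2` pairs; only the stride between partners changes from stage to stage.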
@@ -2246,6 +2262,22 @@ def build_bundle_05() -> None:
             So the clean full-length `ψ` story from the toy negative-wrapped transform does not lift over unchanged.
             """,
         ),
+        code(
+            "mandatory",
+            3,
+            "demo",
+            "See The Stage Skeleton For n=16 And n=256",
+            """
+            from IPython.display import display
+
+            from ntt_learning.visuals import plot_stage_pairing_map, plot_stage_schedule
+
+            display(plot_stage_schedule(16, title="Readable schedule skeleton for n=16"))
+            display(plot_stage_pairing_map(16, 2, title="Stage 1 geometry for n=16"))
+            display(plot_stage_pairing_map(16, 16, title="Final stage geometry for n=16"))
+            display(plot_stage_schedule(256, title="Kyber-scale stage skeleton for n=256"))
+            """,
+        ),
         code(
             "mandatory",
             3,
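A full power-of-two transform of length `n` has `log2(n)` stages, each touching all `n` lanes with `n // 2` butterflies. A minimal sketch of that skeleton (the helper `stage_skeleton` is hypothetical; the repo's `plot_stage_schedule` may organize stages differently):

```python
# Hypothetical sketch of the stage skeleton the schedule plots summarize:
# one stage per doubling of the span, each with n // 2 butterflies.
def stage_skeleton(n: int) -> list[tuple[int, int]]:
    """Return (span, butterflies_per_stage) for a full length-n transform."""
    stages = []
    span = 2
    while span <= n:
        stages.append((span, n // 2))  # every stage touches all n lanes
        span *= 2
    return stages

print(len(stage_skeleton(16)))   # 4 stages for n=16
print(len(stage_skeleton(256)))  # 8 stages for a full n=256 transform
```

For Kyber this full skeleton is one stage too generous: with q = 3329 only a 256th root of unity exists, so Kyber's NTT runs 7 of the 8 stages for n=256 and stops at degree-1 pairs, which is part of why the clean full-length `ψ` story does not lift over unchanged.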