CLI
===

Entry Point
-----------

This page describes the main command-line entry points for data collection: ``run.sh`` (training data) and ``run_evals.sh`` (evaluation data).

.. _run-sh:

``run.sh``
----------

`[Source] `_

Runs the full training data collection pipeline. It works sequentially in batches. For each batch, it generates compose configs, starts Minecraft instances, collects episodes, and postprocesses them. When all batches are ready, it transforms, validates, and combines all of them into the final training dataset.

Usage
~~~~~

.. code-block:: bash

   ./run.sh [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 25 15 60

   * - Option
     - Default
     - Description
   * - ``--output-dir DIR``
     - ``output2``
     - Base data directory for outputs
   * - ``--num-batches N``
     - ``2``
     - Number of batches to run
   * - ``--num-flat-world N``
     - ``1``
     - Number of flat worlds per batch
   * - ``--num-normal-world N``
     - ``1``
     - Number of normal worlds per batch
   * - ``--num-episodes N``
     - ``2``
     - Number of episodes per batch
   * - ``--dataset-name NAME``
     - ``duet``
     - Name of the output dataset under ``<output-dir>/datasets/``
   * - ``--filter-water-episodes true|false``
     - ``true``
     - Whether to detect and exclude water episodes from the train dataset (see :ref:`Water filtering <run-sh-water-filtering>`)
   * - ``-h``, ``--help``
     -
     - Show usage and exit

.. _run-sh-water-filtering:

Water filtering
~~~~~~~~~~~~~~~

When ``--filter-water-episodes true`` (the default), the script:

1. After all batches complete, runs :ref:`detect_water_episodes_batch.py <detect-water-episodes-batch-py>` to detect episodes where either Alpha or Bravo shows underwater indicators (oxygen bar HUD). Results are written to ``<output-dir>/data_collection/train/water_episodes.json``.
2. After ``prepare_train_dataset.py`` runs, runs :ref:`filter_dataset.py <filter-dataset-py>` to move detected water episodes into a ``<output-dir>/datasets/<dataset-name>/discarded/`` subdirectory, excluding them from the train dataset.

Use ``--filter-water-episodes false`` to skip detection and filtering and keep all episodes.

.. _run-sh-output-layout:

Output layout
~~~~~~~~~~~~~

Data is written under ``<output-dir>/``. The following tree shows the directory structure:

.. code-block:: text

   <output-dir>/
   ├── data_collection/
   │   └── train/
   │       ├── batch_<N>/
   │       │   ├── aligned/
   │       │   │   ├── <timestamp>_<episode>_Alpha_instance_<id>_camera.mp4
   │       │   │   ├── <timestamp>_<episode>_Alpha_instance_<id>_camera_meta.json
   │       │   │   ├── <timestamp>_<episode>_Bravo_instance_<id>_camera.mp4
   │       │   │   ├── <timestamp>_<episode>_Bravo_instance_<id>_camera_meta.json
   │       │   │   └── ... (other episodes)
   │       │   ├── camera/
   │       │   │   ├── data_alpha/
   │       │   │   │   └── <instance>/
   │       │   │   ├── data_bravo/
   │       │   │   │   └── <instance>/
   │       │   │   ├── output_alpha/
   │       │   │   │   └── <instance>/
   │       │   │   │       ├── camera_alpha.mkv
   │       │   │   │       └── camera_alpha_meta.json
   │       │   │   └── output_bravo/
   │       │   │       └── <instance>/
   │       │   ├── compose_configs/
   │       │   │   ├── docker-compose-<instance>.yml
   │       │   │   └── ... (one per instance)
   │       │   ├── data/
   │       │   │   └── <instance>/
   │       │   ├── logs/
   │       │   │   └── docker-compose-<instance>/
   │       │   │       ├── controller_alpha_instance_<id>.log
   │       │   │       ├── controller_bravo_instance_<id>.log
   │       │   │       ├── camera_alpha_instance_<id>.log
   │       │   │       ├── camera_bravo_instance_<id>.log
   │       │   │       ├── act_recorder_alpha_instance_<id>.log
   │       │   │       ├── act_recorder_bravo_instance_<id>.log
   │       │   │       ├── mc_instance_<id>.log
   │       │   │       └── ...
   │       │   └── output/
   │       │       ├── <timestamp>_<episode>_Alpha_instance_<id>.json
   │       │       ├── <timestamp>_<episode>_Alpha_instance_<id>_meta.json
   │       │       ├── <timestamp>_<episode>_Alpha_instance_<id>_episode_info.json
   │       │       ├── <timestamp>_<episode>_Bravo_instance_<id>.json
   │       │       ├── <timestamp>_<episode>_Bravo_instance_<id>_meta.json
   │       │       ├── <timestamp>_<episode>_Bravo_instance_<id>_episode_info.json
   │       │       └── ... (other episodes)
   │       └── water_episodes.json
   └── datasets/
       └── <dataset-name>/
           ├── train/
           │   ├── batch_<N>_<episode>_Alpha_instance_<id>.mp4
           │   ├── batch_<N>_<episode>_Alpha_instance_<id>.json
           │   ├── batch_<N>_<episode>_Alpha_instance_<id>_episode_info.json
           │   ├── batch_<N>_<episode>_Bravo_instance_<id>.mp4
           │   ├── batch_<N>_<episode>_Bravo_instance_<id>.json
           │   ├── batch_<N>_<episode>_Bravo_instance_<id>_episode_info.json
           │   └── ... (other episodes)
           ├── test/
           │   └── (same file patterns as train/)
           ├── annotated/
           │   └── batch_<N>_<episode>_instance_<id>.mp4_combined.mp4
           └── discarded/
               └── (water episodes moved here)

.. list-table::
   :header-rows: 1
   :widths: 45 55

   * - Path (relative to ``<output-dir>``)
     - Description
   * - ``data_collection/train/batch_<N>/aligned/``
     - Per-episode aligned camera videos and metadata: ``<timestamp>_<episode>_Alpha_instance_<id>_camera.mp4``, ``<timestamp>_<episode>_Alpha_instance_<id>_camera_meta.json``. Produced by :ref:`process_recordings.py <process-recordings-py>`.
   * - ``data_collection/train/batch_<N>/camera/data_alpha/<instance>/``, ``data_bravo/<instance>/``
     - Minecraft client home directories for camera bots. Contains ``.minecraft/`` and client runtime data.
   * - ``data_collection/train/batch_<N>/camera/output_alpha/<instance>/``, ``output_bravo/<instance>/``
     - Raw camera recordings and metadata: ``camera_alpha.mkv``, ``camera_alpha_meta.json``. Input to :ref:`process_recordings.py <process-recordings-py>`.
   * - ``data_collection/train/batch_<N>/compose_configs/docker-compose-<instance>.yml``
     - Generated Docker Compose file. Produced by :ref:`generate_compose.py <generate-compose-py>`.
   * - ``data_collection/train/batch_<N>/data/<instance>/``
     - Minecraft server data (world, config, logs).
   * - ``data_collection/train/batch_<N>/logs/docker-compose-<instance>/``
     - Docker logs for every service: controller, camera, act_recorder, mc, spectator, etc.
   * - ``data_collection/train/batch_<N>/output/``
     - Controller actions (see :ref:`Action file format <action-file-format>`): ``<timestamp>_<episode>_Alpha_instance_<id>.json``; this file is copied to the final dataset folder as is. Episode metadata: ``<timestamp>_<episode>_Alpha_instance_<id>_meta.json``. Episode info (see :ref:`Episode info file format <episode-info-file-format>`): ``<timestamp>_<episode>_Alpha_instance_<id>_episode_info.json``; this file is copied to the final dataset folder as is.
   * - ``data_collection/train/water_episodes.json``
     - (When water filtering is enabled) List of detected water episodes for :ref:`filter_dataset.py <filter-dataset-py>`.
   * - ``datasets/<dataset-name>/train/``, ``test/``
     - Prepared splits: ``batch_<N>_<episode>_Alpha_instance_<id>.mp4``, action JSON (see :ref:`Action file format <action-file-format>`), episode info JSON (see :ref:`Episode info file format <episode-info-file-format>`). Produced by :ref:`prepare_train_dataset.py <prepare-train-dataset-py>` and :ref:`split_train_test.py <split-train-test-py>`.
   * - ``datasets/<dataset-name>/annotated/``
     - Debug videos with overlaid actions (Alpha+Bravo concatenated). Produced by :ref:`annotate_video_batch.py <annotate-video-batch-py>`.
   * - ``datasets/<dataset-name>/discarded/``
     - Episodes excluded by water filtering (when enabled).

.. _action-file-format:

Action file format
~~~~~~~~~~~~~~~~~~

Action files (``*_Alpha_instance_*.json``, ``*_Bravo_instance_*.json``) contain a JSON array of per-frame objects, one per timestep:

.. code-block:: json

   [
     {
       "x": -4425.5,
       "y": 64,
       "z": 2298.5,
       "yaw": 4.636,
       "pitch": 0,
       "action": {
         "forward": false,
         "back": false,
         "left": false,
         "right": false,
         "jump": false,
         "sprint": false,
         "sneak": false,
         "camera": [0, 0],
         "attack": false,
         "use": false,
         "mount": false,
         "dismount": false,
         "place_block": false,
         "place_entity": false,
         "mine": false,
         "hotbar.1": false,
         "hotbar.2": false,
         "hotbar.3": false,
         "hotbar.4": false,
         "hotbar.5": false,
         "hotbar.6": false,
         "hotbar.7": false,
         "hotbar.8": false,
         "hotbar.9": false
       },
       "inventory": {
         "quickBarSlot": 4,
         "mainInventory": [],
         "heldItem": null,
         "offHandItem": null,
         "hotbar": [
           {
             "type": 1,
             "count": 64,
             "name": "stone",
             "displayName": "Stone",
             "slot": 36
           },
           {
             "type": 840,
             "count": 1,
             "name": "diamond_pickaxe",
             "displayName": "Diamond Pickaxe",
             "slot": 37
           }
         ]
       },
       "renderTime": 1770812761734,
       "physicsTime": 276613.613413,
       "relativeTimeMs": 0.0,
       "epochTime": 1770812761.734,
       "worldAgeTicks": 5649,
       "isInWater": false,
       "frame_count": 0,
       "episode_id": 0,
       "extra_info": {
         "seed": 42
       }
     }
   ]

The array contains one such object per frame.

.. _episode-info-file-format:

Episode info file format
~~~~~~~~~~~~~~~~~~~~~~~~

Episode info files (``*_episode_info.json``) contain a single JSON object:

.. code-block:: json

   {
     "timestamp": "2026-02-11T12:25:11.506Z",
     "bot_name": "Alpha",
     "world_type": "normal",
     "episode_number": 0,
     "episode_type": "mine",
     "instance_id": 1,
     "encountered_error": false,
     "peer_encountered_error": false,
     "bot_died": false,
     "episode_recording_started": true,
     "eval_metadata": {}
   }

.. _run-evals-sh:

``run_evals.sh``
----------------

`[Source] `_

Runs evaluation data collection for several episode types. It works sequentially per episode type, where an episode type acts as a batch. Just as in :ref:`run.sh <run-sh>`, for each batch it generates compose configs, starts Minecraft instances, collects episodes, and postprocesses them. When all batches are ready, it prepares each batch (episode type) into a separate evaluation dataset.

Usage
~~~~~

.. code-block:: bash

   ./run_evals.sh [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 25 15 60

   * - Option
     - Default
     - Description
   * - ``--output-dir DIR``
     - ``output2``
     - Base data directory for outputs
   * - ``-h``, ``--help``
     -
     - Show usage and exit

Environment
~~~~~~~~~~~

- ``EVAL_TIME_SET_DAY`` — If set (e.g. ``1``), episode start time is set to day for all eval episodes. Default: ``1``.

Eval episode types
~~~~~~~~~~~~~~~~~~

The script runs one batch per eval type:

- ``rotationEval``
- ``translationEval``
- ``structureEval``
- ``turnToLookEval``
- ``turnToLookOppositeEval``
- ``bothLookAwayEval``
- ``oneLooksAwayEval``

For ``turnToLookEval`` and ``turnToLookOppositeEval`` the script uses 1 normal world and 32 episodes; for the rest, it uses 2 flatland worlds and 16 episodes per type.

Output layout
~~~~~~~~~~~~~

Data is written under ``<output-dir>/``. The following tree shows the directory structure:

.. code-block:: text

   <output-dir>/
   ├── data_collection/
   │   └── eval/
   │       ├── rotationEval/
   │       │   └── ...
   │       ├── translationEval/
   │       │   └── ...
   │       ├── structureEval/
   │       │   └── ...
   │       └── ... (other eval types)
   └── datasets/
       └── eval/
           ├── rotationEval/
           │   └── test/
           │       └── ...
           ├── translationEval/
           │   └── test/
           │       └── ...
           ├── structureEval/
           │   └── test/
           │       └── ...
           └── ... (other eval types)

.. list-table::
   :header-rows: 1
   :widths: 45 55

   * - Path (relative to ``<output-dir>``)
     - Description
   * - ``data_collection/eval/<eval-type>/``
     - Per-type batch directory. Structure under each ``<eval-type>`` (e.g. ``rotationEval``) mirrors ``data_collection/train/batch_<N>/`` in the :ref:`run.sh output layout <run-sh-output-layout>`: ``aligned/``, ``camera/``, ``compose_configs/``, ``data/``, ``logs/``, ``output/``.
   * - ``datasets/eval/<eval-type>/``
     - Prepared eval datasets per type. Each ``<eval-type>`` has only a ``test/`` subdir; structure and file patterns mirror ``datasets/<dataset-name>/test/`` in the :ref:`run.sh output layout <run-sh-output-layout>`. Produced by :ref:`prepare_eval_datasets.py <prepare-eval-datasets-py>`.

Postprocessing
--------------

This section covers the postprocessing utilities that run as part of :ref:`run.sh <run-sh>` and :ref:`run_evals.sh <run-evals-sh>`, and turn raw camera recordings and action files into final train/eval datasets and optional annotated videos.

.. _process-recordings-py:

``process_recordings.py``
-------------------------

`[Source] `_

Aligns raw Minecraft camera recordings with Mineflayer action episode files for a single bot, producing per-episode aligned camera videos. It can operate on a full directory of episodes or a single episode file.

Usage
~~~~~

.. code-block:: bash

   python postprocess/process_recordings.py --actions-dir ACTIONS_DIR --camera-prefix CAMERA_DIR --bot {Alpha,Bravo} [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``--actions-dir PATH``
     - (required)
     - Directory containing Mineflayer action files (``*.json``).
   * - ``--camera-prefix PATH``
     - (required)
     - Directory containing camera outputs (expects ``output_alpha/`` or ``output_bravo/``).
   * - ``--bot {Alpha,Bravo}``
     - (required)
     - Which bot to process.
   * - ``--output-dir PATH``
     - ``./aligned/``
     - Base directory for aligned videos and metadata.
   * - ``--episode-file PATH``
     - ``None``
     - Process a single episode JSON instead of scanning ``--actions-dir``.

Output layout
~~~~~~~~~~~~~

- ``<output-dir>/<episode>_camera.mp4`` — aligned camera video per episode.
- ``<output-dir>/<episode>_camera_meta.json`` — alignment metadata and diagnostics (including per‑frame mappings and quality stats).

.. _detect-water-episodes-batch-py:

``detect_water_episodes_batch.py``
----------------------------------

`[Source] `_

Batch-detects water episodes by detecting the oxygen bar HUD element in video frames. Uses multiprocessing to process all video files in all batch directories under a base path. Episodes are processed as Alpha‑Bravo pairs: if either video shows underwater indicators (oxygen bar), the entire pair is classified as a water episode. Outputs per‑batch JSON files and an aggregated water episodes list suitable for :ref:`filter_dataset.py <filter-dataset-py>`.

Usage
~~~~~

.. code-block:: bash

   python postprocess/detect_water_episodes_batch.py --base-path BASE_PATH [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``--base-path PATH``
     - (required)
     - Base path containing batch folders (each with an ``aligned/`` subdirectory).
   * - ``--threshold FLOAT``
     - ``0.8``
     - Cosine similarity threshold for water detection (oxygen bar match).
   * - ``--max-frames N``
     - ``1000``
     - Maximum number of frames to sample per video.
   * - ``--num-workers N``
     - CPU count
     - Number of worker processes.
   * - ``--top-percentile N``
     - ``10``
     - Top N percentile for similarity thresholding (e.g. 10 = top 10%).
   * - ``--out-path PATH``
     - ``<base-path>/all_water_episodes.json``
     - Output path for aggregated water episodes JSON (list of ``batch_id``, ``episode_id``, ``instance_id``).

Output layout
~~~~~~~~~~~~~

- Per batch: ``<batch>/water_episodes.json`` — categorized water/non‑water episodes and metadata.
- Aggregated: ``<out-path>`` — flat list of water episode keys for use by :ref:`filter_dataset.py <filter-dataset-py>`.

.. _filter-dataset-py:

``filter_dataset.py``
---------------------

`[Source] `_

Filters a multiplayer dataset by moving selected episodes into a ``discarded/`` subdirectory. Expects a JSON list of episode keys (typically from :ref:`detect_water_episodes_batch.py <detect-water-episodes-batch-py>`) with ``batch_id``, ``episode_id``, and ``instance_id``.
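As a rough illustration of how such episode keys relate to dataset files (a minimal sketch, not the tool's actual implementation — the helper names and the exact regex are assumptions based on the filename example in this section):

```python
import re

# Hypothetical helper: parse a dataset filename of the form
# batch_<batch_id>_<episode_id>_<Bot>_instance_<instance_id>...
# into an episode key. The zero-padded field widths are an assumption.
_NAME_RE = re.compile(
    r"batch_(?P<batch_id>\d+)_(?P<episode_id>\d+)_(?:Alpha|Bravo)_instance_(?P<instance_id>\d+)"
)

def episode_key(filename: str):
    """Return (batch_id, episode_id, instance_id) as ints, or None if no match."""
    m = _NAME_RE.match(filename)
    if m is None:
        return None
    return tuple(int(m.group(g)) for g in ("batch_id", "episode_id", "instance_id"))

def is_discarded(filename: str, water_keys: set) -> bool:
    """True if the file belongs to an episode listed in the water-episodes JSON."""
    return episode_key(filename) in water_keys
```

Both Alpha and Bravo files of a matched episode share the same key, so a single key in the water-episodes list selects the whole pair.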
File names in the dataset directory must encode these fields, e.g. ``batch_0_000058_Bravo_instance_001_episode_info.json``.

Usage
~~~~~

.. code-block:: bash

   python postprocess/filter_dataset.py --episodes-json PATH dataset_dir

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``--episodes-json PATH``
     - (required)
     - Path to a JSON file with a list of objects containing ``batch_id``, ``episode_id``, and ``instance_id``. Matching episodes are moved to ``discarded/``.
   * - ``dataset_dir``
     - (required)
     - Dataset directory containing files such as ``batch_0_000000_Alpha_instance_000.*``. Matching files are moved into a ``discarded/`` subdirectory.

Output layout
~~~~~~~~~~~~~

Matching files are moved from ``<dataset_dir>/`` into ``<dataset_dir>/discarded/``. Only top‑level files are considered; subdirectories are ignored.

.. _prepare-train-dataset-py:

``prepare_train_dataset.py``
----------------------------

`[Source] `_

Validates and consolidates aligned camera videos and action JSONs into a single final dataset directory for training. It keeps only episodes where both Alpha and Bravo have consistent video/action pairs and enforces one‑to‑one alignment between frames and actions.

Usage
~~~~~

.. code-block:: bash

   python postprocess/prepare_train_dataset.py --source-dir SOURCE_DIR --destination-dir DEST_DIR [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``--source-dir PATH``
     - (required)
     - Directory that contains one or more episode runs with ``aligned/`` and ``output/`` subdirs.
   * - ``--instance-ids IDS``
     - ``None``
     - Optional list of instance IDs to include (space‑ or comma‑separated).
   * - ``--file-prefix STR``
     - ``""``
     - Optional prefix prepended to destination filenames.
   * - ``--destination-dir PATH``
     - (required)
     - Target directory where the consolidated final dataset is written.
   * - ``--bot1-name NAME``
     - ``Alpha``
     - Name used to identify the first bot in file paths.
   * - ``--bot2-name NAME``
     - ``Bravo``
     - Name used to identify the second bot in file paths.

Output layout
~~~~~~~~~~~~~

All validated per‑episode files (videos and JSONs for both players) are copied into ``<destination-dir>/`` with timestamp prefixes stripped and an optional prefix applied. Only episodes where frame counts match action counts for both bots are included.

.. _split-train-test-py:

``split_train_test.py``
-----------------------

`[Source] `_

Splits a consolidated final dataset directory into train/test splits.

Usage
~~~~~

.. code-block:: bash

   python postprocess/split_train_test.py FINAL_DIR [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``final_dir``
     - (required)
     - Path to the final dataset produced by ``prepare_train_dataset.py``.
   * - ``--test-percent PCT``
     - ``1.0``
     - Percentage of (episode, instance) keys assigned to the test split.
   * - ``--seed N``
     - ``42``
     - Random seed for deterministic splitting.
   * - ``--dry-run``
     - ``False``
     - Print the planned split without moving any files.

Output layout
~~~~~~~~~~~~~

Creates ``<final_dir>/train/`` and ``<final_dir>/test/`` and moves entire episode‑instance groups into one of the two subdirectories.

.. _prepare-eval-datasets-py:

``prepare_eval_datasets.py``
----------------------------

`[Source] `_

Prepares evaluation datasets from aligned eval episodes, mirroring the structure expected by the Solaris model code. It filters, validates, and reshapes eval episodes into a unified directory layout.

Usage
~~~~~

.. code-block:: bash

   python postprocess/prepare_eval_datasets.py --source-dir SOURCE_DIR --destination-dir DEST_DIR [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``--source-dir PATH``
     - (required)
     - Directory with eval runs (aligned videos and action JSONs).
   * - ``--destination-dir PATH``
     - (required)
     - Output directory for prepared eval datasets.

.. _annotate-video-batch-py:

``annotate_video_batch.py``
---------------------------

`[Source] `_

Batch-annotates aligned camera videos with overlaid action information and vertically concatenates Alpha/Bravo views into a single debug video per episode. It can run on a single videos directory or over multiple subdirectories, and supports parallel processing.

Usage
~~~~~

.. code-block:: bash

   python postprocess/annotate_video_batch.py VIDEOS_DIR [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``videos_dir``
     - (required)
     - Directory that has ``aligned/`` and ``output/`` (or already split ``test/``) subdirectories, or a parent directory with multiple such subdirectories.
   * - ``--workers N``
     - ``8``
     - Number of parallel workers.
   * - ``--output-dir PATH``
     - ``<videos_dir>/annotated``
     - Where to write annotated videos (per directory or per subdirectory).
   * - ``--limit N``
     - ``10``
     - Maximum number of video pairs to process (stratified across instances).

Output layout
~~~~~~~~~~~~~

Writes combined debug videos (Alpha on top, Bravo on bottom) under an ``annotated/`` subdirectory (or the directory specified via ``--output-dir``).

Docker Orchestration
--------------------

This section documents the Python tools that generate and orchestrate the Docker Compose instances used by :ref:`run.sh <run-sh>` / :ref:`run_evals.sh <run-evals-sh>`.

.. _generate-compose-py:

``generate_compose.py``
-----------------------

`[Source] `_

Generates multiple Docker Compose files, each describing a full ``SolarisEngine`` (Minecraft server, controller bots, camera bots, spectators, and helper containers) for a single instance. It supports mixed flat/normal worlds, GPU‑backed camera rendering, and a wide range of episode customization options.

Usage
~~~~~

.. code-block:: bash

   python generate_compose.py --data_dir DATA_DIR --output_dir OUTPUT_DIR \
       --camera_output_alpha_base ALPHA_OUT --camera_output_bravo_base BRAVO_OUT [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``--instances N``
     - ``15``
     - Number of instances to generate when world counts are not overridden.
   * - ``--num_flatland_world N``
     - ``0``
     - Number of flat‑world instances (overrides ``--instances`` when > 0).
   * - ``--num_normal_world N``
     - ``0``
     - Number of normal‑world instances (overrides ``--instances`` when > 0).
   * - ``--compose_dir DIR``
     - ``compose_configs``
     - Directory to store generated ``docker-compose-XXX.yml`` files.
   * - ``--base_port PORT``
     - ``25565``
     - Base Minecraft server port (one per instance, contiguous range).
   * - ``--base_rcon_port PORT``
     - ``25675``
     - Base RCON port (one per instance, contiguous range).
   * - ``--act_recorder_port PORT``
     - ``8090``
     - Act recorder port used inside the bridge network.
   * - ``--coord_port PORT``
     - ``8100``
     - Coordination port used by controller bots.
   * - ``--data_dir DIR``
     - (required)
     - Base directory for per‑instance Minecraft server data.
   * - ``--output_dir DIR``
     - (required)
     - Shared output directory for controller/act_recorder episode outputs.
   * - ``--camera_output_alpha_base DIR``
     - (required)
     - Absolute base directory for per‑instance Camera Alpha outputs.
   * - ``--camera_output_bravo_base DIR``
     - (required)
     - Absolute base directory for per‑instance Camera Bravo outputs.
   * - ``--camera_data_alpha_base DIR``
     - project default
     - Base directory for Camera Alpha home/data (defaults under ``camera/data_alpha``).
   * - ``--camera_data_bravo_base DIR``
     - project default
     - Base directory for Camera Bravo home/data (defaults under ``camera/data_bravo``).
   * - ``--num_episodes N``
     - ``5``
     - Number of episodes to run per instance.
   * - ``--episode_start_id N``
     - ``0``
     - Starting episode ID.
   * - ``--bootstrap_wait_time SEC``
     - ``60``
     - Time for servers/plugins to bootstrap before controllers start.
   * - ``--episode_category STR``
     - ``look``
     - High‑level episode category name.
   * - ``--episode_types STR``
     - ``all``
     - Comma‑separated episode types for controllers.
   * - ``--viewer_rendering_disabled {0,1}``
     - ``1``
     - Disable viewer rendering for controller/act_recorder (recommended for speed).
   * - ``--smoke_test {0,1}``
     - ``0``
     - If set, enable smoke‑test mode that exercises all episode types.
   * - ``--eval_time_set_day {0,1}``
     - ``0``
     - If set, force eval episodes to start at day time.
   * - ``--flatland_world_disable_structures {0,1}``
     - ``0``
     - Disable structure generation in flat worlds.
   * - ``--render_distance N``
     - ``8``
     - Minecraft render distance (chunks) for camera clients.
   * - ``--simulation_distance N``
     - ``4``
     - Minecraft simulation distance (chunks).
   * - ``--graphics_mode {0,1,2}``
     - ``1``
     - Minecraft graphics mode (0 = Fast, 1 = Fancy, 2 = Fabulous).
   * - ``--gpu_mode {egl,x11,auto}``
     - ``egl``
     - GPU rendering mode for camera containers.

Output layout
~~~~~~~~~~~~~

- ``<compose_dir>/docker-compose-XXX.yml`` — one Docker Compose file per instance.
- Per‑instance data and camera output/data directories are created under ``--data_dir`` and the camera base paths; ``--output_dir`` is created as the shared controller output root.

.. _orchestrate-py:

``orchestrate.py``
------------------

`[Source] `_

Orchestrates one or more generated Docker Compose instances produced by :ref:`generate_compose.py <generate-compose-py>`. The script starts/stops them, captures logs, inspects status, and optionally runs postprocessing over their outputs.

Usage
~~~~~

.. code-block:: bash

   python orchestrate.py {start,stop,status,logs,postprocess} [OPTIONS]

Options
~~~~~~~

.. list-table::
   :header-rows: 1
   :widths: 30 15 55

   * - Option
     - Default
     - Description
   * - ``command``
     - (required)
     - One of ``start``, ``stop``, ``status``, ``logs``, or ``postprocess``.
   * - ``--compose-dir DIR``
     - ``compose_configs``
     - Directory containing the ``docker-compose-XXX.yml`` files.
   * - ``--build``
     - ``False``
     - If set with ``start``, build images before starting (otherwise they are pulled).
   * - ``--logs-dir DIR``
     - ``logs``
     - Directory where per‑service logs are stored.
   * - ``--instance PATTERN``
     - ``None``
     - Filter for a subset of instances when showing logs.
   * - ``--follow``, ``-f``
     - ``False``
     - Follow logs (for the ``logs`` command when targeting a single instance).
   * - ``--tail N``
     - ``50``
     - Number of lines to show from saved logs.
   * - ``--workers N``
     - ``4``
     - Parallel workers for the ``postprocess`` command.
   * - ``--comparison-video``
     - ``False``
     - When running ``postprocess``, also build side‑by‑side comparison videos.
   * - ``--debug``
     - ``False``
     - Extra logging for ``postprocess`` episode discovery.
   * - ``--output-dir PATH``
     - ``None``
     - Base ``aligned`` directory to postprocess (see the ``postprocess_recordings`` docstring).

Behavior summary
~~~~~~~~~~~~~~~~

- ``start`` — starts all compose instances in parallel, waits for controller completion, captures per‑service logs, then shuts everything down.
- ``stop`` — stops all running instances and tears down volumes.
- ``status`` — reports how many instances have running containers.
- ``logs`` — tails saved per‑service logs (or falls back to ``docker compose logs`` if none are saved yet).
- ``postprocess`` — runs camera alignment over all episodes under a given ``--output-dir`` using the postprocessing pipeline.
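The parallel fan-out in ``start`` can be sketched as follows. This is a minimal illustration, not the script's actual code: only the ``docker-compose-XXX.yml`` naming comes from ``generate_compose.py``, and the ``runner`` callable is a hypothetical stand-in for whatever launches a single instance (e.g. a ``docker compose -f <file> up`` subprocess call), so the control flow can be shown without requiring Docker.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def start_all(compose_dir, runner, workers=4):
    """Run `runner` over every generated compose file in parallel.

    Returns a dict mapping each compose file name to the runner's
    result (e.g. an exit status), preserving the sorted file order.
    """
    files = sorted(Path(compose_dir).glob("docker-compose-*.yml"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map keeps results in the same order as `files`.
        results = list(pool.map(runner, files))
    return dict(zip((f.name for f in files), results))
```

Log capture and teardown, as described in the behavior summary, would be layered on top of a loop like this.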