Technical reference for the Pitgun framework: canonical metric definitions, public schemas, event envelopes, processing manifests, and runtime telemetry services.
Start from canonical definitions, validate event boundaries, then run the same manifests against real-time streams or historical data.
Public contracts and Rust services powering Pitgun telemetry. The dictionary defines meaning, schemas define boundaries, and manifests define reproducible processing.
The runtime model: events, contracts, telemetry, and processing boundaries.
Canonical metric names, units, types, and
dimensions exposed through api.pitgun.io.
Versioned JSON Schemas available at
/schemas/<schema>/<version>.
HTTP/WebSocket ingress for
pitgun-envelope-v1 events and Prometheus metrics.
Formula evaluation, derived metrics, filters, aggregations, and analysis outputs.
Shared Rust types and public schema boundaries for producers and processors.
Pitgun Game as a real telemetry workload for ingestion, processing, and observability.
Docker Compose, service configuration, and operational boundaries.
CLI for replaying historical data into the gateway for post-mortem testing.
Experimental solver, authority layer, archival sinks, and visual manifest tooling.
Pitgun began as an F1 telemetry framework and is now a manifest-driven telemetry processing framework. The game remains a reference workload, but the framework is designed for any producer that can emit versioned events, high-frequency samples, or historical datasets.
pitgun-envelope-v1 events over HTTP or WebSocket.
Canonical metric definitions exposed through api.pitgun.io.
Public JSON Schemas for envelopes, metric dictionaries, manifests, bundles, and insights.
Manifest-driven transformations for derived metrics, summaries, and analysis payloads.
Start from the dictionary, inspect public schemas, then send a versioned event envelope to the Gateway.
The dictionary is the semantic layer of the framework. It defines what a telemetry channel means before any service stores, charts, or analyzes it: canonical name, unit, type, dimension, expected range, and domain context.
api.pitgun.io exposes the canonical dictionary for producers, processors, docs,
and downstream analysis tools. The API is intentionally separate from the game domain.
Real-time ingestion and post-mortem analysis use the same semantic base. A WebSocket event, a stored QuestDB row, and a replayed dataset should all resolve to the same canonical metric definitions before processing starts.
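As an illustration, a dictionary entry for an engine-speed channel might look like the JSON below. The field names and the sim.engine_rpm channel are assumptions for the sketch, not the authoritative api.pitgun.io response shape:

```json
{
  "name": "sim.engine_rpm",
  "unit": "rpm",
  "type": "float",
  "dimension": "angular_velocity",
  "range": { "min": 0, "max": 20000 },
  "domain": "powertrain"
}
```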
The pitgun-gateway service is the framework's ingress layer: an Axum-based HTTP server
that accepts runtime event envelopes, queues accepted events, writes telemetry to configured sinks, and
exposes Prometheus metrics for operational visibility.
Deploy the Gateway without other framework
components as a contract-aware ingestion endpoint. Connect any HTTP or WebSocket producer that
emits pitgun-envelope-v1.
- PITGUN_GATEWAY_BIND
- PITGUN_GATEWAY_DATA_DIR
- PITGUN_GATEWAY_ALLOW_NON_LOOPBACK
- PITGUN_GATEWAY_RUN_REGISTRY_URL
- pitgun-envelope-v1 (same schema as beacon)
By default, the Gateway binds to 127.0.0.1 only. To
expose externally, set PITGUN_GATEWAY_ALLOW_NON_LOOPBACK=1. The Gateway extracts
X-Forwarded-For and X-Real-IP headers for audit logging. Backpressure is
managed via a bounded ingestion queue (1024 slots)—overflow drops batches with a warning.
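The drop-on-overflow backpressure policy can be sketched with a bounded standard-library channel. This is a minimal stand-in for the Gateway's async queue (the queue size here is shrunk for illustration; the real service uses 1024 slots):

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Shrunk queue for illustration; the real Gateway uses 1024 slots.
const QUEUE_SLOTS: usize = 4;

/// Try to enqueue a batch; on overflow, drop it with a warning
/// instead of blocking the producer.
fn enqueue(tx: &SyncSender<String>, batch: String) -> bool {
    match tx.try_send(batch) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) => {
            eprintln!("warn: ingestion queue full, dropping batch");
            false
        }
        Err(TrySendError::Disconnected(_)) => false,
    }
}

fn main() {
    let (tx, _rx) = sync_channel(QUEUE_SLOTS);
    // Six batches arrive while nothing drains the queue: four fit, two drop.
    let accepted = (0..6).filter(|i| enqueue(&tx, format!("batch-{i}"))).count();
    println!("accepted {accepted} of 6 batches");
}
```

The real Gateway applies the same policy per batch rather than per event, so a slow sink degrades throughput predictably instead of growing memory without bound.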
pitgun-core is the processing layer: a Rust library for formula evaluation, pipeline
orchestration, type conversion, and aggregation. It is designed so analysis behavior can live in
manifests instead of being buried in application code.
pitgun_core::formula
AST parser for expressions (e.g., Power = Torque * RPM)
pitgun_core::pipeline
Multi-source frame merger with configurable channels
pitgun_core::converter
Type-safe parameter transformations
pitgun_core::segment
Lap/session aggregation logic
Import the pipeline or formula modules directly; no Gateway or network overhead is required.
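The idea behind formula evaluation can be sketched with a deliberately tiny evaluator. The real pitgun_core::formula module parses a full AST; this stand-in handles a single binary operator over named channels purely to show the shape of the problem:

```rust
use std::collections::HashMap;

/// Evaluate a single binary expression like "Torque * RPM" against
/// a map of channel values. A sketch only: the real module parses a
/// full expression AST rather than splitting on one operator.
fn eval(expr: &str, channels: &HashMap<&str, f64>) -> Option<f64> {
    let (op_pos, op) = expr
        .char_indices()
        .find(|&(_, c)| matches!(c, '*' | '+' | '-' | '/'))?;
    let lhs = *channels.get(expr[..op_pos].trim())?;
    let rhs = *channels.get(expr[op_pos + 1..].trim())?;
    Some(match op {
        '*' => lhs * rhs,
        '+' => lhs + rhs,
        '-' => lhs - rhs,
        _ => lhs / rhs,
    })
}

fn main() {
    let channels = HashMap::from([("Torque", 300.0), ("RPM", 120.0)]);
    // Derived channel: Power = Torque * RPM
    println!("Power = {:?}", eval("Torque * RPM", &channels));
}
```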
Define derived channels and filters in YAML without recompiling. The same manifest can be applied during live ingestion or post-mortem replay:
- channel_capacity — Buffer size (default: 4096)
- max_sources — Max concurrent sources (default: 16)
- enable_merging — Combine frames from the same timestamp
- merge_window — Time window for merging (default: 1ms)
- validate_parameters — Check against Registry
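A minimal sketch of such a manifest is shown below. The field layout is a hypothetical example keyed off the configuration options above; the authoritative shape is the published pipeline-manifest/v1 schema:

```yaml
# Hypothetical manifest sketch; consult /schemas/pipeline-manifest/v1
# for the authoritative structure.
pipeline:
  channel_capacity: 4096
  enable_merging: true
  merge_window: 1ms
derived:
  - name: Power
    formula: Torque * RPM
filters:
  - channel: Speed
    min: 0
```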
pitgun-contract defines the shared runtime types used across Pitgun components. Public JSON
Schemas expose those boundaries to non-Rust producers and consumers, so ingestion clients, processors,
and insight services can evolve without guessing payload shapes.
The canonical data structure produced by all sources:
- session_id: u64 — Unique session identifier
- sequence: u64 — Monotonic frame counter
- timestamp_us: i64 — Capture time (μs since epoch)
- samples: Vec<Sample> — Parameter values
- events: Vec<Event> — Discrete events
- lap_number, sector... — Motorsport context

A parameter reading with type-safe values:
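The listed fields can be mirrored as a Rust sketch. This is an illustration, not the authoritative pitgun-contract definitions: the internals of Sample, Event, and the value enum are assumptions beyond what the field list above states:

```rust
// Illustrative mirror of the canonical frame shape. Sample, Event,
// and Value internals are assumptions for the sketch.
#[derive(Debug)]
enum Value {
    Float(f64),
    Int(i64),
    Bool(bool),
}

#[derive(Debug)]
struct Sample {
    parameter_id: u32, // assumed field
    value: Value,
}

#[derive(Debug)]
struct Event {
    name: String,      // assumed field
    timestamp_us: i64,
}

#[derive(Debug)]
struct TelemetryFrame {
    session_id: u64,        // unique session identifier
    sequence: u64,          // monotonic frame counter
    timestamp_us: i64,      // capture time, microseconds since epoch
    samples: Vec<Sample>,   // parameter values
    events: Vec<Event>,     // discrete events
    lap_number: Option<u32>, // motorsport context (sector etc. omitted)
}

fn main() {
    let frame = TelemetryFrame {
        session_id: 1,
        sequence: 42,
        timestamp_us: 1_700_000_000_000_000,
        samples: vec![Sample { parameter_id: 7, value: Value::Float(301.5) }],
        events: vec![],
        lap_number: Some(3),
    };
    println!("{frame:?}");
}
```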
Canonical dictionary of parameter
definitions exposed through api.pitgun.io and mirrored in public schemas:
Versioned payloads for ingestion, processing, and insight exchange:
- pitgun-envelope-v1 — accepted runtime events
- metrics-dictionary/sim.v1 — canonical telemetry channels
- pipeline-manifest/v1 — processing pipeline definition
- insight-contract/v1 — metrics-to-insights exchange
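For orientation, an envelope payload might look roughly like the JSON below. Every field name here is an assumption for the sketch; the authoritative shape is published at /schemas/pitgun-envelope/v1:

```json
{
  "schema": "pitgun-envelope-v1",
  "session_id": 1,
  "sequence": 42,
  "timestamp_us": 1700000000000000,
  "samples": [{ "name": "sim.engine_rpm", "value": 11250.0 }],
  "events": []
}
```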
Pitgun publishes its public contracts as versioned JSON Schemas. The canonical URL format is
https://pitgun.io/schemas/<schema>/<version>; the server rewrites that stable
URL to the underlying .json file. These schemas are the public boundary between the
dictionary, producers, processors, and insight services.
/schemas/pitgun-envelope/v1
Runtime event envelope accepted by
pitgun-gateway.
/schemas/metrics-dictionary/sim.v1
Canonical sim.* telemetry
parameters for ingestion and insights.
/schemas/analysis-manifest/v2
Declarative analysis description for reports and downstream processing.
/schemas/pipeline-manifest/v1
Processing graph contract for filters, derivations, and sinks.
/schemas/bundle-manifest/v1
Package-level manifest for distributing schema, processing, or analysis bundles.
/schemas/insight-contract/v1
Canonical metrics-to-insights exchange contract for LLM or reporting services.
/schemas/bolt-manifest/v1
Lightweight component manifest for portable processing or insight modules.
/schemas/analysis-manifest/v1
Previous analysis manifest version kept available for compatibility.
Schema files are plain JSON Schema draft 2020-12 documents and can be consumed by standard validators in CI. The framework repository validates these public contracts before publishing.
pitgun-solver is an experimental compute layer. The design target is WebAssembly-based
scenario evaluation for simulation workloads, but it is not the core promise of the framework today.
The stable foundation remains the dictionary, schemas, gateway, and manifests.
- RiskAnalysisRequest — Input: base config, lap count, scenario count, risk factors
- RiskFactors — Tire degradation variance, rain probability, safety car chance
- SimulationResult — Output: average time, success probability, tire failure rate
- RaceStrategySolver — WASM entrypoint for solve_strategy()

Deploy the .wasm module to browsers. Each client becomes a volunteer compute node.
The Solver receives a JSON-serialized
RiskAnalysisRequest, runs N scenarios with randomized risk factors (rain, tire wear,
safety car), and returns aggregated statistics. Each scenario is CPU-bound—designed to run in a
dedicated Web Worker thread.
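The scenario loop can be sketched as a small Monte Carlo aggregation. The RNG, risk model, and numbers below are stand-ins; the real pitgun-solver types (RiskAnalysisRequest, SimulationResult) are not reproduced here:

```rust
/// Tiny LCG so the sketch needs no external crates; a stand-in
/// for whatever RNG the real solver uses.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Take the high 53 bits to get a value in [0, 1).
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Run `scenarios` randomized trials and return
/// (average lap time, probability of a dry run).
/// The 8-second rain penalty is an assumed model parameter.
fn simulate(scenarios: u32, base_lap_s: f64, rain_prob: f64, seed: u64) -> (f64, f64) {
    let mut rng = Lcg(seed);
    let (mut total, mut dry) = (0.0, 0u32);
    for _ in 0..scenarios {
        let rain = rng.next_f64() < rain_prob;
        let penalty = if rain { 8.0 } else { 0.0 };
        total += base_lap_s + penalty + rng.next_f64(); // jitter per lap
        if !rain {
            dry += 1;
        }
    }
    (total / scenarios as f64, dry as f64 / scenarios as f64)
}

fn main() {
    let (avg, dry_prob) = simulate(10_000, 90.0, 0.2, 7);
    println!("avg lap {avg:.2}s, dry-run probability {dry_prob:.2}");
}
```

Because each trial is independent and CPU-bound, this shape parallelizes naturally across Web Worker threads, which is the deployment target described above.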
Pitgun Game is the live reference workload for the framework. Browser clients generate telemetry,
pitgun-gateway ingests event envelopes, PostgreSQL and QuestDB store operational and
analytical data, and the performance API exposes the resulting runs and reports.
Deploy the framework services with Docker Compose, or run individual services with Cargo. The Gateway can run as a small standalone ingress service, while processing, storage, and observability can be added as the use case matures.
- RUST_LOG=info,pitgun_gateway=debug — Tracing filter
- PITGUN_GATEWAY_BIND=0.0.0.0:8080 — Bind address
- PITGUN_GATEWAY_DATA_DIR=./data — Storage path
- PITGUN_GATEWAY_ALLOW_NON_LOOPBACK=1 — External access
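A standalone Gateway deployment might be wired up as in the Compose fragment below. The service name, image tag, and container paths are assumptions; only the environment variables come from the list above:

```yaml
# Hypothetical compose fragment; image name and paths are assumptions.
services:
  gateway:
    image: pitgun/gateway:latest
    environment:
      RUST_LOG: info,pitgun_gateway=debug
      PITGUN_GATEWAY_BIND: 0.0.0.0:8080
      PITGUN_GATEWAY_DATA_DIR: /data
      PITGUN_GATEWAY_ALLOW_NON_LOOPBACK: "1"
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
```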
pitgun-replay is a CLI tool for injecting historical CSV data into the Gateway, enabling
testing and development without live data sources.
- --target <addr> — Gateway address
- --input <file> — CSV file to replay
- --rate <hz> — Playback speed (default: realtime)
- --loop — Loop continuously

CSV files should have a timestamp_us column and parameter columns matching the Registry. The replay tool auto-detects columns and maps them to parameter IDs via name lookup.
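The column auto-detection step can be sketched as a header-to-registry lookup. The registry here is a stand-in HashMap; the real tool resolves names against the canonical dictionary:

```rust
use std::collections::HashMap;

/// Map each CSV header column to a parameter ID by name, returning
/// None for columns the registry does not know (e.g. timestamp_us,
/// which is handled separately as the time axis).
fn map_columns(header: &str, registry: &HashMap<&str, u32>) -> Vec<Option<u32>> {
    header
        .split(',')
        .map(|col| registry.get(col.trim()).copied())
        .collect()
}

fn main() {
    // Stand-in registry; real parameter IDs come from the dictionary.
    let registry = HashMap::from([("speed", 1), ("rpm", 2)]);
    let mapping = map_columns("timestamp_us, speed, rpm, unknown", &registry);
    println!("{mapping:?}"); // timestamp_us and unknown map to None
}
```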
Visual tooling for pitgun-core pipeline manifests.