Manifest-Driven Telemetry

Pitgun docs, built for data contracts.

Technical reference for the Pitgun framework: canonical metric definitions, public schemas, event envelopes, processing manifests, and runtime telemetry services.

Validate the Contracts
$ curl https://api.pitgun.io
$ curl https://pitgun.io/schemas/pitgun-envelope/v1
$ curl https://pitgun.io/schemas/pipeline-manifest/v1

Start from canonical definitions, validate event boundaries, then run the same manifests against real-time streams or historical data.

Documentation Structure

Public contracts and Rust services powering Pitgun telemetry. The dictionary defines meaning, schemas define boundaries, and manifests define reproducible processing.

Introduction

From Raw Telemetry to Contracted Runtime Events

Pitgun began as an F1 telemetry tool and has grown into a manifest-driven telemetry processing framework. The game remains a reference workload, but the framework is designed for any producer that can emit versioned events, high-frequency samples, or historical datasets.

The Runtime Philosophy
  • Dictionary as Source of Truth: Define canonical names, units, types, and dimensions once.
  • Contracts as API: Publish stable JSON Schemas for runtime boundaries.
  • Gateway as Ingress: Accept pitgun-envelope-v1 events over HTTP or WebSocket.
  • Core as Processor: Execute manifest-driven transformations and derived metrics.
  • Insights as Output: Feed reporting, dashboards, datasets, or LLM services from typed analysis contracts.
Why This Exists
  • Telemetry producers need stable semantics before they need dashboards.
  • Schema-first boundaries make historical data usable for analysis and dataset creation.
  • Processing manifests keep transformations reproducible, reviewable, and portable.
  • Domain-agnostic: motorsport is the reference app, but the contracts can serve IoT, energy, robotics, or simulation.
Dictionary

Canonical metric definitions exposed through api.pitgun.io.

Contracts

Public JSON Schemas for envelopes, metric dictionaries, manifests, bundles, and insights.

Processing

Manifest-driven transformations for derived metrics, summaries, and analysis payloads.

Quick Start
Resolve the Contracts

Start from the dictionary, inspect public schemas, then send a versioned event envelope to the Gateway.

$ curl https://api.pitgun.io
$ curl https://pitgun.io/schemas/metrics-dictionary/sim.v1
$ cargo run -p pitgun-gateway --release

Canonical Dictionary

Live

The dictionary is the semantic layer of the framework. It defines what a telemetry channel means before any service stores, charts, or analyzes it: canonical name, unit, type, dimension, expected range, and domain context.

Dictionary API

api.pitgun.io exposes the canonical dictionary for producers, processors, docs, and downstream analysis tools. The API is intentionally separate from the game domain.

$ curl https://api.pitgun.io
$ curl https://pitgun.io/schemas/metrics-dictionary/sim.v1
What It Prevents
  • Unit drift: milliseconds, seconds, and microseconds cannot be mixed silently.
  • Name drift: producers do not invent incompatible channel names.
  • Analysis drift: manifests reference stable metrics instead of ad-hoc fields.
  • Dataset decay: historical data remains interpretable after the product evolves.
Real-time and Post-mortem Use

Real-time ingestion and post-mortem analysis use the same semantic base. A WebSocket event, a stored QuestDB row, and a replayed dataset should all resolve to the same canonical metric definitions before processing starts.
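That resolution step can be sketched in a few lines of Rust. The struct and alias names here are illustrative assumptions, not the published metrics-dictionary/sim.v1 shape:

```rust
use std::collections::HashMap;

// Hypothetical shape of a canonical dictionary entry; field names are
// illustrative, not the published metrics-dictionary schema.
#[derive(Debug, Clone, PartialEq)]
struct MetricDef {
    canonical_name: &'static str,
    unit: &'static str,
    data_type: &'static str,
    min: f64,
    max: f64,
}

// Resolve producer-side channel names (aliases) to one canonical definition,
// so a live WebSocket channel and a legacy CSV column agree before processing.
fn build_dictionary() -> HashMap<&'static str, MetricDef> {
    let rpm = MetricDef {
        canonical_name: "Engine_RPM",
        unit: "rpm",
        data_type: "u16",
        min: 0.0,
        max: 15000.0,
    };
    let mut dict = HashMap::new();
    // Both the live channel name and a historical column name map
    // to the same canonical metric.
    dict.insert("Engine_RPM", rpm.clone());
    dict.insert("nEngine", rpm);
    dict
}
```

Any source that resolves through the same table ends up on the same semantic base, which is the property the dictionary exists to guarantee.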

Gateway

Stable

The pitgun-gateway service is the framework's ingress layer: an Axum-based HTTP server that accepts runtime event envelopes, queues accepted events, writes telemetry to configured sinks, and exposes Prometheus metrics for operational visibility.

Standalone Usage

Deploy the Gateway without other framework components as a contract-aware ingestion endpoint. Connect any HTTP or WebSocket producer that emits pitgun-envelope-v1.

$ PITGUN_GATEWAY_BIND=0.0.0.0:8080 \
  cargo run -p pitgun-gateway --release
Configuration
  • PITGUN_GATEWAY_BIND
    Address to bind (default: 127.0.0.1:8080)
  • PITGUN_GATEWAY_DATA_DIR
    Storage path for telemetry
  • PITGUN_GATEWAY_ALLOW_NON_LOOPBACK
    Enable external binds (set to 1)
  • PITGUN_GATEWAY_RUN_REGISTRY_URL
    Optional downstream run registry mirror
Endpoints
  • GET /health Liveness probe (returns 200 OK)
  • POST /beacon Batch ingest JSON event envelopes
  • GET /ws Upgrade to WebSocket for real-time event ingestion
  • GET /metrics Prometheus counters and latency summaries
Beacon Payload (JSON)
// POST /beacon or WebSocket text frame
{
  "schema_version": "pitgun-envelope-v1",
  "event_id": "b88c0d71-14c9-4a26-a290-a7f38ee17bbc",
  "ts": "2026-04-11T12:43:44.279Z",
  "player_id": "7d16dde5-88a6-4b47-8cd8-c711601cd61f",
  "session_id": "53702b3e-3e56-41a2-8834-9169ee5e991e",
  "event_type": "telemetry.sample_batch",
  "payload": { "frames": [...] }
}
WebSocket Messages
  • Text Mode
    JSON pitgun-envelope-v1 (same schema as beacon)
  • Binary Mode
    Reserved for future binary ingestion. Current public contract is JSON.
  • Ping / Pong
    Keep-alive heartbeat (auto-handled)
Security

By default, the Gateway binds to 127.0.0.1 only. To expose externally, set PITGUN_GATEWAY_ALLOW_NON_LOOPBACK=1. The Gateway extracts X-Forwarded-For and X-Real-IP headers for audit logging. Backpressure is managed via a bounded ingestion queue (1024 slots)—overflow drops batches with a warning.
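The drop-on-overflow policy can be illustrated with a bounded std channel. This is a sketch only; the Gateway's actual queue is its own async implementation, and the capacity is a parameter here rather than the service's 1024:

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};

// Bounded ingestion queue: sync_channel refuses sends once full.
fn bounded_queue(capacity: usize) -> (SyncSender<String>, Receiver<String>) {
    sync_channel(capacity)
}

// Try to enqueue a batch; on overflow, drop it with a warning instead of
// blocking the ingest path. Returns whether the batch was accepted.
fn enqueue(tx: &SyncSender<String>, batch: String) -> bool {
    match tx.try_send(batch) {
        Ok(()) => true,
        Err(TrySendError::Full(batch)) => {
            eprintln!("warn: ingestion queue full, dropped {batch}");
            false
        }
        Err(TrySendError::Disconnected(_)) => false,
    }
}
```

The point of the design is that a slow consumer degrades by shedding load visibly rather than by stalling producers.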

Manifest Processing

Stable

pitgun-core is the processing layer: a Rust library for formula evaluation, pipeline orchestration, type conversion, and aggregation. It is designed so analysis behavior can live in manifests instead of being buried in application code.

Library Modules
  • pitgun_core::formula AST parser for expressions (e.g., Power = Torque * RPM)
  • pitgun_core::pipeline Multi-source frame merger with configurable channels
  • pitgun_core::converter Type-safe parameter transformations
  • pitgun_core::segment Lap/session aggregation logic
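As a rough illustration of what the formula module does, here is a deliberately tiny evaluator for left-to-right `*` and `/` chains over named channels. The real pitgun_core::formula parser builds a full AST; this sketch skips precedence and most operators:

```rust
use std::collections::HashMap;

// Evaluate a whitespace-separated expression like
// "Torque_Nm * Engine_RPM / 9549.0" against a frame of channel values.
// Left-to-right only; `*` and `/` only. Not the real AST parser.
fn eval(expr: &str, channels: &HashMap<&str, f64>) -> Option<f64> {
    let mut tokens = expr.split_whitespace();
    let mut acc = resolve(tokens.next()?, channels)?;
    while let Some(op) = tokens.next() {
        let rhs = resolve(tokens.next()?, channels)?;
        acc = match op {
            "*" => acc * rhs,
            "/" => acc / rhs,
            _ => return None, // unsupported operator in this sketch
        };
    }
    Some(acc)
}

// A token is either a numeric literal or a canonical channel name.
fn resolve(tok: &str, channels: &HashMap<&str, f64>) -> Option<f64> {
    tok.parse::<f64>().ok().or_else(|| channels.get(tok).copied())
}
```

A manifest entry such as Power_kW would go through the full parser, but the shape is the same: stable metric names in, a derived channel out.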
Embed in Your App
# Cargo.toml
[dependencies]
pitgun-core = "0.22"

Then import the pipeline or formula modules directly—no Gateway or network overhead required.

Manifest-Driven Pipelines

Define derived channels and filters in YAML without recompiling. The same manifest can be applied during live ingestion or post-mortem replay:

# pipeline.yaml
version: v1
pipeline:
  - type: formula
    derived_channels:
      - name: "Power_kW"
        expr: "Torque_Nm * Engine_RPM / 9549.0"
  - type: filter
    whitelist: ["Speed", "Power_kW", "LapTime"]
PipelineConfig Options
  • channel_capacity — Buffer size (default: 4096)
  • max_sources — Max concurrent sources (default: 16)
  • enable_merging — Combine frames from the same timestamp
  • merge_window — Time window for merging (default: 1ms)
  • validate_parameters — Check against Registry
Rust API Example
// Multi-source pipeline
let config = PipelineConfig::new()
  .with_channel_capacity(8192)
  .with_merging(Duration::from_millis(2))
  .with_validation();

let mut pipeline = TelemetryPipeline::new(config);
pipeline.add_source(udp_source);
pipeline.add_source(ws_source);

Contract Types

Stable

pitgun-contract defines the shared runtime types used across Pitgun components. Public JSON Schemas expose those boundaries to non-Rust producers and consumers, so ingestion clients, processors, and insight services can evolve without guessing payload shapes.

TelemetryFrame

The canonical data structure produced by all sources:

  • session_id: u64 — Unique session identifier
  • sequence: u64 — Monotonic frame counter
  • timestamp_us: i64 — Capture time (μs since epoch)
  • samples: Vec<Sample> — Parameter values
  • events: Vec<Event> — Discrete events
  • lap_number, sector... — Motorsport context
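The fields above can be sketched as a plain Rust struct. This is an illustrative std-only shape, not the actual pitgun-contract definition; Sample is flattened to a bare f64 here, whereas the contract's Sample carries typed SampleValue variants:

```rust
// Illustrative frame shape mirroring the contract description.
#[derive(Debug, Default)]
struct TelemetryFrame {
    session_id: u64,         // unique session identifier
    sequence: u64,           // monotonic frame counter
    timestamp_us: i64,       // capture time, microseconds since epoch
    samples: Vec<Sample>,    // parameter values (simplified here)
    events: Vec<Event>,      // discrete events
    lap_number: Option<u16>, // motorsport context (optionality assumed)
}

#[derive(Debug)]
struct Sample {
    parameter_id: u32,
    value: f64, // the real contract uses typed SampleValue variants
}

#[derive(Debug, Default)]
struct Event {
    name: String,
}

// Produce frames with a monotonic sequence counter per session.
fn next_frame(session_id: u64, seq: &mut u64, timestamp_us: i64) -> TelemetryFrame {
    let frame = TelemetryFrame {
        session_id,
        sequence: *seq,
        timestamp_us,
        ..Default::default()
    };
    *seq += 1;
    frame
}
```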
Sample & SampleValue

A parameter reading with type-safe values:

let sample = Sample::new(
  1, // parameter_id
  SampleValue::U16(8500), // RPM
  SignalQuality::Good
);

// SampleValue variants:
U8, U16, U32, I8... F32, F64, Bool
ParameterRegistry

Canonical dictionary of parameter definitions exposed through api.pitgun.io and mirrored in public schemas:

  • ID → Name/Unit/Type: Lookup by parameter ID
  • Range Validation: Min/max bounds checking
  • Conversions: Raw → engineering units
  • Access Levels: Public, Internal, Restricted
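The lookup, validation, and conversion steps above can be sketched as follows. The types are illustrative, and the linear scale/offset conversion is an assumption about how raw-to-engineering conversion might look, not the ParameterRegistry's actual model:

```rust
use std::collections::HashMap;

// Illustrative registry entry; the real ParameterRegistry is loaded
// from YAML and has its own types.
struct ParameterDef {
    name: &'static str,
    unit: &'static str,
    min: f64,
    max: f64,
    scale: f64,  // raw -> engineering units multiplier (assumed linear)
    offset: f64, // additive offset after scaling (assumed)
}

struct Registry(HashMap<u32, ParameterDef>);

impl Registry {
    // Look up by parameter ID, convert a raw reading to engineering
    // units, and validate it against the declared range.
    fn convert(&self, id: u32, raw: f64) -> Option<f64> {
        let def = self.0.get(&id)?;
        let eng = raw * def.scale + def.offset;
        (def.min..=def.max).contains(&eng).then_some(eng)
    }
}
```

An out-of-range or unknown reading yields None rather than a silently wrong value, which is the behavior the range-validation bullet describes.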
Runtime Contracts

Versioned payloads for ingestion, processing, and insight exchange:

  • pitgun-envelope-v1 — accepted runtime events
  • metrics-dictionary/sim.v1 — canonical telemetry channels
  • pipeline-manifest/v1 — processing pipeline definition
  • insight-contract/v1 — metrics to insights exchange
Registry YAML Format
# registries/f1_generic.yaml
version: "1.0"
name: "F1 Generic Registry"
parameters:
  - id: 1
    name: "Engine_RPM"
    unit: "rpm"
    data_type: u16
    range: { min: 0, max: 15000 }
  - id: 2
    name: "Throttle_Position"
    unit: "%"
    data_type: f32
    range: { min: 0.0, max: 100.0 }

Public Schemas

Live

Pitgun publishes its public contracts as versioned JSON Schemas. The canonical URL format is https://pitgun.io/schemas/<schema>/<version>; the server rewrites that stable URL to the underlying .json file. These schemas are the public boundary between the dictionary, producers, processors, and insight services.

Validation

Schema files are plain JSON Schema draft 2020-12 documents and can be consumed by standard validators in CI. The framework repository validates these public contracts before publishing.

Experimental Solver

Experimental

pitgun-solver is an experimental compute layer. The design target is WebAssembly-based scenario evaluation for simulation workloads, but it is not the core promise of the framework today. The stable foundation remains the dictionary, schemas, gateway, and manifests.

Key Types
  • RiskAnalysisRequest — Input: base config, lap count, scenario count, risk factors
  • RiskFactors — Tire degradation variance, rain probability, safety car chance
  • SimulationResult — Output: average time, success probability, tire failure rate
  • RaceStrategySolver — WASM entrypoint for solve_strategy()
WASM Compilation
$ wasm-pack build crates/pitgun-solver \
  --target web --out-dir pkg

Deploy the .wasm module to browsers. Each client becomes a volunteer compute node.

Monte Carlo Flow

The Solver receives a JSON-serialized RiskAnalysisRequest, runs N scenarios with randomized risk factors (rain, tire wear, safety car), and returns aggregated statistics. Each scenario is CPU-bound—designed to run in a dedicated Web Worker thread.
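That flow can be approximated with a std-only sketch: a deterministic toy RNG and an invented lap-time model, not the solver's actual risk model or random source:

```rust
// Tiny deterministic LCG so the sketch stays std-only. Not the
// solver's RNG; Numerical Recipes constants, uniform in [0, 1).
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Two of the documented risk factors, with an invented effect model.
struct Risk {
    rain_probability_per_lap: f64,
    tire_degradation_variance: f64,
}

// Run N scenarios and return the aggregated average total time.
fn solve(base_lap_s: f64, laps: u32, scenarios: u32, risk: &Risk) -> f64 {
    let mut rng = Lcg(42);
    let mut total = 0.0;
    for _ in 0..scenarios {
        let mut time = 0.0;
        for _ in 0..laps {
            let mut lap = base_lap_s;
            // Randomized tire wear jitter around the base lap time.
            lap += base_lap_s * risk.tire_degradation_variance * (rng.next_f64() - 0.5);
            // Rain adds a flat penalty on affected laps (assumed model).
            if rng.next_f64() < risk.rain_probability_per_lap {
                lap += 10.0;
            }
            time += lap;
        }
        total += time;
    }
    total / scenarios as f64
}
```

Each scenario loop is pure CPU work with no I/O, which is why the real solver targets a dedicated Web Worker thread.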

RiskAnalysisRequest
{
  "base_config": { /* CanonicalConfigV1 */ },
  "laps": 58,
  "scenarios_count": 10000,
  "risk_factors": {
    "tire_degradation_variance": 0.15,
    "rain_probability_per_lap": 0.02,
    "safety_car_chance": 0.05
  }
}
SimulationResult
{
  "average_total_time": 5220.45,
  "success_probability": 0.85,
  "tire_failure_rate": 0.02,
  "strategies_histogram": [
    /* distribution buckets */
  ]
}
JavaScript Integration
// In a Web Worker
import init, { RaceStrategySolver } from './pitgun_solver.js';

await init();
const solver = new RaceStrategySolver();
const result = solver.solve_strategy(JSON.stringify(request));
postMessage(JSON.parse(result));

Experimental Authority

Design

pitgun-authority is a design track for governance: signed manifests, rate limits, and proof that outputs were derived from approved inputs. It is intentionally presented as roadmap work, not as a required runtime dependency.

Design Intent
  • Manifest Signing: Cryptographically sign pipeline.yaml and tuning policies.
  • Tuning Limits: Enforce parameter bounds (e.g., max RPM, min tire pressure).
  • Auditability: Prove that result A derived from signed Config B.
  • Rate Limiting: Capability-based access control per client.
Policy Example
# policies/tuning.v1.yaml
version: v1
limits:
  max_engine_rpm: 15000
  min_tire_pressure_kpa: 140
  max_ers_deployment_kj: 4000
signed_by: authority.pitgun.io
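The bounds-checking half of this design can be sketched today; signing and verification are roadmap work and are omitted. Field names mirror the policy example above, but the request type and function are hypothetical:

```rust
// Limits mirror policies/tuning.v1.yaml; types are illustrative.
struct TuningLimits {
    max_engine_rpm: u32,
    min_tire_pressure_kpa: u32,
    max_ers_deployment_kj: u32,
}

// Hypothetical client tuning request to be checked against the policy.
struct TuningRequest {
    engine_rpm: u32,
    tire_pressure_kpa: u32,
    ers_deployment_kj: u32,
}

// Reject any request that exceeds the signed policy's bounds.
fn enforce(limits: &TuningLimits, req: &TuningRequest) -> Result<(), &'static str> {
    if req.engine_rpm > limits.max_engine_rpm {
        return Err("engine_rpm above policy limit");
    }
    if req.tire_pressure_kpa < limits.min_tire_pressure_kpa {
        return Err("tire_pressure_kpa below policy limit");
    }
    if req.ers_deployment_kj > limits.max_ers_deployment_kj {
        return Err("ers_deployment_kj above policy limit");
    }
    Ok(())
}
```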

Reference Workload

Reference Impl

Pitgun Game is the live reference workload for the framework. Browser clients generate telemetry, pitgun-gateway ingests event envelopes, PostgreSQL and QuestDB store operational and analytical data, and the performance API exposes the resulting runs and reports.

What the Game Proves
  • Live ingestion: WebSocket and beacon events from real browser sessions.
  • Contract discipline: telemetry crosses service boundaries through versioned schemas.
  • Analytical storage: PostgreSQL for runs and QuestDB for high-frequency telemetry.
  • Operations: staging and production deployments with observability.

Deployment

Ops

Deploy the framework services with Docker Compose, or run individual services with Cargo. The Gateway can run as a small standalone ingress service, while processing, storage, and observability can be added as the use case matures.

Docker Compose
$ docker-compose up -d pitgun-gateway
$ docker-compose logs -f

# Scale solver nodes
$ docker-compose up -d --scale solver=4
Cargo (Development)
# Terminal 1: Gateway
$ cargo run -p pitgun-gateway --release

# Terminal 2: Replay data
$ cargo run -p pitgun-replay -- \
  --target 127.0.0.1:8080 \
  --input datasets/session.csv
Environment Variables
  • RUST_LOG=info,pitgun_gateway=debug — Tracing filter
  • PITGUN_GATEWAY_BIND=0.0.0.0:8080 — Bind address
  • PITGUN_GATEWAY_DATA_DIR=./data — Storage path
  • PITGUN_GATEWAY_ALLOW_NON_LOOPBACK=1 — External access

Replay Tool

Stable

pitgun-replay is a CLI tool for injecting historical CSV data into the Gateway, enabling testing and development without live data sources.

Basic Usage
$ cargo run -p pitgun-replay -- \
  --target 127.0.0.1:8080 \
  --input datasets/telemetry/nEngine.csv
CLI Options
  • --target <addr> — Gateway address
  • --input <file> — CSV file to replay
  • --rate <hz> — Playback speed (default: realtime)
  • --loop — Loop continuously
CSV Format

CSV files should have a timestamp_us column and parameter columns matching the Registry. The replay tool auto-detects columns and maps them to parameter IDs via name lookup.
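The header mapping can be sketched as below. This is illustrative; pitgun-replay's actual lookup logic and types may differ:

```rust
use std::collections::HashMap;

// Map CSV header columns to parameter IDs by canonical name lookup.
// The timestamp_us column is handled separately; unknown columns also
// map to None (a real tool would distinguish and warn on them).
fn map_columns(header: &str, registry: &HashMap<&str, u32>) -> Vec<Option<u32>> {
    header
        .split(',')
        .map(|col| {
            let col = col.trim();
            if col == "timestamp_us" {
                None
            } else {
                registry.get(col).copied()
            }
        })
        .collect()
}
```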

Roadmap

Design