
Developer Handbook

Build discipline for .matr authoring and FORMA operation. Covers program structure conventions, artifact versioning, simulation gates, policy review workflows, and signal interpretation standards.

Program structure conventions

.matr programs follow a consistent structure that separates identity, substrate requirements, constraints, and logic. Adhering to these conventions ensures that programs are readable, comparable, and compatible with the toolchain.

Naming

Use lowercase hyphenated names for programs. Include a version suffix in the meta block. Names should describe the computational intent, not the implementation detail. Example: threshold-gate-v1, not mol-check-0.8.

File organization

One program per .matr file. Group related programs in directories by domain. Keep simulation fixtures alongside source files. Compiled artifacts go in a build/ directory.
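
Under these conventions, a project tree might look like the following (the directory, file, and extension names are illustrative, not mandated by the toolchain):

```text
reactions/                      # one domain per directory
  threshold-gate-v1.matr        # one program per file
  threshold-gate-v1.sim.json    # simulation fixture kept alongside source
  mixing-gate-v2.matr
  mixing-gate-v2.sim.json
build/                          # compiled artifacts only, never source
  threshold-gate-v1.mlb
```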

Constraint declaration

Always declare consequence tier, cost ceiling, and drift tolerance explicitly. Never rely on defaults. Explicit constraints make policy evaluation predictable and simplify code review.
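
This handbook does not define the concrete .matr grammar, so the sketch below uses hypothetical keyword names purely to illustrate declaring all three constraints explicitly rather than relying on defaults:

```text
constraints {
  consequence_tier: T1      # hypothetical keywords; never omit any of the three
  cost_ceiling:     40.00   # denominated in your billing unit
  drift_tolerance:  0.05    # maximum acceptable simulation/physical gap
}
```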

Composability

Favor small, single-purpose programs that can be composed into larger workflows. Each program should be independently testable and simulatable. Composition happens at the orchestration layer, not inside .matr source.
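
The composition rule can be sketched in Python at the orchestration layer: each program is an independently testable runner that maps an input signal to an output signal, and a pipeline is plain function composition. The runner shape is an assumption for illustration, not a toolchain API.

```python
def compose(*runners):
    """Compose single-purpose program runners into a pipeline.

    Each runner maps an input signal to an output signal. Composition
    lives here, at the orchestration layer, never inside .matr source.
    """
    def pipeline(signal):
        for run in runners:
            signal = run(signal)
        return signal
    return pipeline
```

Because each runner is a plain function, each stage can be simulated and tested in isolation before being composed.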

Versioning and lineage

Every change to a Matter program produces a new Molebyte version. There is no in-place modification. This creates an immutable lineage chain where each version can be independently referenced, compared, and audited.

  • Patch (0.0.x) -- Constraint adjustments, metadata corrections. No logic changes.
  • Minor (0.x.0) -- Gate logic changes, new state definitions, substrate requirement changes.
  • Major (x.0.0) -- Breaking changes to output format, new substrate types, incompatible constraint changes.
  • Lineage reference -- Always declare lineage to the previous version. Enables cross-version signal comparison.
  • Never reuse versions -- A version string, once compiled, is permanently bound to that artifact hash.
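
The bump rules above can be encoded as a small helper. The change-description flags below are illustrative, not an actual toolchain schema:

```python
def required_bump(change):
    """Map a change description (dict of boolean flags) to the
    smallest version bump the rules above allow."""
    if change.get("breaks_output_format") or change.get("new_substrate_type") \
            or change.get("incompatible_constraints"):
        return "major"
    if change.get("gate_logic") or change.get("new_states") \
            or change.get("substrate_requirements"):
        return "minor"
    return "patch"  # constraint adjustments, metadata corrections

def bump(version, level):
    """Apply a bump to a 'major.minor.patch' version string."""
    major, minor, patch = (int(p) for p in version.split("."))
    if level == "major":
        return f"{major + 1}.0.0"
    if level == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```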

Simulation gates

Simulation is not optional in production workflows. Every Molebyte must have simulation evidence before submission for physical execution. Teams should establish simulation gates at two points:

  1. Pre-review gate -- Run simulation before requesting code review. Attach evidence to the review request so reviewers can evaluate behavioral correctness alongside source.
  2. Pre-submission gate -- Run simulation again after any change, even constraint-only changes. The Molebyte that enters Vault must have current simulation evidence from the exact compiled artifact.
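
A minimal sketch of the pre-submission gate, assuming simulation evidence records the hash of the artifact it was produced from (the evidence field name is an assumption):

```python
import hashlib

def evidence_is_current(artifact_bytes, evidence):
    """Pre-submission gate: simulation evidence must reference the
    exact compiled artifact entering Vault, not an earlier build."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return evidence.get("artifact_hash") == digest
```

Any change that produces a new artifact, even a constraint-only change, changes the hash and forces a fresh simulation run.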

Simulation evidence ages. If substrate models are updated, previously simulated Molebytes should be re-simulated to confirm that predictions remain valid under the new model.
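
A corresponding staleness check might compare the substrate model version recorded in the evidence against the currently deployed model (field names are illustrative):

```python
def needs_resimulation(evidence, current_model_version):
    """Evidence ages: if the substrate model it was produced under has
    been superseded, predictions must be re-confirmed."""
    return evidence["substrate_model_version"] != current_model_version
```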

Policy-aware execution workflow

Developers should understand the policy context their programs will execute under before writing .matr source. This avoids authoring programs that will be denied at submission time.

  1. Check active policies -- Query the policy API for your project's constraints before authoring.
  2. Pre-evaluate -- Use the policy evaluation endpoint to test your Molebyte against active policies before submission.
  3. Handle denials -- Policy denials include structured remediation suggestions. Follow them rather than guessing at fixes.
  4. Plan for approval latency -- T2+ runs require human review. Factor approval time into execution schedules.
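
The pre-evaluation step can be approximated locally with a pure function; the policy and Molebyte field names below are assumptions for illustration, and the real policy evaluation endpoint remains the source of truth:

```python
def pre_evaluate(molebyte, policies):
    """Return (allowed, remediations) for a Molebyte against active
    policies. Tiers are modeled as integers for simplicity."""
    remediations = []
    for policy in policies:
        if molebyte["consequence_tier"] > policy.get("max_tier", float("inf")):
            remediations.append(
                f"reduce consequence tier to T{policy['max_tier']} or request review")
        if molebyte["cost_ceiling"] > policy.get("max_cost", float("inf")):
            remediations.append(
                f"lower cost ceiling below {policy['max_cost']}")
    return (not remediations, remediations)
```

Like real policy denials, the sketch returns structured remediation suggestions rather than a bare rejection.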

Signal interpretation

After a run completes, the signal is the primary source of truth. Developers should review signals with the following checklist:

  • Compare output distribution against the simulation baseline. A gap between simulation prediction and physical measurement is expected, but it should fall within the declared drift tolerance.
  • Check confidence bounds. If confidence dropped below the declared threshold, investigate substrate conditions and consider re-running.
  • Review economics. If actual cost exceeded the estimate by more than 20%, flag for review before subsequent runs.
  • Verify state deltas for stateful programs. Unexpected state transitions indicate logic or substrate issues.
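
The checklist can be mechanized as a review helper. The flat signal shape and field names are assumptions for illustration:

```python
def review_signal(signal, declared):
    """Apply the post-run checklist; returns a list of flags (empty = clean)."""
    flags = []
    # Output distribution vs. simulation baseline, within drift tolerance.
    if abs(signal["output_mean"] - signal["sim_baseline_mean"]) > declared["drift_tolerance"]:
        flags.append("drift beyond declared tolerance")
    # Confidence bounds.
    if signal["confidence"] < declared["confidence_threshold"]:
        flags.append("confidence below threshold")
    # Economics: flag overruns above 20% of the estimate.
    if signal["actual_cost"] > 1.20 * signal["estimated_cost"]:
        flags.append("cost overrun above 20%")
    # State deltas for stateful programs.
    if signal.get("state_delta") != declared.get("expected_state_delta"):
        flags.append("unexpected state transition")
    return flags
```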

Drift escalation criteria

Drift alerts should be escalated when any of the following conditions are met:

  • Output distribution mean shifts by more than 2 standard deviations from the baseline.
  • Confidence drops below the declared threshold for two consecutive runs.
  • Cost exceeds the declared ceiling on any single run.
  • State transitions differ from the expected sequence for stateful programs.
  • Substrate conditions are flagged as degraded by pool monitoring.
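
The criteria above translate directly into an escalation predicate (field names are illustrative; `prev_run` supplies the previous run for the two-consecutive-runs confidence rule):

```python
def should_escalate(run, prev_run, baseline):
    """Return True when any drift escalation criterion is met."""
    # Mean shift beyond two standard deviations of the baseline.
    if abs(run["output_mean"] - baseline["mean"]) > 2 * baseline["std"]:
        return True
    # Confidence below threshold for two consecutive runs.
    if (run["confidence"] < run["declared_confidence"]
            and prev_run is not None
            and prev_run["confidence"] < prev_run["declared_confidence"]):
        return True
    # Cost ceiling breached on any single run.
    if run["cost"] > run["declared_cost_ceiling"]:
        return True
    # Unexpected state sequence for stateful programs.
    if run.get("state_sequence") != run.get("expected_state_sequence"):
        return True
    # Substrate flagged as degraded by pool monitoring.
    if run.get("substrate_degraded", False):
        return True
    return False
```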