Every workload follows the same pipeline. There are no shortcuts, no backdoors, and no ungoverned execution paths. Each stage produces artifacts that feed the next, creating an unbroken chain from intent to outcome.
01
Author in .matr
.matr is the authoring language for Matter: programmable logic designed to execute on Material substrate. Programs are small, composable, and testable. You write logic the way you write infrastructure: explicit inputs, explicit outputs, explicit constraints.
Authors can work directly in the console editor or through agent-assisted generation via BERNIE. Every .matr program carries its own constraint declarations — there is no separate config layer. What you write is what gets compiled.
Output: A validated .matr program ready for compilation.
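Since .matr syntax is not shown here, a minimal Python model can illustrate what "explicit inputs, explicit outputs, explicit constraints" means in practice. All field names below (`inputs`, `outputs`, `constraints`) are illustrative assumptions, not the actual .matr grammar:

```python
from dataclasses import dataclass

# Hypothetical model of a .matr program's explicit interface.
# Field names are assumptions for illustration, not real .matr syntax.
@dataclass
class MatrProgram:
    name: str
    inputs: dict       # input name -> type
    outputs: dict      # output name -> type
    constraints: dict  # constraint declarations carried by the program itself

    def is_valid(self) -> bool:
        # No separate config layer: a program must declare its own
        # outputs and constraints to be compilable.
        return bool(self.outputs) and bool(self.constraints)

prog = MatrProgram(
    name="fold_check",
    inputs={"sequence": "str"},
    outputs={"stability": "float"},
    constraints={"max_energy_mj": 10, "max_duration_s": 300},
)
print(prog.is_valid())  # True
```

The point of the model is that constraints travel with the program, so later stages can read them directly from the artifact.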
02
Compile to Molebytes
A Molebyte is the deployable unit for Material execution — analogous to a binary, but designed for physical substrates. Molebytes carry logic, metadata, and execution constraints so the runtime can schedule them safely.
Compilation is deterministic: the same .matr source always produces the same Molebyte. Artifacts are content-addressed and immutable. Once compiled, a Molebyte can be modeled in Material Twin, versioned, compared, and stored — all before any physical execution occurs.
Output: A versioned Molebyte artifact with a stable identity.
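Deterministic compilation plus content addressing means an artifact's identity is a pure function of its source. A sketch of that property, using SHA-256 as an illustrative digest (the platform's actual hash scheme is not specified here):

```python
import hashlib

def molebyte_id(matr_source: str) -> str:
    # Same source in, same identity out. SHA-256 is an assumed digest
    # for illustration; any stable cryptographic hash gives the property.
    return hashlib.sha256(matr_source.encode("utf-8")).hexdigest()[:16]

a = molebyte_id("program fold_check { ... }")
b = molebyte_id("program fold_check { ... }")
print(a == b)  # True: identical source yields an identical content address
```

Because the identity is derived from content, two Molebytes can be compared, versioned, and deduplicated before any physical execution.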
03
Model in Material Twin and validate preflight
Before any run request is submitted, artifacts pass through Material Twin simulation and preflight validation. Material Twin produces expected distributions, timing profiles, and resource estimates by modeling Matter inside Material constraints. Validation checks structural correctness and constraint satisfaction.
Material Twin simulation is not a demo — it is evidence. Results are stored as part of the artifact's lineage and used as baselines for comparing real execution outcomes. Preflight validation catches constraint violations, missing approvals, and profile mismatches before they reach the runtime.
Output: Material Twin traces, baseline outputs, and validation evidence.
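A preflight pass, as described, collects violations before the runtime ever sees the request. The following sketch assumes hypothetical record keys (`energy_mj`, `twin_evidence`, `energy_ceiling_mj`); the real validation surface is not documented here:

```python
def preflight(artifact: dict, profile: dict) -> list:
    # Illustrative preflight validation: return every violation found,
    # rather than failing on the first, so the author sees the full picture.
    problems = []
    if artifact.get("energy_mj", 0) > profile.get("energy_ceiling_mj", 0):
        problems.append("energy exceeds profile ceiling")
    if not artifact.get("twin_evidence"):
        problems.append("missing Material Twin baseline")
    return problems

issues = preflight(
    {"energy_mj": 12, "twin_evidence": None},
    {"energy_ceiling_mj": 10},
)
print(issues)  # both checks fail for this artifact
```

An empty list is the gate condition: only a clean preflight result lets the run request proceed to governance.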
04
Enforce governance policies
Every run request passes through the governance engine before reaching the runtime. Policies define what may run, under which profiles, with what approvals, and at what cost ceiling. Consequence tiers (T0–T4) determine verification depth based on the risk class of the workload.
Governance is not optional and cannot be bypassed. T0 (exploratory) workloads pass with minimal review. T4 (irreversible physical consequence) workloads require multi-party approval, full Material Twin evidence, and audit-locked provenance. Compute tiers (C0–C4) independently control resource allocation and scheduling priority.
Output: An approved run request — or a policy denial with explicit reasons.
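Tier-based verification depth can be sketched as a simple mapping. Only the T0 and T4 requirements come from the text above; the treatment of the middle tiers is an assumption for illustration:

```python
def required_review(consequence_tier: int) -> str:
    # Sketch of consequence-tier routing (T0-T4). T0 and T4 behavior
    # follows the description; intermediate tiers are assumed.
    if consequence_tier <= 0:
        return "minimal review"
    if consequence_tier >= 4:
        return "multi-party approval + full Material Twin evidence"
    return "standard review"

print(required_review(0))  # minimal review
print(required_review(4))  # multi-party approval + full Material Twin evidence
```

Note that compute tiers (C0–C4) would be a separate input to scheduling, not to this check, since they govern resources rather than risk.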
05
Execute via MFCore on Material Cloud
MFCore is the runtime scheduler. It maps approved run requests to Material Cloud execution pools — capacity-managed environments where Material computation actually occurs. Pools are profile-driven: each workload runs under explicit resource, timing, and isolation constraints.
Material execution is fundamentally parallel. Unlike silicon, where concurrency beyond the available cores is multiplexed through time-slicing, molecular substrates process billions of operations simultaneously. MFCore manages this by treating execution as a batch scheduling problem, not a request/response cycle.
Output: A run execution record with pool, profile, and accounting metadata.
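Treating execution as a batch scheduling problem means grouping approved requests rather than dispatching them one at a time. A toy grouping pass, with hypothetical profile and run identifiers (this is a sketch of the idea, not MFCore's implementation):

```python
from collections import defaultdict

def batch_by_profile(requests):
    # Group approved run requests by execution profile so each pool
    # can execute its batch in one pass (batch scheduling, not RPC).
    batches = defaultdict(list)
    for req in requests:
        batches[req["profile"]].append(req["run_id"])
    return dict(batches)

print(batch_by_profile([
    {"run_id": "r1", "profile": "iso-low"},
    {"run_id": "r2", "profile": "iso-low"},
    {"run_id": "r3", "profile": "iso-high"},
]))  # {'iso-low': ['r1', 'r2'], 'iso-high': ['r3']}
```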
06
Capture signals and observe outcomes
Material execution does not produce stdout logs. It produces signals: outcome distributions, confidence bounds, state deltas, energy accounting, and provenance metadata. The Signal Event Layer (SEL) captures and structures these outputs for review by humans and agents.
Signals are the native output format of every run. They are designed for decision-making, not debugging. Each signal carries enough context to answer: did the run produce the expected outcome, at the expected confidence, within the expected cost envelope?
Output: Structured signals with confidence, cost, and provenance — ready for comparison.
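The three questions a signal must answer — expected outcome, expected confidence, expected cost envelope — can be modeled as a small record. Field names are assumptions based on the outputs listed above, not the SEL schema:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    # Illustrative signal record; fields mirror the outputs the pipeline
    # describes (distribution, confidence, cost) but names are assumed.
    expected_mean: float
    observed_mean: float
    confidence: float
    cost_mj: float
    cost_ceiling_mj: float

    def within_envelope(self) -> bool:
        # Did the run stay inside its expected cost envelope?
        return self.cost_mj <= self.cost_ceiling_mj

sig = Signal(expected_mean=0.80, observed_mean=0.78,
             confidence=0.95, cost_mj=8.2, cost_ceiling_mj=10.0)
print(sig.within_envelope())  # True
```

Carrying the expectations alongside the observations is what makes the signal decision-ready: the comparison in the next stage needs no external lookup.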
07
Compare, decide, and route
Outcomes are compared against baselines — previous runs, expected distributions, and organizational thresholds. Comparison produces a diff and a judgment: accept the result, rerun with adjusted parameters, escalate to a human reviewer, or stop the workload entirely.
This is where the judgment loop closes. BERNIE (the AI layer) can propose next actions, but governance policies determine which actions are permitted. High-consequence decisions always route to human review. The system is designed so that automation operates within boundaries, not around them.
Output: A decision outcome with explicit routing or escalation.
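The routing choices named above (accept, rerun, escalate) can be sketched as a function of the diff and the consequence tier. The thresholds and the exact precedence are illustrative assumptions; the text only guarantees that high-consequence decisions always reach a human:

```python
def route(diff: float, tolerance: float, consequence_tier: int) -> str:
    # Sketch of the judgment loop's routing. High consequence always
    # escalates to human review, regardless of how good the diff looks.
    if consequence_tier >= 4:
        return "escalate"
    if abs(diff) <= tolerance:
        return "accept"
    return "rerun"

print(route(diff=0.01, tolerance=0.05, consequence_tier=1))  # accept
print(route(diff=0.01, tolerance=0.05, consequence_tier=4))  # escalate
```

Checking the tier before the diff encodes the design principle stated above: automation operates within governance boundaries, not around them.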
08
Store provenance and close the loop
Every artifact, run, signal, policy decision, and judgment outcome is stored as an immutable provenance record in the Audit Ledger. Records are linked: you can trace any signal back to the exact Molebyte version, execution profile, governance context, and approval chain that produced it.
Provenance is not a feature — it is a structural requirement. Without attributable history, Material computation cannot be reproduced, audited, or trusted at scale. The Audit Ledger ensures that every decision made by the platform is explainable after the fact.
Output: Full provenance chain — from .matr source to execution outcome — stored and auditable.
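Tracing a signal back to its source is a walk over linked records. A minimal sketch with a hypothetical record shape (a `parent` pointer per record; the Audit Ledger's real schema is not shown here):

```python
def trace(ledger: dict, record_id: str) -> list:
    # Follow parent links from a signal back to the .matr source:
    # the "unbroken chain from intent to outcome".
    chain = []
    current = record_id
    while current is not None:
        rec = ledger[current]
        chain.append(rec["kind"])
        current = rec.get("parent")
    return chain

ledger = {
    "sig-1": {"kind": "signal", "parent": "run-1"},
    "run-1": {"kind": "run", "parent": "mb-1"},
    "mb-1":  {"kind": "molebyte", "parent": "src-1"},
    "src-1": {"kind": "matr_source", "parent": None},
}
print(trace(ledger, "sig-1"))  # ['signal', 'run', 'molebyte', 'matr_source']
```

Immutability matters here: because records never change after being written, the chain a reviewer walks today is the same chain an auditor walks later.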