
Scientific Notes

These notes form an operational bridge between engineering teams and the formal research corpus. They summarize the model boundaries, evidence classes, and assumptions that inform implementation decisions. Primary truth claims are deferred to the Research section.

How to use this section

For operations teams

Translate scientific assumptions into actionable policy and execution guardrails. When configuring consequence tiers, drift tolerances, or simulation parameters, these notes identify which assumptions underpin those settings and where the boundaries of current evidence lie.

For diligence reviewers

Trace platform claims back to simulation IDs, methodologies, and open questions. Every claim referenced in documentation or investor materials can be followed to its evidence source and current confidence assessment.

Model boundaries

Material Computing operates under explicit model boundaries. These boundaries define where the platform's predictions are reliable and where uncertainty increases:

Substrate models

Simulation uses mathematical models of physical substrates. Model fidelity varies by substrate class. Molecular substrates have the most mature models; novel substrate classes have wider prediction gaps. Model versions are tracked and simulation evidence is linked to the model version used.
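Pinning simulation evidence to the substrate model version that produced it can be sketched as a small record type. This is an illustrative sketch, not the platform's schema: the type names, field names, and version strings below are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimulationEvidence:
    """Simulation result tagged with the substrate model version used."""
    simulation_id: str     # e.g. "SIM-7" (illustrative identifier)
    substrate_class: str   # e.g. "molecular" (illustrative label)
    model_version: str     # version of the substrate model that produced it

def directly_comparable(evidence: SimulationEvidence, current_version: str) -> bool:
    """Two pieces of simulation evidence are directly comparable only
    when generated under the same substrate model version."""
    return evidence.model_version == current_version
```

A version mismatch does not invalidate the evidence; it signals that prediction gaps between model versions must be accounted for before comparing results.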

Distribution assumptions

Output distributions are assumed to be approximately normal for well-characterized substrates. This assumption weakens at extreme input ranges, under degraded substrate conditions, and for novel gate types without sufficient execution history.
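One lightweight way for operations teams to watch this assumption is a moment-based check on recent output samples. The sketch below uses skewness and excess kurtosis as rough diagnostics; the bounds are illustrative assumptions, not platform policy, and this is a monitoring heuristic rather than a formal normality test.

```python
import math

def normality_flags(samples, skew_bound=1.0, kurt_bound=1.0):
    """Rough moment-based check of the approximate-normality assumption.

    Returns (skewness, excess_kurtosis, ok). ok is False when either
    moment falls outside the illustrative bounds, signalling that the
    distribution assumption should be re-examined for this gate.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    sd = math.sqrt(var)
    skew = sum(((x - mean) / sd) ** 3 for x in samples) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in samples) / n - 3.0
    ok = abs(skew) <= skew_bound and abs(kurt) <= kurt_bound
    return skew, kurt, ok
```

Flagged samples from extreme input ranges or degraded substrate conditions are exactly the cases where the section above says the assumption weakens.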

Drift characteristics

Short-term drift (within a single execution window) is well-characterized for primary substrate classes. Long-term drift (across weeks or months of repeated execution) is an active research area with preliminary data but not yet production-validated.

Scalability limits

Current evidence supports single-pool, sequential execution at demonstrated scale. Multi-pool coordination and parallel execution paths are architecturally supported but have limited physical execution data.

Evidence classes

The platform uses a structured evidence classification to distinguish levels of support for claims:

  • E1 (Theoretical) -- Supported by model analysis and first-principles reasoning. No execution data.
  • E2 (Simulated) -- Supported by simulation evidence with documented methodology and substrate model version.
  • E3 (Physically validated) -- Supported by physical execution data with statistical significance and reproducibility.
  • E4 (Production proven) -- Supported by sustained production operation across multiple execution cycles and conditions.
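Because the classes form an ordered scale, tooling can gate policy decisions on a minimum evidence class. The enum below is an illustrative encoding; the member names and the idea of comparing classes numerically are assumptions, not a documented platform API.

```python
from enum import IntEnum

class EvidenceClass(IntEnum):
    """Evidence classes ordered by increasing strength of support."""
    E1_THEORETICAL = 1           # model analysis only, no execution data
    E2_SIMULATED = 2             # simulation evidence, documented methodology
    E3_PHYSICALLY_VALIDATED = 3  # reproducible physical execution data
    E4_PRODUCTION_PROVEN = 4     # sustained production operation

def meets_threshold(claim_class: EvidenceClass, required: EvidenceClass) -> bool:
    """A claim satisfies a policy threshold when its evidence class
    is at least as strong as the required class."""
    return claim_class >= required
```

For example, a guardrail that requires at least E3 evidence would reject a claim backed only by simulation.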

Assumptions that affect operations

The following assumptions are embedded in current platform behavior. Operations teams should be aware of these and monitor for conditions that violate them:

  • Substrate stationarity -- Pool conditions are assumed to change slowly relative to execution time. Fast condition changes may invalidate confidence bounds.
  • Gate independence -- In multi-gate programs, gate outputs are assumed independent unless explicitly coupled through state. Latent coupling through substrate conditions is a known risk.
  • Cost linearity -- Execution cost is assumed to scale linearly with program complexity at current scale. This may not hold at C3-C4 compute tiers.
  • Model transferability -- Simulation model accuracy for one substrate class does not guarantee accuracy for another. Each class requires independent validation.
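The stationarity assumption in the list above lends itself to a simple operational check: flag any pool-condition change that is fast relative to the execution window. This is a minimal sketch; the reading format, window semantics, and 5% tolerance are illustrative assumptions, not platform defaults.

```python
def stationarity_violated(readings, window, max_rel_change=0.05):
    """Flag fast pool-condition changes relative to an execution window.

    readings: time-sorted sequence of (timestamp, value) pairs.
    window: execution-window length, in the same time units as timestamps.
    Returns True when the relative change between any two readings inside
    one window exceeds the bound -- i.e. the substrate-stationarity
    assumption (and hence the confidence bounds) may no longer hold.
    """
    for i, (t0, v0) in enumerate(readings):
        for t1, v1 in readings[i + 1:]:
            if t1 - t0 > window:
                break  # readings are time-sorted; rest are out of window
            if v0 != 0 and abs(v1 - v0) / abs(v0) > max_rel_change:
                return True
    return False
```

A True result is a signal to pause and re-derive confidence bounds, not an automatic abort.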

Traceability

Every claim in platform documentation can be traced through the following chain: claim reference, evidence class, simulation ID (if applicable), methodology summary, confidence assessment, and open questions. The Research corpus maintains the canonical version of each evidence record.
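The traceability chain above can be sketched as a single record type per claim. The field names below mirror the chain described in this section but are illustrative; the canonical schema lives in the Research corpus.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class EvidenceRecord:
    """One link in the claim-to-evidence traceability chain."""
    claim_ref: str                  # claim reference
    evidence_class: str             # "E1".."E4"
    simulation_id: Optional[str]    # None for purely theoretical (E1) claims
    methodology_summary: str
    confidence_assessment: str
    open_questions: List[str] = field(default_factory=list)

def trace_line(record: EvidenceRecord) -> str:
    """Render a one-line trace for documentation or diligence review."""
    sim = record.simulation_id or "n/a"
    return f"{record.claim_ref} [{record.evidence_class}] sim={sim}: {record.confidence_assessment}"
```

A diligence reviewer following a documented claim would resolve its `claim_ref` to a record like this, then consult the Research corpus for the canonical version.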