Claims are bounded by the evidence shown on specific pages, not by the site in aggregate.
SNPTX presents evidence for a set of claims about a research platform: execution structure, benchmark outputs, validation logic, and scoped autonomous experimentation. This page specifies where each claim is supported, where coverage is still partial, and where interpretation must stop.
How claim classes map to inspection surfaces
The diagram distinguishes demonstrated capability, partial coverage, planned work, and explicit non-claims, then routes each class to the page where a reader should inspect supporting material.
Evidence remains with the underlying material
The underlying evidence still lives in the benchmark tables, architecture diagrams, and validation pages. This page tells readers where to inspect that material and how narrowly each claim should be read.
Non-claims are part of the record
Negative statements matter here because the academic site should distinguish supported technical claims from clinical, commercial, or deployment interpretations that are not established.
Capability matrix
The matrix pairs each capability with a status, a primary inspection surface, and a limit on interpretation. It should be read row by row, not as a single maturity label for the platform.
Capability, inspection surface, and limit
The key distinction is between currently demonstrated capability and items that remain partial, planned, or explicitly excluded from the present evidence record.
| Capability | Status | Inspect Here | Current Boundary |
|---|---|---|---|
| Execution orchestration and run records | Operational | Architecture and Methodology | The Snakemake-centered execution spine, persisted artifacts, and tracked runs are part of the current research build (see the sketch after this table). |
| Benchmark reporting across supported modality families | Operational | Results and Validation | Benchmark outputs are evidence for the reported evaluation surfaces, not for universal performance across settings or use cases. |
| Autonomous experiment selection loop | Operational | Autonomous and Methodology | Automated next-run selection is demonstrated for research experimentation under declared stopping and control logic. |
| DVC-backed dataset lineage in the primary execution path | Partial | Architecture and Methodology | DVC infrastructure is present, but the primary operational path remains the Snakemake-centered execution spine. |
| Feedback as a framework-wide contract | Partial | Autonomous and Architecture | Feedback is implemented for autonomous experimentation, not yet generalized across every module or interface. |
| Deterministic replay across all environments | Design constraint | Methodology and Limitations | Controls are configured to improve repeatability, but universal byte-identical replay across environments is not claimed here. |
| Cross-institution reproducibility pilot | Planned | Pilots and Validation | External reproducibility work is part of the proposed program, not part of the currently demonstrated evidence on this site. |
| Clinical utility or medical-device performance | Out of scope | Limitations and Validation | The academic site reports research infrastructure and benchmark evidence, not diagnostic, therapeutic, or clinical deployment claims. |
| Commercial deployment readiness | Not claimed | Positioning and Pilots | The site describes a research framework and pilot-facing surface rather than a production rollout or market-readiness claim. |
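As a concrete illustration of the first row, here is a minimal sketch of the kind of run record an execution spine might persist alongside its artifacts. Everything in it is hypothetical: the `persist_run_record` function, the manifest fields, and the `runs/` layout are assumptions for illustration, not the platform's actual schema or the Snakemake API.

```python
import hashlib
import json
import time
from pathlib import Path

def persist_run_record(config: dict, artifact_paths: list[str],
                       out_dir: str = "runs") -> Path:
    """Write a minimal run manifest: run id, timestamp, config, artifacts.

    Hypothetical sketch of a tracked-run record; the field names are
    illustrative, not the platform's actual schema.
    """
    # Derive a stable run id from the configuration so repeated runs of
    # the same config land in the same directory.
    config_bytes = json.dumps(config, sort_keys=True).encode("utf-8")
    run_id = hashlib.sha256(config_bytes).hexdigest()[:12]
    record = {
        "run_id": run_id,
        "started_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "config": config,
        "artifacts": artifact_paths,
    }
    run_dir = Path(out_dir) / run_id
    run_dir.mkdir(parents=True, exist_ok=True)
    manifest_path = run_dir / "manifest.json"
    manifest_path.write_text(json.dumps(record, indent=2))
    return manifest_path

print(persist_run_record({"dataset": "demo", "seed": 1234}, ["metrics.json"]))
```

The point the row makes is only that runs are tracked and artifacts persisted; the actual record format and execution semantics should be inspected on the Architecture and Methodology pages.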
Where to inspect support
The pages below are the main places to inspect supporting material elsewhere on the academic site.
Execution boundaries and interfaces
Use Architecture to inspect the execution spine, extension boundary, workload surfaces, and declared deployment-facing interfaces.
Benchmark outputs and comparative surfaces
Use Results to inspect reported outputs, modality comparisons, and benchmark summaries.
Controls, verification, and evaluation logic
Use Validation to inspect evaluation controls, verification logic, and the limits placed on reported performance claims.
Reproducibility posture and run semantics
Use Methodology to inspect data flow, execution semantics, and the practical limits of repeatability claims; a minimal sketch of what such controls do and do not buy follows.
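The example below fixes common seed sources in Python. The function name and the choice of sources are assumptions for illustration; the point, matching the matrix row above, is that seeding narrows run-to-run variation within one environment but does not by itself deliver byte-identical replay across environments.

```python
import os
import random

import numpy as np

def apply_repeatability_controls(seed: int) -> None:
    """Fix common sources of randomness for a single environment.

    Illustrative sketch: seeding improves repeatability on one machine,
    but library versions, hardware, and thread scheduling still vary
    across environments, so byte-identical replay is not guaranteed.
    """
    # Note: PYTHONHASHSEED set here only affects interpreters launched
    # afterwards (e.g. subprocesses), not the current process.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)

apply_repeatability_controls(1234)
```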
Experiment-selection scope
Use Autonomous to inspect the current experiment-selection loop and the boundaries placed on autonomous operation; the sketch below shows the general shape of such a loop.
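For orientation only, here is a minimal sketch of greedy next-run selection under two declared stopping controls: a hard run budget and a minimum-improvement patience rule. The candidate grid, the `score_run` stub, and both thresholds are hypothetical; the actual loop and its control logic are what the Autonomous page documents.

```python
import random

def score_run(config: dict) -> float:
    # Stand-in for dispatching one tracked experiment and reading back
    # its metric. A deterministic stub keeps the sketch self-contained.
    rng = random.Random(repr(sorted(config.items())))
    return rng.random()

def run_selection_loop(candidates: list[dict], budget: int,
                       patience: int = 3, min_improvement: float = 0.01):
    """Select and execute next runs until a declared stopping rule fires."""
    best_score = float("-inf")
    stale = 0
    history: list[tuple[dict, float]] = []
    for config in candidates[:budget]:        # control 1: hard run budget
        score = score_run(config)
        history.append((config, score))
        if score > best_score + min_improvement:
            best_score, stale = score, 0
        else:
            stale += 1
        if stale >= patience:                 # control 2: no recent improvement
            break
    return history, best_score

grid = [{"lr": lr, "depth": d} for lr in (1e-3, 1e-2) for d in (2, 4, 8)]
runs, best = run_selection_loop(grid, budget=5)
print(f"{len(runs)} runs executed, best score {best:.3f}")
```

The design point is that both stopping conditions are declared up front, which is what bounds autonomous operation in the sense used by the matrix row above.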
Non-claims, pilot surfaces, and planned work
Use Limitations, Pilots, and Positioning to inspect excluded claims, pre-deployment scope, and work that remains proposed rather than demonstrated.
The current evidence record supports claims about research infrastructure, benchmark reporting, and scoped experimentation surfaces. It does not establish clinical efficacy, clinical safety, universal reproducibility across environments, or production deployment readiness.