Evidence and claim boundary

This page states which platform claims are supported by material elsewhere on the site, which remain partial, and which are outside the scope of the current academic record.

Evidence map
Bounded interpretation for a research-facing platform

Claims are bounded by the evidence shown on specific pages, not by the site in aggregate.

SNPTX presents evidence about a research platform: execution structure, benchmark outputs, validation logic, and scoped autonomous experimentation. This page specifies where those claims are supported, where coverage is still partial, and where interpretation must stop.

Claim routing

How claim classes map to inspection surfaces

The diagram distinguishes demonstrated capability, partial coverage, planned work, and explicit non-claims, then routes each class to the page where a reader should inspect supporting material.

Legend: Demonstrated · Partial · Planned · Boundary or non-claim · Constraint

Claim class | Primary inspection surface | Interpretation limit
Demonstrated capability (implemented in the current academic build) | Architecture, Results, Validation: run records, benchmark outputs, evaluation controls | Evidence does not travel beyond its surface; reported support is bounded to the inspected page.
Partial (coverage present, but not yet a platform-wide contract) | Architecture, Methodology: interface notes, lineage, execution semantics | Partial status blocks stronger inference; coverage is limited to the described component.
Planned work (future pilot or reproducibility work) | Pilots, Positioning: declared scope, proposed studies, future interfaces | Planned work does not upgrade current claims; roadmap items are not current evidence.
Out of scope or not claimed | Limitations, Validation, Positioning | Clinical and production claims are excluded.
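The routing above is essentially a lookup table from claim class to inspection surface. A minimal sketch of that structure, using illustrative identifiers that are not part of the SNPTX codebase:

```python
# Claim-routing table as data: each claim class maps to its primary
# inspection surface and an interpretation limit. All names here are
# illustrative assumptions, not the platform's actual API.

CLAIM_ROUTING = {
    "demonstrated": {
        "surfaces": ["Architecture", "Results", "Validation"],
        "limit": "Support is bounded to the inspected page.",
    },
    "partial": {
        "surfaces": ["Architecture", "Methodology"],
        "limit": "Coverage is limited to the described component.",
    },
    "planned": {
        "surfaces": ["Pilots", "Positioning"],
        "limit": "Roadmap items are not current evidence.",
    },
    "out_of_scope": {
        "surfaces": ["Limitations", "Validation", "Positioning"],
        "limit": "Clinical and production claims are excluded.",
    },
}

def inspection_surfaces(claim_class: str) -> list:
    """Return the pages where supporting material for a claim class lives."""
    return CLAIM_ROUTING[claim_class]["surfaces"]
```

The point of the structure is that every claim class carries its own limit; there is no aggregate entry for the platform as a whole.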
Interpretation

Evidence remains with the underlying material

The underlying evidence still lives in the benchmark tables, architecture diagrams, and validation pages. This page tells readers where to inspect that material and how narrowly each claim should be read.

Scope control

Non-claims are part of the record

Negative statements matter here because the academic site should distinguish supported technical claims from clinical, commercial, or deployment interpretations that are not established.

Capability matrix

The matrix pairs each capability with a status, a primary inspection surface, and a limit on interpretation. It should be read row by row, not as a single maturity label for the platform.

Current evidence map

Capability, inspection surface, and limit

The key distinction is between currently demonstrated capability and items that remain partial, planned, or explicitly excluded from the present evidence record.

Capability | Status | Inspect here | Current boundary
Execution orchestration and run records | Operational | Architecture and Methodology | The Snakemake-centered execution spine, persisted artifacts, and tracked runs are part of the current research build.
Benchmark reporting across supported modality families | Operational | Results and Validation | Benchmark outputs are evidence for the reported evaluation surfaces, not for universal performance across settings or use cases.
Autonomous experiment selection loop | Operational | Autonomous and Methodology | Automated next-run selection is demonstrated for research experimentation under declared stopping and control logic.
DVC-backed dataset lineage in the primary execution path | Partial | Architecture and Methodology | DVC infrastructure is present, but the primary operational path remains the Snakemake-centered execution spine.
Feedback as a framework-wide contract | Partial | Autonomous and Architecture | Feedback is implemented for autonomous experimentation, not yet generalized across every module or interface.
Deterministic replay across all environments | Design constraint | Methodology and Limitations | Controls are configured to improve repeatability, but universal byte-identical replay across environments is not claimed here.
Cross-institution reproducibility pilot | Planned | Pilots and Validation | External reproducibility work is part of the proposed program, not part of the currently demonstrated evidence on this site.
Clinical utility or medical-device performance | Out of scope | Limitations and Validation | The academic site reports research infrastructure and benchmark evidence, not diagnostic, therapeutic, or clinical deployment claims.
Commercial deployment readiness | Not claimed | Positioning and Pilots | The site describes a research framework and pilot-facing surface rather than a production rollout or market-readiness claim.
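The row-by-row reading rule can itself be made concrete: each capability is one record with its own status and boundary, and a lookup returns exactly one row, never a platform-wide label. A minimal sketch, with abbreviated entries and hypothetical identifiers:

```python
# Capability matrix as row-by-row records. Entries abbreviate the table
# above; class and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityRow:
    capability: str
    status: str        # Operational | Partial | Design constraint | Planned | Out of scope | Not claimed
    inspect_here: str
    boundary: str

MATRIX = [
    CapabilityRow("Execution orchestration", "Operational",
                  "Architecture and Methodology",
                  "Snakemake-centered spine, persisted artifacts, tracked runs."),
    CapabilityRow("DVC-backed dataset lineage", "Partial",
                  "Architecture and Methodology",
                  "DVC present; primary path remains the Snakemake spine."),
    CapabilityRow("Clinical utility", "Out of scope",
                  "Limitations and Validation",
                  "No diagnostic, therapeutic, or deployment claims."),
]

def status_of(capability: str) -> str:
    """Row-by-row lookup: one capability, one status, never an aggregate."""
    for row in MATRIX:
        if row.capability == capability:
            return row.status
    raise KeyError(f"No evidence row for {capability!r}")
```

A capability absent from the matrix raises rather than defaulting, mirroring the page's rule that interpretation stops where the evidence record stops.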

Where to inspect support

These are the main places where readers should inspect supporting material elsewhere on the academic site.

Architecture

Execution boundaries and interfaces

Use Architecture to inspect the execution spine, extension boundary, workload surfaces, and declared deployment-facing interfaces.

Results

Benchmark outputs and comparative surfaces

Use Results to inspect reported outputs, modality comparisons, and benchmark summaries.

Validation

Controls, verification, and evaluation logic

Use Validation to inspect evaluation controls, verification logic, and the limits placed on reported performance claims.

Methodology

Reproducibility posture and run semantics

Use Methodology to inspect data flow, execution semantics, and the practical limits of repeatability claims.
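The kind of repeatability control Methodology describes (repeatable, but not byte-identical across environments) can be sketched as seeding plus an environment fingerprint recorded with each run, so that replay differences are at least explainable. Function and field names below are assumptions, not the platform's actual API:

```python
# Minimal repeatability sketch: pin the RNG seed and capture enough
# environment context to explain cross-environment replay differences.
# Illustrative only; not the SNPTX run-record schema.

import hashlib
import json
import platform
import random

def run_fingerprint(config: dict, seed: int) -> dict:
    """Seed the RNG and record a hash of the config plus environment facts."""
    random.seed(seed)
    return {
        "seed": seed,
        "config_hash": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest(),
        "python": platform.python_version(),
        "os": platform.system(),
    }
```

The config hash is stable across machines, while the environment fields are expected to vary, which is exactly the distinction between "controls that improve repeatability" and a universal byte-identical replay claim.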

Autonomous

Experiment-selection scope

Use Autonomous to inspect the current experiment-selection loop and the boundaries placed on autonomous operation.
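A bounded next-run selection loop of the kind described above can be sketched in a few lines: pick the best-scoring candidate each round, under both a hard run budget and a declared stopping criterion. The scoring function, budget, and threshold are illustrative assumptions, not the platform's actual selection logic:

```python
# Sketch of scoped autonomous experiment selection: greedy choice under
# a run budget and an explicit stopping control. Illustrative only.

def select_runs(candidates, score, budget=5, min_score=0.0):
    """Pick the best-scoring candidate each round; stop on budget or low value."""
    chosen = []
    remaining = list(candidates)
    while remaining and len(chosen) < budget:   # hard stop: run budget
        nxt = max(remaining, key=score)
        if score(nxt) < min_score:              # control: expected value too low
            break
        chosen.append(nxt)
        remaining.remove(nxt)
    return chosen
```

Both exit conditions are declared up front, which is the property the page attributes to the demonstrated loop: autonomy operates inside explicit stopping and control logic rather than open-endedly.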

Limits and future scope

Non-claims, pilot surfaces, and planned work

Use Limitations, Pilots, and Positioning to inspect excluded claims, pre-deployment scope, and work that remains proposed rather than demonstrated.

Interpretation boundary

The current evidence record supports claims about research infrastructure, benchmark reporting, and scoped experimentation surfaces. It does not establish clinical efficacy, clinical safety, universal reproducibility across environments, or production deployment readiness.