Four reference pilot patterns, plus a path for configurations outside the reference set.
The patterns below are starting points demonstrated on this codebase, not a fixed menu. Each pilot is scoped to the lab's research question, data modality, and compute envelope; the engagement phasing and compute floor that follow apply to any configuration. Three patterns are tied to demonstrated runs; one is marked as a roadmap protocol pending cross-site execution.
Drug discovery pipeline
Demonstrated. Graph-convolutional bioactivity model with knowledge-graph integration, applied to lab-specific compound libraries.
Multi-modal clinical research
Demonstrated. Attention-based fusion across two or more modalities (clinical, omics, imaging, text) sharing patient- or sample-level identifiers.
Autonomous experimentation
Demonstrated. Experiment-selection loop applied to lab datasets to explore model and configuration space under stopping rules.
Cross-institutional reproducibility
Roadmap. Proposed protocol for reproducing a reference pipeline across two compute environments, with hash-level artifact comparison.
Custom configuration
Most common path. Most lab engagements do not map cleanly onto a single reference pattern. A custom configuration scopes the pilot to the lab's research question, available modalities, and compute envelope, drawing on the same architecture, engagement phasing, and compute floor as the reference patterns above.
Operating envelope
Shared engagement phasing and the minimum compute floor assumed by every pilot configuration, reference or custom.
Phased delivery
Common phasing across pilot types; durations are typical ranges for academic engagements.
| Phase | Duration | Activities |
|---|---|---|
| Scoping | 1–2 weeks | Define research question, select configuration, confirm compute compatibility. |
| Deployment | 2–4 weeks | Configure pipeline for lab data, write modality adapters, validate end-to-end run. |
| Evaluation | 2–4 weeks | Run campaigns, compare against baselines, generate validation report. |
| Continuation | Ongoing | Extend to additional datasets or modalities; archive run manifests for reuse. |
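The continuation phase archives run manifests so later pilots can reproduce earlier runs. A minimal sketch of such a manifest writer is below; the field set and function name are illustrative assumptions, not the pilot's actual schema.

```python
import json
import platform
import sys
from datetime import datetime, timezone

# Hypothetical run-manifest writer for the continuation phase.
# The fields recorded here (timestamp, interpreter, platform, config,
# dataset identifiers) are an illustrative minimum, not a fixed schema.
def write_manifest(path: str, config: dict, dataset_ids: list[str]) -> dict:
    """Write a JSON run manifest to `path` and return it as a dict."""
    manifest = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "config": config,
        "dataset_ids": dataset_ids,
    }
    with open(path, "w") as fh:
        # sort_keys keeps the file byte-stable for identical inputs,
        # which matters if manifests are later compared by hash.
        json.dump(manifest, fh, indent=2, sort_keys=True)
    return manifest
```

Sorting keys and fixing the indentation keeps the serialized manifest deterministic, which is what makes hash-level comparison of archived runs meaningful.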
Minimum requirements
Baseline environment definition assumed by every pilot configuration; recommended values reflect demonstrated run conditions.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM | 16 GB | 32 GB |
| GPU | NVIDIA, 8 GB VRAM | A10G (24 GB) or better |
| Storage | 50 GB | 200 GB |
| Python | 3.11+ | 3.11.14 |
| OS | Ubuntu 22.04+ | Ubuntu 22.04 LTS |
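A pre-flight check against the table above can be sketched in a few lines; the thresholds come from the minimum column, while the function name and return shape are assumptions for illustration. RAM and GPU checks are omitted here because they require platform-specific probes.

```python
import os
import shutil
import sys

# Minimum-column values from the requirements table.
MIN_CPU_CORES = 4
MIN_PYTHON = (3, 11)
MIN_FREE_DISK_GB = 50

def check_environment(root: str = "/") -> list[str]:
    """Return human-readable failures; an empty list means the floor is met.

    RAM and GPU checks are intentionally omitted: they need
    platform-specific tooling (e.g. /proc/meminfo, nvidia-smi).
    """
    failures = []
    if (os.cpu_count() or 0) < MIN_CPU_CORES:
        failures.append(f"CPU: need >= {MIN_CPU_CORES} cores")
    if sys.version_info[:2] < MIN_PYTHON:
        failures.append(f"Python: need >= {'.'.join(map(str, MIN_PYTHON))}")
    free_gb = shutil.disk_usage(root).free / 1e9
    if free_gb < MIN_FREE_DISK_GB:
        failures.append(f"Storage: need >= {MIN_FREE_DISK_GB} GB free")
    return failures
```

Running this during scoping surfaces compute-floor gaps before deployment work begins.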
Scope and non-claims
Boundaries that apply uniformly to every pilot configuration. These are stated to keep the engagement model research-facing and pre-deployment.
Not a medical device
SNPTX produces analytical artifacts intended for expert interpretation. It is not cleared for diagnostic, prognostic, or clinical decision use.
Trained on lab data
Pilots train models from declared inputs supplied by the lab. No pre-trained models are distributed for clinical deployment.
In-place processing
Data remains on lab infrastructure. SNPTX is deployed into the lab's environment and does not host or transfer raw data.
The three demonstrated configurations (drug discovery, multi-modal clinical research, autonomous experimentation) are tied to runs reproducible from this codebase under the stated compute floor. Cross-institutional reproducibility is explicitly a protocol; cross-site results will be reported separately when the trial is executed. A custom configuration inherits platform-level evidence and produces pilot-specific evidence during the engagement.
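The hash-level artifact comparison in the cross-institutional protocol can be sketched as follows. This is an illustrative implementation under assumptions: artifacts live in a per-run directory, SHA-256 is the digest, and the function names are hypothetical.

```python
import hashlib
from pathlib import Path

def hash_artifacts(run_dir: str) -> dict[str, str]:
    """Map each artifact's relative path to its SHA-256 digest."""
    digests = {}
    for path in sorted(Path(run_dir).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(run_dir))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

def compare_sites(site_a: dict[str, str], site_b: dict[str, str]) -> dict:
    """Report artifacts that differ between sites or exist at only one."""
    keys_a, keys_b = set(site_a), set(site_b)
    return {
        "only_site_a": sorted(keys_a - keys_b),
        "only_site_b": sorted(keys_b - keys_a),
        "mismatched": sorted(
            k for k in keys_a & keys_b if site_a[k] != site_b[k]
        ),
    }
```

An empty report across all three fields is the pass condition: both environments produced byte-identical artifacts from the same pipeline.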