Bounded intelligence for cumulative experimentation, explicit roadmap, and disciplined scope.
The intelligence layer is not presented as a general autonomous research agent. In the current SNPTX framework build it refers to concrete support for persistent experiment memory, cross-run ranking, Bayesian feedback updates, dataset-aware defaults, and reusable hypothesis templates. A separate roadmap identifies additional modules grounded in optimization, causal inference, continual learning, and information theory, but those modules remain literature-backed specifications rather than validated runtime behavior.
Five operational modules
The current build exposes experiment cataloging, cross-run analysis, feedback updating, adaptive defaults, and hypothesis templates as the active intelligence surface.
Ten roadmap modules
The roadmap extends the layer toward experiment design, optimization, causal analysis, continual learning, and information-theoretic ranking, with citations retained for each module.
No claim of autonomous science
The page does not imply autonomous discovery, clinical decision-making, or universally validated closed-loop experimentation beyond the bounded modules listed here.
Operational modules, experiment memory, and the declared roadmap boundary
The current build centers on cumulative experiment state and bounded decision support. Roadmap modules are shown as future analytical attachments rather than active control claims.
What the layer actually does today
In the present build, intelligence functions as a bounded experimental memory and decision-support surface. It accumulates prior runs, updates confidence, ranks candidate configurations, proposes reusable follow-up structures, and hands those outputs back to the broader execution framework.
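A minimal sketch of this kind of persistent, queryable experiment memory, using Python's stdlib sqlite3 as a self-contained stand-in for the framework's DuckDB-backed catalog; the table name, column names, and values are hypothetical, not the catalog's actual 16-column schema:

```python
import sqlite3

# In-memory stand-in for the DuckDB-backed experiment catalog.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE experiments (
        run_id  TEXT PRIMARY KEY,
        dataset TEXT,
        config  TEXT,
        metric  REAL
    )
""")
con.execute("CREATE INDEX idx_dataset ON experiments(dataset)")

runs = [
    ("r1", "ds_a", "cfg_small", 0.81),
    ("r2", "ds_a", "cfg_large", 0.86),
    ("r3", "ds_b", "cfg_small", 0.74),
]
con.executemany("INSERT INTO experiments VALUES (?, ?, ?, ?)", runs)

# Cross-run ranking: best configuration per dataset from accumulated state.
best = con.execute("""
    SELECT dataset, config, MAX(metric)
    FROM experiments
    GROUP BY dataset
    ORDER BY dataset
""").fetchall()
print(best)
```

Because runs are inserted as they complete, later queries see all prior state, which is what distinguishes cumulative experimentation from one-off benchmarking.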
Operational modules in the current build
The table below defines the active intelligence surface. Each row describes a module currently represented in the framework rather than a purely conceptual addition.
Operational modules (B1-B5)
The present build supports cumulative experiment memory, ranking, feedback updates, and templated follow-up logic without claiming a fully autonomous experimentation engine.
| Module | Function | Theory |
|---|---|---|
| Experiment Catalog | DuckDB-backed persistent store, 16 columns, indexed | Cumulative experimentation |
| Meta-Analysis Engine | Cross-experiment pattern discovery, best-config ranking | Bayesian rank aggregation |
| Feedback Loop v2 | Bayesian confidence updating, Thompson sampling | Beta-Binomial posterior (Agrawal & Goyal, 2012) |
| Adaptive Defaults | Dataset-aware starting configuration | Sigmoid confidence scoring |
| Hypothesis Templates | 7 declarative templates with trigger conditions | Declarative hypothesis systems |
These modules move the framework beyond one-off benchmarking by allowing runs to accumulate state across experiments. The claim is modest but important: SNPTX can retain and reuse experimental information. It does not claim to replace researcher judgment or wet-lab confirmation.
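The Beta-Binomial update behind Feedback Loop v2 can be sketched as follows. This is a minimal illustration of Thompson sampling with conjugate updates, not the module's actual code; the configuration names and counts are hypothetical:

```python
import random

random.seed(0)

# Beta-Binomial posterior per candidate configuration: alpha counts runs that
# beat a baseline, beta counts runs that did not (counts are illustrative).
posteriors = {"cfg_a": [3, 7], "cfg_b": [8, 2]}

def thompson_pick(posteriors):
    """Draw one sample from each Beta posterior and pick the arg-max."""
    draws = {cfg: random.betavariate(a, b) for cfg, (a, b) in posteriors.items()}
    return max(draws, key=draws.get)

def update(posteriors, cfg, success):
    """Conjugate Beta-Binomial update after observing one run outcome."""
    posteriors[cfg][0 if success else 1] += 1

choice = thompson_pick(posteriors)
update(posteriors, choice, success=True)
print(choice, posteriors[choice])
```

Because selection samples from the posterior rather than taking its mean, under-explored configurations still get chosen occasionally, which is the exploration guarantee analyzed by Agrawal & Goyal (2012).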
Specified roadmap and cited research basis
The roadmap remains useful because it states intended analytical directions precisely, but it must be read as literature-backed specification until the modules are integrated and validated.
Choosing what to run next
Meta-features, surrogates, experiment design, and multi-objective search would extend the current ranking surface toward more explicit experiment selection under competing objectives.
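As an illustration of the direction the surrogate module points in, expected improvement under a Gaussian predictive mean and variance (the quantities a GP surrogate would supply, per Snoek et al., 2012) has a closed form. This is a roadmap sketch, not current framework behavior, and all names and numbers are hypothetical:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI for maximization under a Gaussian predictive distribution N(mu, sigma^2)."""
    if sigma == 0.0:
        return 0.0
    z = (mu - best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal density
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # standard normal CDF
    return (mu - best - xi) * cdf + sigma * pdf

# Rank candidate configurations by EI against the best observed score.
candidates = {"cfg_a": (0.84, 0.02), "cfg_b": (0.82, 0.08), "cfg_c": (0.86, 0.01)}
best_seen = 0.85
ranked = sorted(candidates,
                key=lambda c: expected_improvement(*candidates[c], best_seen),
                reverse=True)
print(ranked)  # ['cfg_b', 'cfg_c', 'cfg_a']
```

Note that the high-uncertainty candidate ranks first even with the lowest mean: EI trades off exploitation against the chance of a large upside, which is what "experiment selection under competing objectives" means in practice.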
Interpreting what a run means
Causal feedback, Bayesian testing, rule mining, and information-theoretic analysis would make the layer more explicit about uncertainty, attribution, and evidence quality.

Responding to drift and novelty
Continual learning and scientific discovery modules would extend the layer toward adaptation across non-stationary settings, but these remain roadmap items rather than active claims.
Roadmap modules (B.6.1-B.6.10)
The roadmap table is retained because it anchors future work in named methods and citations rather than in vague references to intelligence.
| Module | Theory | Citation |
|---|---|---|
| Meta-Features | Algorithm selection from dataset characteristics | Rice, 1976 |
| Surrogates | GP-based Bayesian optimization with acquisition functions | Snoek et al., 2012 |
| Causal Feedback | ATE, IPW, interrupted time series validation | Pearl, 2009 |
| Experiment Design | Information gain, SPRT, active learning | Chaloner & Verdinelli, 1995 |
| Rule Mining | ILP rules with probabilistic calibration, Bayesian networks | Muggleton & De Raedt, 1994 |
| Multi-Objective | NSGA-II Pareto optimization with fairness constraints | Deb et al., 2002 |
| Continual Learning | EWC, drift detection, experience replay | Kirkpatrick et al., 2017 |
| Bayesian Testing | Signed-rank with ROPE, Friedman, bootstrap | Benavoli et al., 2017 |
| Information Theory | Entropy, MI, MDL, KL divergence, feature ranking | Cover & Thomas, 2006 |
| Scientific Discovery | Surprise detection, symbolic regression, novelty search | Peirce, 1903 |
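As a sketch of the roadmap's Bayesian testing module, the ROPE idea from Benavoli et al. (2017) can be illustrated with bootstrap samples of a paired score difference between two configurations. The data, ROPE width, and function names are hypothetical, and this is not validated framework behavior:

```python
import random

random.seed(1)

def rope_decision(diffs, rope=(-0.01, 0.01)):
    """Classify samples of a metric difference against a region of practical equivalence."""
    n = len(diffs)
    return {
        "worse": sum(d < rope[0] for d in diffs) / n,
        "practically_equivalent": sum(rope[0] <= d <= rope[1] for d in diffs) / n,
        "better": sum(d > rope[1] for d in diffs) / n,
    }

# Bootstrap resampling of paired score differences (config B minus config A).
paired_diffs = [0.020, 0.015, -0.002, 0.031, 0.008, 0.026, 0.012, 0.004]
samples = [sum(random.choices(paired_diffs, k=len(paired_diffs))) / len(paired_diffs)
           for _ in range(2000)]
probs = rope_decision(samples)
print(probs)
```

Unlike a bare p-value, the three probabilities make evidence quality explicit: a difference can be credibly nonzero yet still land mostly inside the ROPE, i.e. too small to matter.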
Claims, inferences, and explicit limits
The page distinguishes what is demonstrated in the framework build from what is reasonably inferred and what remains out of scope.
Implemented support for adaptive experimentation
- Persistent experiment memory across runs.
- Cross-experiment ranking and Bayesian feedback updates.
- Dataset-aware defaults and reusable hypothesis scaffolding.
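The sigmoid confidence scoring named in the Adaptive Defaults row can be sketched as follows; the threshold, midpoint, and configuration names are hypothetical illustrations, not the module's actual parameters:

```python
import math

def sigmoid_confidence(n_runs, midpoint=5, scale=1.0):
    """Map the number of prior runs on a dataset to a confidence score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(n_runs - midpoint) / scale))

def adaptive_default(history, fallback="cfg_baseline", threshold=0.7):
    """Return the historically best config once confidence clears the threshold;
    otherwise fall back to a generic starting configuration."""
    if sigmoid_confidence(len(history)) < threshold:
        return fallback
    return max(history, key=lambda run: run[1])[0]

# Two prior runs on this dataset: confidence is still low, so the fallback wins.
history = [("cfg_small", 0.74), ("cfg_large", 0.86)]
print(adaptive_default(history))      # 'cfg_baseline'
print(adaptive_default(history * 4))  # 8 runs: confidence is high -> 'cfg_large'
```

The sigmoid keeps the layer conservative: defaults only shift away from the baseline once enough accumulated runs back the recommendation.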
A platform that can accumulate experimental knowledge
The implemented modules support a credible claim that SNPTX is structured for cumulative experimentation rather than isolated benchmark runs. That inference depends on the concrete modules above, not on unimplemented roadmap items.
No claim of closed-loop autonomous discovery
This page does not claim independent scientific reasoning, diagnosis, therapeutic recommendation, or deployment-ready autonomous control of laboratory or clinical workflows.