Adaptive experimentation layer

SNPTX includes a bounded intelligence surface for experiment memory, ranking, feedback, and design scaffolding. This page separates modules present in the current framework build from literature-backed modules specified for later integration.

Intelligence view
Adaptive experimentation without autonomous overclaim

Bounded intelligence for cumulative experimentation, explicit roadmap, and disciplined scope.

The intelligence layer is not presented as a general autonomous research agent. In the current SNPTX framework build it refers to concrete support for persistent experiment memory, cross-run ranking, Bayesian feedback updates, dataset-aware defaults, and reusable hypothesis templates. A separate roadmap identifies additional modules grounded in optimization, causal inference, continual learning, and information theory, but those modules are specified rather than validated runtime behavior.

Implemented now

Five operational modules

The current build exposes experiment cataloging, cross-run analysis, feedback updating, adaptive defaults, and hypothesis templates as the active intelligence surface.

Specified next

Ten roadmap modules

The roadmap extends the layer toward experiment design, optimization, causal analysis, continual learning, and information-theoretic ranking, with citations retained for each module.

Out of scope

No claim of autonomous science

The page does not imply autonomous discovery, clinical decision-making, or universally validated closed-loop experimentation beyond the bounded modules listed here.

Intelligence architecture

Operational modules, experiment memory, and the declared roadmap boundary

The current build centers on cumulative experiment state and bounded decision support. Roadmap modules are shown as future analytical attachments rather than active control claims.

[Architecture diagram: operational modules and the declared roadmap boundary]

Current framework build (operational today):
  • Experiment Catalog: DuckDB-backed cumulative record of prior runs; indexed state and outcomes.
  • Meta-Analysis Engine: cross-run ranking and pattern extraction; best-configuration comparison across experiments.
  • Feedback Loop v2: Bayesian confidence updates and Thompson-style selection support.
  • Adaptive Defaults: dataset-aware initialization and starting points before tuning.
  • Hypothesis Templates: reusable trigger-driven prompts for experimental follow-up.
  • Execution Interface: selection support returns signals to the experiment spine, not a self-governing lab loop.

Specified roadmap surface (specified only):
  • Experiment Design and Optimization: meta-features, surrogates, active design, multi-objective search.
  • Causal and Statistical Inference: causal feedback, Bayesian testing, rule mining, information theory.
  • Continual and Discovery Modules: continual learning, novelty search, symbolic discovery components.

Declared boundary: cited modules remain design intent until integrated and validated in the framework runtime.

Interpretation

What the layer actually does today

In the present build, intelligence functions as a bounded experimental memory and decision-support surface. It accumulates prior runs, updates confidence, ranks candidate configurations, proposes reusable follow-up structures, and hands those outputs back to the broader execution framework.
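
As a concrete illustration, the sketch below shows how recording a run and ranking prior configurations might look against a DuckDB-backed catalog. The file name, four-column schema, and helper functions are simplified assumptions for this page, not the framework's actual 16-column interface.

```python
# Minimal sketch of the persistent catalog and cross-run ranking, assuming a
# DuckDB file and a simplified schema (the real catalog uses 16 columns).
import duckdb

con = duckdb.connect("experiments.duckdb")  # hypothetical file name
con.execute("""
    CREATE TABLE IF NOT EXISTS experiment_catalog (
        run_id    INTEGER,
        dataset   VARCHAR,
        config    VARCHAR,   -- serialized configuration
        metric    DOUBLE     -- primary outcome used for ranking
    )
""")

def record_run(run_id: int, dataset: str, config: str, metric: float) -> None:
    """Append one finished run so later experiments can reuse its outcome."""
    con.execute(
        "INSERT INTO experiment_catalog VALUES (?, ?, ?, ?)",
        [run_id, dataset, config, metric],
    )

def best_configs(dataset: str, k: int = 3) -> list:
    """Cross-run ranking: top-k configurations observed so far for a dataset."""
    rows = con.execute(
        """SELECT config, AVG(metric) AS avg_metric
           FROM experiment_catalog
           WHERE dataset = ?
           GROUP BY config
           ORDER BY avg_metric DESC""",
        [dataset],
    ).fetchall()
    return rows[:k]
```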

Operational modules in the current build

The table below defines the active intelligence surface: each row describes a module currently represented in the framework rather than a purely conceptual addition.

Implemented intelligence

Operational modules (B1-B5)

The present build supports cumulative experiment memory, ranking, feedback updates, and templated follow-up logic without claiming a fully autonomous experimentation engine.

| Module | Function | Theory |
| --- | --- | --- |
| Experiment Catalog | DuckDB-backed persistent store, 16 columns, indexed | Cumulative experimentation |
| Meta-Analysis Engine | Cross-experiment pattern discovery, best-config ranking | Bayesian rank aggregation |
| Feedback Loop v2 | Bayesian confidence updating, Thompson sampling | Beta-Binomial posterior (Agrawal & Goyal, 2012) |
| Adaptive Defaults | Dataset-aware starting configuration | Sigmoid confidence scoring |
| Hypothesis Templates | 7 declarative templates with trigger conditions | Declarative hypothesis systems |
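
The Feedback Loop v2 row names Beta-Binomial Thompson sampling, which can be sketched in a few lines. The configuration names, uniform priors, and binary success signal below are illustrative placeholders, not the module's real interface.

```python
# Hedged sketch of Beta-Binomial Thompson-style selection over candidate
# configurations (after Agrawal & Goyal, 2012); names and counts are illustrative.
import random

# Posterior state per configuration: Beta(successes + 1, failures + 1).
posterior = {"cfg_a": [1, 1], "cfg_b": [1, 1], "cfg_c": [1, 1]}

def select_config() -> str:
    """Draw one sample from each Beta posterior; run the best draw."""
    draws = {c: random.betavariate(a, b) for c, (a, b) in posterior.items()}
    return max(draws, key=draws.get)

def update(config: str, success: bool) -> None:
    """Bayesian confidence update: increment the matching Beta parameter."""
    a, b = posterior[config]
    posterior[config] = [a + 1, b] if success else [a, b + 1]

# One feedback cycle: pick, observe, update.
chosen = select_config()
update(chosen, success=True)  # the outcome would come from the actual run
```
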
Why this matters

These modules move the framework beyond one-off benchmarking by allowing runs to accumulate state across experiments. The claim is modest but important: SNPTX can retain and reuse experimental information. It does not claim to replace researcher judgment or wet-lab confirmation.

Specified roadmap and cited research basis

The roadmap remains useful because it states intended analytical directions precisely, but it must be read as literature-backed specification until the modules are integrated and validated.

Selection and design

Choosing what to run next

Meta-features, surrogates, experiment design, and multi-objective search would extend the current ranking surface toward more explicit experiment selection under competing objectives.
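
For concreteness, a surrogate-based selection step in the spirit of Snoek et al. (2012) could look like the sketch below. This is a roadmap illustration under assumed scikit-learn and SciPy tooling, not an implemented SNPTX module; the one-dimensional configuration space and scores are invented for the example.

```python
# Roadmap illustration only: expected-improvement acquisition over a GP surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    """EI for maximization: how much each candidate may beat the incumbent."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

# Fit the surrogate on past (config, score) pairs, then pick the next config.
X_seen = np.array([[0.1], [0.4], [0.8]])     # illustrative 1-D config space
y_seen = np.array([0.62, 0.71, 0.55])
gp = GaussianProcessRegressor().fit(X_seen, y_seen)

X_cand = np.linspace(0, 1, 101).reshape(-1, 1)
next_config = X_cand[np.argmax(expected_improvement(gp, X_cand, y_seen.max()))]
```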

Inference and evaluation

Interpreting what a run means

Causal feedback, Bayesian testing, rule mining, and information-theoretic analysis would make the layer more explicit about uncertainty, attribution, and evidence quality.
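
As one example of information-theoretic evidence ranking (Cover & Thomas, 2006), the roadmap-only sketch below scores candidate features by mutual information with an outcome. The data and variable names are synthetic placeholders, not framework state.

```python
# Roadmap illustration only: feature ranking by mutual information.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # four candidate features
y = (X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)  # driven by feature 1

mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]               # highest-evidence features first
print(list(ranking), mi.round(3))
```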

Adaptation over time

Responding to drift and novelty

Continual learning and scientific discovery modules would extend the layer toward adaptation across non-stationary settings, but these remain roadmap items rather than active claims.
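
Drift detection could take many forms; one minimal possibility is a Page-Hinkley style detector over a stream of run metrics, sketched below. This illustrates the roadmap direction only, is not a module in the current build, and uses arbitrary thresholds.

```python
# Roadmap illustration only: Page-Hinkley style drift detection on a metric stream.
def page_hinkley(stream, delta=0.005, threshold=1.0):
    """Return the index where cumulative upward deviation exceeds the threshold."""
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t               # running mean of the metric
        cum += x - mean - delta              # cumulative positive deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > threshold:
            return t                         # drift detected at step t
    return None

stable = [0.70] * 50
shifted = [0.90] * 50                        # metric jumps: drift
print(page_hinkley(stable + shifted))
```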

Specified roadmap

Roadmap modules (B.6.1-B.6.10)

The roadmap table is retained because it anchors future work in named methods and citations rather than in vague references to intelligence.

| Module | Theory | Citation |
| --- | --- | --- |
| Meta-Features | Algorithm selection from dataset characteristics | Rice, 1976 |
| Surrogates | GP-based Bayesian optimization with acquisition functions | Snoek et al., 2012 |
| Causal Feedback | ATE, IPW, interrupted time series validation | Pearl, 2009 |
| Experiment Design | Information gain, SPRT, active learning | Chaloner & Verdinelli, 1995 |
| Rule Mining | ILP rules with probabilistic calibration, Bayesian networks | Muggleton & de Raedt, 1994 |
| Multi-Objective | NSGA-II Pareto optimization with fairness constraints | Deb et al., 2002 |
| Continual Learning | EWC, drift detection, experience replay | Kirkpatrick et al., 2017 |
| Bayesian Testing | Signed-rank with ROPE, Friedman, bootstrap | Benavoli et al., 2017 |
| Information Theory | Entropy, MI, MDL, KL divergence, feature ranking | Cover & Thomas, 2006 |
| Scientific Discovery | Surprise detection, symbolic regression, novelty search | Peirce, 1903 |
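
To make one roadmap row concrete, the sketch below extracts a Pareto front of non-dominated configurations under two competing objectives, the basic relation NSGA-II builds on (Deb et al., 2002). The candidates and objectives are invented for illustration.

```python
# Roadmap illustration only: Pareto front under (maximize accuracy, minimize cost).
def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Keep only configurations not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (accuracy, cost) pairs for illustrative candidate configurations
candidates = [(0.91, 4.0), (0.89, 2.0), (0.85, 1.0), (0.80, 3.0)]
print(pareto_front(candidates))  # (0.80, 3.0) is dominated by (0.89, 2.0)
```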

Claims, inferences, and explicit limits

The page distinguishes what is demonstrated in the framework build from what is reasonably inferred and what remains out of scope.

Demonstrated

Implemented support for adaptive experimentation

  • Persistent experiment memory across runs.
  • Cross-experiment ranking and Bayesian feedback updates.
  • Dataset-aware defaults and reusable hypothesis scaffolding.

Inferred but bounded

A platform that can accumulate experimental knowledge

The implemented modules support a credible claim that SNPTX is structured for cumulative experimentation rather than isolated benchmark runs. That inference depends on the concrete modules above, not on unimplemented roadmap items.

Out of scope

No claim of closed-loop autonomous discovery

This page does not claim independent scientific reasoning, diagnosis, therapeutic recommendation, or deployment-ready autonomous control of laboratory or clinical workflows.