Scope and citation convention
The bibliography is the literature actually invoked elsewhere on the site. It is not a survey of the field and does not attempt to cover every related result. Each group corresponds to a recurring concern in the framework — reproducibility, design, uncertainty, causality, graphs, fusion, and biomedical grounding — in roughly the order those concerns appear across the pages.
Each entry has a stable anchor of the form #ref-author-year. Other pages may link to a specific reference directly, e.g. references.html#ref-zitnik-2018. Where a DOI or arXiv identifier is available it is linked inline.
Reproducibility & scientific method
Grounding for the framework's emphasis on declared interfaces, persisted artifacts, and explicit deployment scope.
- Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, doi:10.1038/533452a
- Hutson, M. (2018). Artificial intelligence faces reproducibility crisis. Science, doi:10.1126/science.359.6377.725
- Peirce, C. S. (1903). Pragmatism as a Principle and Method of Right Thinking.
Experimental design & optimization
Bayesian design, bandit theory, multi-objective search, and the algorithm-selection framing that motivate SNPTX's experimentation surface.
- Agrawal, S. & Goyal, N. (2012). Analysis of Thompson Sampling for the Multi-armed Bandit Problem. COLT. arXiv:1111.1797
- Chaloner, K. & Verdinelli, I. (1995). Bayesian experimental design: A review. Statistical Science, doi:10.1214/ss/1177009939
- Deb, K., Pratap, A., Agarwal, S. & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, doi:10.1109/4235.996017
- Rice, J. R. (1976). The algorithm selection problem. Advances in Computers, doi:10.1016/S0065-2458(08)60520-3
- Snoek, J., Larochelle, H. & Adams, R. P. (2012). Practical Bayesian Optimization of Machine Learning Algorithms. NeurIPS. arXiv:1206.2944
Uncertainty & calibration
Sources used where the framework discusses calibrated outputs, evidential confidence, and information-theoretic bounds on representation.
- Sensoy, M., Kaplan, L. & Kandemir, M. (2018). Evidential Deep Learning to Quantify Classification Uncertainty. NeurIPS. arXiv:1806.01768
- Vovk, V., Gammerman, A. & Shafer, G. (2005). Algorithmic Learning in a Random World. Springer. doi:10.1007/b106715
Causal & logical inference
Foundational references for causal reasoning, inductive logic programming, and the information-bottleneck view of representation learning.
- Cover, T. M. & Thomas, J. A. (2006). Elements of Information Theory (2nd ed.). Wiley. doi:10.1002/047174882X
- Muggleton, S. & De Raedt, L. (1994). Inductive logic programming: Theory and methods. Journal of Logic Programming, doi:10.1016/0743-1066(94)90035-3
- Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge University Press. doi:10.1017/CBO9780511803161
- Tishby, N., Pereira, F. C. & Bialek, W. (2000). The Information Bottleneck method. arXiv preprint. arXiv:physics/0004057
Graph learning & equivariance
Methods relevant to the molecular and network-level representations used in the biomedical workloads.
- Satorras, V. G., Hoogeboom, E. & Welling, M. (2021). E(n) Equivariant Graph Neural Networks. ICML. arXiv:2102.09844
- Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K. & Riley, P. (2018). Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. arXiv preprint. arXiv:1802.08219
Multimodal representation & fusion
Background for the fusion page: tensor-based multimodal combination, masked self-supervision, and continual-learning constraints.
- He, K., Chen, X., Xie, S., Li, Y., Dollár, P. & Girshick, R. (2022). Masked Autoencoders Are Scalable Vision Learners. CVPR. arXiv:2111.06377
- Kirkpatrick, J. et al. (2017). Overcoming catastrophic forgetting in neural networks. PNAS, doi:10.1073/pnas.1611835114
- Zadeh, A., Chen, M., Poria, S., Cambria, E. & Morency, L.-P. (2017). Tensor Fusion Network for Multimodal Sentiment Analysis. EMNLP. arXiv:1707.07250
Biomedical foundation models & network medicine
Domain literature on pre-trained biomedical models, network-level disease representation, and zero-shot therapeutic prediction.
- Barabási, A.-L., Gulbahce, N. & Loscalzo, J. (2011). Network medicine: a network-based approach to human disease. Nature Reviews Genetics, doi:10.1038/nrg2918
- Huang, K. et al. (2024). A foundation model for clinician-centered drug repurposing. Nature Medicine, doi:10.1038/s41591-024-03233-x
- Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H. & Kang, J. (2020). BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, doi:10.1093/bioinformatics/btz682
- Zitnik, M. & Leskovec, J. (2017). Predicting multicellular function through multi-layer tissue networks. Bioinformatics, doi:10.1093/bioinformatics/btx252
- Zitnik, M., Agrawal, M. & Leskovec, J. (2018). Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, doi:10.1093/bioinformatics/bty294