Research
The Foundation conducts pre-registered empirical research on the rigorous measurement and reliable production of verifiable software, and scholarly research on the methodology of empirical inquiry itself. All studies are deposited on the Open Science Framework before data collection begins, and all findings are released through peer-reviewed publication and open-access repositories with persistent digital identifiers. Replication packages accompany each study.
The Foundation commits to the public-interest publication standard articulated in Treasury Regulation §1.501(c)(3)-1(d)(5)(iii) and Revenue Ruling 76-296: substantially all the information that would be useful to the interested public is published, and any patents, copyrights, processes, formulae, or other intellectual property generated by the Foundation's research are made available to the public on a nondiscriminatory basis. No research output is gated behind paywalls, institutional subscriptions, or non-disclosure agreements.
Active research programs
1. Software-quality measurement: construct and predictive validity
Pre-registered controlled studies investigating whether the Slop Audit, a multi-dimensional measurement instrument anchored in established compliance frameworks, demonstrates construct validity for the structural-risk categories that conventional measures do not detect, and whether its assessment scores predict deployment-time outcomes (defect emergence, security-finding density, durability under collaborative extension).
- OSF block: osf.io/dbsyg · DOI 10.17605/OSF.IO/DBSYG
- Papers A-C in the Slop Audit research arc; manuscript drafting in progress.
- Institutional collaborator of record: Mili, UQAM/LATECE (instrument-validation program).
2. The Language-Only Hypothesis: cross-linguistic transformer-training experiments
Pre-registered controlled cross-linguistic transformer-training experiments investigating the contribution of language structure itself, separately from architecture or compute scale, to the apparent capabilities of large language models. The findings inform the public scientific record on computational linguistics, the philosophy of mind, and the structural properties of human language as a cognitive artifact.
- OSF block: osf.io/sj48b · DOI 10.17605/OSF.IO/SJ48B (registered 2025-12-06)
- OSF Exp9 follow-up: osf.io/9pgts (registered 2026-01-05): engineered morphology vs natural languages at 350M scale, testing WALS-based predictions.
- Reporting paper: Wasserman 2026, The Scaling Hypothesis Is Language-Contingent, Zenodo DOI 10.5281/zenodo.19423151.
- Conference paper: Wasserman & Beauchemin 2026, Right Tool, Right Job: Why Training Language Matters More Than Training Data, BabyLM 2026 / ACL Rolling Review (EMNLP 2026 Budapest target).
- Institutional collaborator of record: David Beauchemin, Université Laval.
3. Construction-discipline interventions on AI-generated code
Pre-registered controlled studies testing whether a specified construction-time discipline (the Honest Framework) shifts the trustworthiness of AI-generated code in measurable ways across multiple programming languages, multiple frontier AI systems, and multiple measurement endpoints.
- OSF block: DBSYG (Paper B in the Slop Audit research arc).
- Pre-registration in preparation; design parameters being finalized with collaborators.
4. Reference-codebase quality as input to AI code generation
Studies investigating the systematic effects of starting-point codebase properties on the trustworthiness of subsequent AI-assisted modification, and the design of reference materials that resist the documented degradation pattern. This research arm is in preparation.
5. Prompting strategies as systematic interventions on AI code generation
Studies testing whether specified prompting strategies produce measurably better-than-mean code across deployment contexts, and which design properties of those strategies generalize. This research arm builds on the Foundation's published Process Discipline natural-experiment work and prior axiomatic-prompting research.
Methodology research
The convergence-of-signals method
Methodological framework formalizing the inference licensed when multiple independent instruments (empirical experiments, philosophical inquiries, theological traditions) meet at the same boundary. Peer-reviewable specification with worked examples in preparation, to be released as a standalone open-access deposit under permissive license with a persistent digital identifier.
The instrument-aware scholarly discipline
Methodological position that every scientific instrument has limits, and that the discipline of asking what each instrument can and cannot adjudicate is foundational rather than peripheral. Scholarly outputs articulating this discipline and its application across multiple domains, including the recognition that large language models are instruments of observation rather than instruments of generation. See Epistemology.
Cross-traditional scholarly convenings
The Foundation hosts convenings that bring together scholars from philosophy, theology, computational linguistics, cognitive science, and adjacent humanities and social-science fields around the convergence-of-signals method. The convenings produce peer-reviewable proceedings and method specifications. The Replicators, a Foundation publication authored by the founder as part of his charitable scholarly service, serves as the focal vehicle for these convenings.
Publications
- Wasserman 2026, The Scaling Hypothesis Is Language-Contingent, Zenodo DOI 10.5281/zenodo.19423151
- Wasserman & Beauchemin 2026, Right Tool, Right Job: Why Training Language Matters More Than Training Data, BabyLM 2026 / ACL Rolling Review submission
- OSF DBSYG block: Open Honest: Pre-Registered Research Program for Enterprise Software Finite Testability and AI-Assisted Development Quality, registered 2026-04-12, osf.io/dbsyg
- OSF SJ48B block: The Language-Only Hypothesis: Emergent Capabilities in Large Language Models Are Properties of Natural Language Structure, Not of Neural Networks or Scale — First Empirical Test, registered 2025-12-06, osf.io/sj48b
- OSF 9pgts: Exp9: Can Engineered Morphology Outperform Natural Languages? Testing WALS-Based Predictions at 350M Scale, registered 2026-01-05, osf.io/9pgts
Confirmed institutional collaborators of record
- David Beauchemin, Université Laval, co-author on Right Tool, Right Job and collaborator on the cross-linguistic transformer-training research program.
- Mili, UQAM/LATECE, institutional collaborator of record on the Slop Audit instrument-validation research program; institutional PI of record for the Schmidt Sciences 2027 Trustworthy AI Tier 2 program (UQAM/LATECE as institutional applicant).
Open-research commitments
- Pre-registration on the Open Science Framework before data collection.
- Open-access deposit on Zenodo with persistent DOIs.
- Replication packages with each publication.
- Apache-2.0 for code; CC-BY-NC-4.0 for content; CC-BY-SA-4.0 for Foundation publications where convening proceedings are integrated.
- No paywall, no institutional-subscription gating, no non-disclosure agreements on research output.