Frameworks
The Foundation hosts three open standards. Each is independent and can be adopted on its own, without committing to the others.
Honest Framework
Honest is a specification for writing code that is correct by design. It is language-agnostic: any object-oriented or imperative language can implement it. The patterns and rules are the same; only the syntax changes.
In most cases you can also keep the application framework you already use: Rails in Ruby, Django or FastAPI in Python, Spring in Java, Express in Node, ASP.NET in C#. Honest layers on top of these frameworks, not in place of them. You do not have to rewrite your application to start using it.
What does Honest actually require? In the simplest terms, two rules. First: write your decision logic as tables (rows of conditions, columns of outcomes) instead of as nested if-then-else chains and loops. Second: any single location in memory may be changed by exactly one line of code in the whole system. Together, these two rules stop your program's possible states from multiplying out of control. They give you code you can check by reading instead of code you can only test by running every possible case.
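To make the two rules concrete, here is a minimal sketch in plain Python. It does not use the Honest specification's own notation, and every name in it (SHIPPING_TABLE, Account, apply_charge) is a hypothetical illustration, not part of the spec:

```python
from dataclasses import dataclass

# Rule 1: decision logic as a table. Each row pairs a full set of
# conditions with its outcome, so coverage is checkable by reading.
# Hypothetical shipping rules: (is_member, order >= 100) -> cost.
SHIPPING_TABLE = [
    # member, big_order, shipping cost
    (True,  True,  0.00),
    (True,  False, 2.50),
    (False, True,  5.00),
    (False, False, 9.00),
]

def shipping_cost(is_member: bool, total: float) -> float:
    """Look up the outcome; all four condition combinations are visible rows."""
    big_order = total >= 100.0
    for member, big, cost in SHIPPING_TABLE:
        if member == is_member and big == big_order:
            return cost
    raise AssertionError("unreachable: the table is exhaustive")

# Rule 2: single-writer state. Exactly one line in the whole program
# writes to `balance` -- the assignment inside apply_charge.
@dataclass
class Account:
    balance: float

    def apply_charge(self, amount: float) -> None:
        # The sole write to `self.balance` anywhere in the system.
        self.balance = self.balance - amount

if __name__ == "__main__":
    print(shipping_cost(is_member=True, total=120.0))  # 0.0
    account = Account(balance=50.0)
    account.apply_charge(12.5)
    print(account.balance)                             # 37.5
```

Reading the table tells you every behaviour the function can have, and any question about `balance` has exactly one line to inspect.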
Most ways to write provably correct software require you to learn a new language (Haskell, Idris, Coq) or to fight with complex type systems. Honest does not. You stay in your normal language and your normal framework. The specification gives you patterns and conformance rules that make your code checkable by reading its structure, instead of by trying to test every possible state.
Honest is general-purpose. It applies anywhere you write code that has to be correct:
- Scientific code (climate models, physics simulations, AI systems)
- Financial systems
- Healthcare records and decision-support tools
- Civic infrastructure
- Security tools
- Voting and election systems
- Accessibility software
- Any safety-critical or compliance-sensitive system
Status:
- Specification: structurally complete and stable, with substantial Markdown documentation and a formal conformance suite of approximately 50 laws across 7 modules.
- Python reference implementation: in active development. Working honest-type and honest-test modules; remaining modules in progress.
- Reference implementations in JavaScript, Ruby, and other languages: planned. The conformance suite is the contract any new implementation must satisfy.
- Released under Apache-2.0.
- honestframework.software — full specification, conformance suite, reference implementation
- GitHub repository — coming soon under the Open Honest organization
Slop Audit
The Slop Audit is the Foundation's measurement instrument for software quality. It scores any codebase against the published thresholds of named compliance frameworks: SOC 2 Trust Services Criteria, NIST SP 800-53, OSFI B-13, OWASP ASVS, ISO/IEC 25010, WCAG 2.2, Section 508, EN 301 549, AODA, and Quebec Law 25. Scoring spans 18 dimensions, covering security architecture, data architecture, compliance engineering, operational security, performance engineering, operations, DevOps, infrastructure, software architecture, governance, process engineering, lifecycle management, and software development.
The Slop Audit is independent of the Honest Framework. It applies to any codebase, in any language, regardless of architectural style. The audit produces an evidence-based, reproducible score; the Honest Framework is one rigorous way to pass the audit by construction, but other architectures and methodologies can pass too.
What the audit measures:
- 20 Layer 1 quantitative indicators computed mechanically from git history and static analysis (mutable-state ratio, decision-space coverage, test determinism, delete/add ratio, secret scan, type-escape density, fuzzy duplication, god-file concentration, and 12 others)
- 18 Layer 2 per-dimension artifact inspections with mechanical scoring (Present / Partial / Absent / Not Applicable) backed by cited evidence
- 18 Layer 3 qualitative specified-marker assessments by trained assessors
- SOC 2 deliverable extraction: a compliance-evidence package a CIO can hand to their SOC 2 auditor as a free byproduct of the Phase 0 audit
The first cross-language measurement of one Layer 1 indicator (the mutable-state ratio) on 200 public open-source codebases finds that approximately 99% of non-React enterprise codebases are structurally incapable of exhaustive behavioural verification. The methodology and underlying paper are published under the Open Honest research program.
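The published methodology defines the exact L1.18 formula; purely as an illustration of what a mechanically computed Layer 1 indicator looks like, here is a naive Python sketch that calls a name mutable state when it is assigned from more than one site in a module. The counting rule and the file path are assumptions for the example, not the audit's published definition:

```python
import ast
from collections import Counter
from pathlib import Path

def assignment_sites(tree: ast.AST) -> Counter:
    """Count distinct assignment sites per plain variable name."""
    sites: Counter = Counter()
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            targets = node.targets
        elif isinstance(node, ast.AugAssign):
            targets = [node.target]
        elif isinstance(node, ast.AnnAssign) and node.value is not None:
            targets = [node.target]
        else:
            continue
        for target in targets:
            if isinstance(target, ast.Name):
                sites[target.id] += 1
    return sites

def mutable_state_ratio(source: str) -> float:
    """Fraction of assigned names written from more than one site.
    A naive proxy: it ignores scopes, attributes, and containers."""
    sites = assignment_sites(ast.parse(source))
    if not sites:
        return 0.0
    rebound = sum(1 for count in sites.values() if count > 1)
    return rebound / len(sites)

if __name__ == "__main__":
    # "example.py" is a placeholder path for the demonstration.
    code = Path("example.py").read_text()
    print(f"mutable-state ratio: {mutable_state_ratio(code):.2f}")
```

A real indicator would resolve scopes and span languages; the point is only that the number falls out of parsing, with no human judgment involved.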
Status:
- Methodology: complete. Approximately 45,000 words of formally documented procedures across 18 dimensions, four layers, and the SOC 2 deliverable extraction.
- Paper 1 (200-repository L1.18 measurement): manuscript in draft; replication package public.
- Independent validation (Paper 2 instrument validation): pre-registration drafted; awaiting assessor confirmation from academic collaborators.
- Released under Apache-2.0.
- GitHub repository — coming soon under the Open Honest organization
MÉTRON Framework
MÉTRON is a measurement-instrument family for cross-linguistic AI research. It comprises trained model checkpoints (currently MÉTRON-FR; a 12-language family in preparation) and, in active development, a no-code platform that lets researchers run controlled cross-language experiments without ML expertise or expensive hardware. You set up the experiment in a web interface; the platform runs it on dedicated GPUs at low cost.
MÉTRON also includes a community channel where native speakers of any language can build their own grammar test sets and get free compute in exchange for sharing those test sets back to the open release.
The method comes from published work:
- Wasserman 2026, The Scaling Hypothesis Is Language-Contingent (Zenodo 19423151)
- Wasserman & Beauchemin, Right Tool, Right Job: Why Training Language Matters More Than Training Data (BabyLM 2026 / EMNLP 2026 Budapest)
MÉTRON makes this method usable by people who are not ML engineers: philosophers of language, theologians, computational linguists, cognitive scientists, and cross-tradition scholars.
Status:
- Methodology: proven by published research (BabyLM 2026 / EMNLP 2026 paper).
- No-code platform: in alpha development. Target release: EMNLP 2026 system demonstrations track (submission August–September 2026).
- Released under Apache-2.0.
- Status updates and documentation: coming soon
- GitHub repository: coming soon under the Open Honest organization