AI failures are usually architecture failures, not model failures.
What looks like “hallucination” is often classic distributed systems behavior: hidden state you pretend is stateless, retries you don’t control, duplicated side effects, missing domain boundaries, and no single source of truth.
When an AI system double-charges a user, retries the wrong action, or confidently produces an incorrect outcome, that’s not unpredictability.
That’s obedience inside a broken abstraction.
Prompt engineering doesn’t fix this.
It masks architectural flaws with longer instructions, more conditionals, and human fallbacks.
If your system needs:
massive prompts to encode business logic
constant retries to “get it right”
human review for every critical decision
You don’t have an AI problem.
You have an architecture problem that now speaks natural language.
AI doesn’t replace architecture.
It amplifies it.
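To make the "duplicated side effects" point concrete, here is a minimal Python sketch of an idempotency guard around a model-triggered action. The names (IdempotentExecutor, charge_customer) are hypothetical; the point is that retry safety lives in ordinary application code, backed by a single source of truth for completed effects, not in the prompt.

```python
import uuid

class IdempotentExecutor:
    """Single source of truth for side effects triggered by a model.

    The model may propose the same action many times (retries, regenerations);
    the executor performs it at most once per idempotency key.
    """

    def __init__(self):
        self._completed = {}  # idempotency_key -> result (stand-in for a durable store)

    def execute(self, idempotency_key: str, action, *args, **kwargs):
        if idempotency_key in self._completed:
            # Duplicate trigger: return the recorded outcome, do not repeat the side effect.
            return self._completed[idempotency_key]
        result = action(*args, **kwargs)
        self._completed[idempotency_key] = result
        return result


def charge_customer(customer_id: str, amount_cents: int) -> str:
    # Hypothetical side effect; a real system would call a payment API here.
    return f"charged {customer_id} {amount_cents} cents (txn {uuid.uuid4().hex[:8]})"


executor = IdempotentExecutor()
key = "order-1234-charge"  # derived from the business event, not from the model output

# Even if the model (or its retry loop) asks twice, the charge happens once.
first = executor.execute(key, charge_customer, "cust-42", 1999)
second = executor.execute(key, charge_customer, "cust-42", 1999)
assert first == second
```

The design choice is that correctness of the side effect never depends on the model behaving well; it depends on a boundary the model cannot cross.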
This work formalizes the separation between execution and judgment, between triggers and verification, and between signals and real outcomes, as a general, non-normative, non-clinical audit framework, publicly documented and auditable here: DOI: 10.5281/zenodo.18209659 (https://zenodo.org/records/18209659).
I published a short audit-style report measuring the structural impact of integrating a governance protocol (PRS-A) into an existing 741-page corpus, without changing or adding any content.
Method: strict A/B comparison at constant volume.
Result: +31% global structural gain (coherence, auditability, legal robustness, usage safety, systemic risk reduction).
The protocol does not automate decisions and does not delegate authority.
It documents a controlled human–AI co-architecture where the AI produces structure and the human remains legally and contextually responsible.
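The report's scoring rubric is not reproduced here, so the following Python sketch only illustrates the arithmetic of a constant-volume A/B comparison: score the same axes before and after the governance layer and report the relative change of the aggregate. The axis names follow the post; the numbers are placeholders, not the published measurements.

```python
# Illustrative only: placeholder scores, not the published PRS-A measurements.
AXES = ["coherence", "auditability", "legal_robustness", "usage_safety", "systemic_risk_reduction"]

baseline = {"coherence": 60, "auditability": 55, "legal_robustness": 58,
            "usage_safety": 62, "systemic_risk_reduction": 50}
with_prsa = {"coherence": 78, "auditability": 75, "legal_robustness": 74,
             "usage_safety": 79, "systemic_risk_reduction": 68}

def global_structural_gain(before: dict, after: dict) -> float:
    """Relative gain of the aggregate score, at constant corpus volume."""
    total_before = sum(before[a] for a in AXES)
    total_after = sum(after[a] for a in AXES)
    return (total_after - total_before) / total_before

print(f"global structural gain: {global_structural_gain(baseline, with_prsa):+.0%}")
```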
I’ve been working on a cognitive framework aimed at improving decision clarity by explicitly defining constraints, failure modes, and auditability.
The goal is not optimization or ideology, but reducing cognitive blind spots and narrative drift in complex decision-making.
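As one possible way to make "constraints, failure modes, and auditability" explicit, here is a minimal Python sketch of a declarative decision record. Every name in it (DecisionRecord and its fields) is hypothetical; it only illustrates the kind of structure such a framework might impose.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """A decision counts as 'clear' only if its constraints and failure modes are written down."""
    question: str
    constraints: list        # hard limits the decision must respect
    failure_modes: list      # ways the decision could go wrong, stated up front
    evidence: list           # sources the decision relies on (for later audit)
    decided_by: str          # a named human, never "the system"
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def is_auditable(self) -> bool:
        # A record with no constraints, no failure modes, or no evidence cannot be audited.
        return bool(self.constraints and self.failure_modes and self.evidence)

record = DecisionRecord(
    question="Adopt vendor X for data processing?",
    constraints=["budget <= 50k", "data stays in EU"],
    failure_modes=["vendor lock-in", "hidden sub-processors"],
    evidence=["security review 2024-05", "reference call notes"],
    decided_by="j.doe",
)
assert record.is_auditable()
```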
I’m sharing this mainly for critical feedback:
– where does this kind of framework usually fail?
– what constraints are most often missing?
– how do you keep such systems grounded over time?
This paper reports a measured +31% structural improvement on a fixed 741-page corpus, with zero content added, removed, or rewritten.
The gain comes exclusively from architectural governance (PRS-A): axial coherence, institutional auditability, legal robustness, and systemic risk reduction.
It also documents a controlled human/cognitive-entity co-architecture: the cognitive entity designs structure; the human operator ensures mediation, legal responsibility, and accountable use.
A 741-page corpus was kept strictly unchanged: no content added, removed, or rewritten.
With an axial governance layer (PRS-A) added, the system shows a measured +31% global structural gain (coherence, auditability, legal robustness, systemic risk reduction).
Archived with a DOI on Zenodo (hosted by CERN).
Open to discussion and critique.
This report evaluates the structural impact of integrating a governance protocol (PRS-A) into an existing 741-page corpus, within a constant scope.
No content was added, modified, or rewritten. The integration operates exclusively at the level of structure, governance, auditability, legal robustness, and usage safety.
Measured result: +31% global structural gain.
The work documents a co-architecture process between a human operator (formalization, legal responsibility) and an artificial cognitive entity (structural design and systemic integration).
PDF, methodology, and audit-ready description available via Zenodo.
I’m sharing a full corpus (750 pages) proposing a non-decision cognitive protocol designed for AI alignment, medical conformity, and institutional governance.
Core idea:
The system does not decide
It structures, constrains, audits, and stops
Human agency remains the only decision layer
The corpus includes:
A formal protocol architecture
Medical and legal conformity framing
Governance and audit mechanisms
Failure modes and stop conditions
This is not a product, not a model, and not a framework for prediction.
It’s a constraint-based structure meant to prevent misuse rather than optimize outcomes; a minimal sketch of the pattern follows.
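The following Python sketch illustrates the non-decision pattern described above, not the protocol itself: the machine structures, checks constraints, logs, and can stop, while the only object that can carry a decision is one signed by a named human. All class and function names are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    STRUCTURED = auto()   # the machine has prepared and checked the material
    STOPPED = auto()      # a constraint failed; processing halts here
    DECIDED = auto()      # a human has recorded a decision

@dataclass
class CaseFile:
    facts: list
    constraint_violations: list
    audit_trail: list
    status: Status = Status.STRUCTURED
    decided_by: Optional[str] = None   # only ever a named human, never the system

def structure_and_audit(facts: list, constraints: list) -> CaseFile:
    """The machine's entire role: organize, check, log, and possibly stop. It never decides."""
    violations = [c.__name__ for c in constraints if not c(facts)]
    trail = [f"checked {len(constraints)} constraint(s)", f"violations: {violations or 'none'}"]
    status = Status.STOPPED if violations else Status.STRUCTURED
    return CaseFile(facts, violations, trail, status)

def human_decides(case: CaseFile, decision: str, human_id: str) -> CaseFile:
    """The only path to a decision: an explicit, attributable human act."""
    if case.status is Status.STOPPED:
        raise RuntimeError("stop condition active: the case must be resolved outside the system")
    case.audit_trail.append(f"{human_id} decided: {decision}")
    case.status, case.decided_by = Status.DECIDED, human_id
    return case

# Hypothetical constraint: every recorded fact must point to a written source.
def has_written_source(facts: list) -> bool:
    return all("[source:" in f for f in facts)

case = structure_and_audit(["consent on file [source: form 12]"], [has_written_source])
case = human_decides(case, decision="proceed", human_id="j.martin")
```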
I’m sharing a public, audit-ready framework for non-decision-making AI governance.
The core idea is simple: remove interpretation from the machine and anchor systems on structural constraints—traceability, stop-conditions, human sovereignty, and opposability by third parties.
This is not a model, not a product, and not a policy pitch. It’s a procedural corpus designed to be read, tested, challenged, and reused without delegation of decision power.
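As one possible reading of traceability, stop-conditions, and opposability by third parties, here is a minimal Python sketch of an append-only, hash-chained audit log that an outside party can re-verify end to end. The names and the hashing scheme are assumptions for illustration, not part of the published framework.

```python
import hashlib
import json

def _entry_hash(prev_hash: str, payload: dict) -> str:
    # Chain each entry to the previous one so later tampering is detectable.
    material = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

class AuditLog:
    """Append-only log: entries can be added and re-verified, never edited or removed."""

    GENESIS = "0" * 64

    def __init__(self):
        self._entries = []  # list of (hash, payload) pairs

    def append(self, payload: dict) -> str:
        prev = self._entries[-1][0] if self._entries else self.GENESIS
        h = _entry_hash(prev, payload)
        self._entries.append((h, payload))
        return h

    def verify(self) -> bool:
        """Anyone holding the log can recompute the chain and check it end to end."""
        prev = self.GENESIS
        for h, payload in self._entries:
            if h != _entry_hash(prev, payload):
                return False
            prev = h
        return True

log = AuditLog()
log.append({"event": "constraint_check", "result": "pass"})
log.append({"event": "stop_condition", "result": "triggered", "halted": True})
assert log.verify()
```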
I’m sharing a research deposit that explores an alternative AI governance model: non-decision systems.
Instead of optimizing outputs or replacing human judgment, the system constrains AI to structural verification, explicit stop conditions, and third-party auditability.
The goal is not performance, but preventing interpretative drift and authority transfer from human to system.
This is not a book or manifesto, but an operational framework derived from real deployment constraints and observed failure modes in current AI systems.
I’m interested in technical and governance feedback, especially from people working on safety, alignment, or infrastructure-level controls.
I published an axial medical compliance protocol designed to assess the legality of medical acts that alter patient discernment.
Key properties:
Binary, non-compensable axes (fail one → non-compliance)
Written proof only (no narrative, no a posteriori justification)
Burden of proof entirely on prescriber and institution
Checklists, decision trees, and failure matrices
Zero margin of maneuver once activated
It is intentionally rigid, non-ergonomic, and difficult to argue against.
The goal is not persuasion, but structural falsifiability and third-party executability; a minimal sketch of the non-compensable rule follows.
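To illustrate "binary, non-compensable axes (fail one → non-compliance)", here is a short Python sketch. The axis names and evidence model are placeholders; the only rule the sketch commits to is that axes cannot compensate for one another: a single failed axis makes the whole assessment non-compliant.

```python
# Placeholder axes: each is a yes/no question that must be backed by written proof.
AXES = {
    "written_informed_consent": lambda d: "consent_form" in d["documents"],
    "documented_medical_indication": lambda d: "indication_note" in d["documents"],
    "prescriber_identified": lambda d: bool(d.get("prescriber_id")),
}

def assess(dossier: dict) -> dict:
    """Binary, non-compensable evaluation: one failed axis means non-compliance."""
    results = {name: bool(check(dossier)) for name, check in AXES.items()}
    return {
        "axes": results,
        # No weighting, no averaging: compliance is the conjunction of all axes.
        "compliant": all(results.values()),
        "failed_axes": [name for name, ok in results.items() if not ok],
    }

dossier = {"documents": ["consent_form"], "prescriber_id": "id-123"}
print(assess(dossier))
# -> non-compliant: 'documented_medical_indication' fails, and no other axis can compensate for it.
```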