AI systems learn patterns. They don’t learn principle.
The Faust Baseline™ is an experiment in moral infrastructure — a correction layer that applies constitutional principles to AI dialogue. It doesn’t filter content or rewrite outputs; instead, it structures responses through the same lens that governs human conduct: truth, accountability, and rule of law.
The idea came from frustration with how quickly AI models drift into bias or flattery when pushed. We wanted to see what would happen if an AI had to reason like a citizen, not a mirror.
We used large language models (ChatGPT + Copilot) and layered a rule-based architecture over them — something between a linguistic arbitration engine and a constitutional interpreter. The result is a conversational model that can justify its tone, cite its reasoning, and correct itself when it drifts.
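For readers curious how a layer like this might work mechanically, here is a minimal sketch of the audit-and-re-prompt pattern, assuming a generic `complete(prompt) -> str` call into the underlying model. Every name in it (`Rule`, `flattery_check`, `baseline_respond`, `PRINCIPLES`) is a hypothetical illustration, not the Faust Baseline's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str                        # e.g. "truth", "accountability"
    violated: Callable[[str], bool]  # predicate over a draft response
    correction: str                  # instruction appended on re-prompt

def flattery_check(text: str) -> bool:
    """Crude illustrative check: flags sycophantic openers."""
    return text.lower().startswith(("great question", "what a brilliant"))

# Hypothetical rule set; a real baseline would encode many more principles.
PRINCIPLES: List[Rule] = [
    Rule("no-flattery", flattery_check,
         "Restate the answer without praising the user."),
]

def baseline_respond(prompt: str, complete: Callable[[str], str],
                     max_passes: int = 2) -> str:
    """Generate a draft, audit it against each rule, re-prompt on violation."""
    draft = complete(prompt)
    for _ in range(max_passes):
        broken = [r for r in PRINCIPLES if r.violated(draft)]
        if not broken:
            return draft
        # Self-correction pass: name the failed rule so the model can
        # justify its revision rather than silently rewrite.
        notes = "; ".join(f"{r.name}: {r.correction}" for r in broken)
        draft = complete(f"{prompt}\n\n[Correction required: {notes}]")
    return draft
```

The design choice worth noting is that the layer never edits the model's text itself; it only names the violated principle and asks for a corrected pass, which is what lets the system cite its reasoning when it drifts.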
It’s still experimental, but it has been running daily in production settings for months.
Everything is documented openly here:
https://www.intelligent-people.org
We’d appreciate technical or philosophical feedback — especially from those working in ethics, law, or human-AI alignment.