You’re raising the key issue: it’s not whether AI can produce an answer, it’s whether an organisation can rely on it, and who is accountable when it fails.
A few points I mostly agree with, and one nuance:
Humans are in the loop today because accountability is clear. You can coach, discipline, replace, or escalate a person. You can’t meaningfully “hold an API responsible” in the same way.
But companies don’t always solve reliability with a person reviewing everything. Over time they often shift to process-based controls: stronger testing, monitoring, audits, fallback procedures, and contractual guarantees (rough sketch of what I mean at the end of this comment). That’s how they already manage critical dependencies they also can’t “fire” overnight (cloud services, core software vendors, etc.).
Vendor lock-in is real—but it’s also a choice firms can mitigate. Multi-vendor options, portability clauses, and keeping an exit path in the architecture are basically the equivalent of being able to replace a bad supplier.
Domains with the least tolerance for faults will keep humans involved longer. The likely change is not “no humans,” but fewer humans overseeing more automated work, with people focused on exceptions, risk ownership, and sign-off in the most sensitive areas.
So yes: we need humans where the downside is serious and someone has to own the risk. My claim is just that as reliability and controls improve, organisations will try to shrink the amount of human review, because that review starts to look like the expensive part of the system.
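To make the process-based-controls point concrete, here is a minimal sketch of the kind of check-then-escalate wrapper I have in mind. It is purely illustrative: `call_model`, `validate`, and `escalate_to_human` are hypothetical stand-ins for whatever model call, automated tests, and review queue an organisation already has, not any particular vendor's API.

```python
# Illustrative only: the callables are placeholders, not a real API.
def review_pipeline(request, call_model, validate, escalate_to_human, max_retries=2):
    """Run automated controls around a model call; humans only see the exceptions."""
    for _ in range(max_retries + 1):
        draft = call_model(request)        # model produces a draft answer
        issues = validate(draft)           # automated tests / policy checks
        if not issues:
            log_for_audit(request, draft)  # monitoring and audit trail
            return draft                   # auto-approved path, no human involved
    # fallback procedure: only persistent failures reach a person
    return escalate_to_human(request, draft, issues)

def log_for_audit(request, draft):
    # placeholder: write to whatever audit store the organisation already uses
    print("audit:", request, "->", str(draft)[:80])
```

The point of the shape is that the expensive human step sits behind the automated ones, which is exactly where the pressure to shrink it comes from.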
> The likely change is not “no humans,” but fewer humans overseeing more automated work, with people focused on exceptions, risk ownership, and sign-off in the most sensitive areas.
The problem is that AI can scale, and compound problems, faster than any human can review or oversee. There need to be better control mechanisms.
A world without friction didn’t set us free—it just made life too smooth to feel real.
As AI erases uncertainty, humans go hunting for something that refuses to behave.
And in the age of algorithms, the last sacred ritual isn’t in a church—it’s in a football stadium.
AI does not erase uncertainty. It's non-deterministic/probabilistic, its internal state can't be meaningfully scrutinized, and the ability to falsify or convincingly mislead has never been easier. AI promulgates uncertainty.
You make a fair observation: AI has not removed uncertainty. Its probabilistic nature, its opaque internals, and its capacity to generate convincing falsehoods all mean that uncertainty is not reduced, but redistributed.
What I was trying to explore is how the locus of uncertainty is shifting. Instead of confronting the unknown in the world, we now confront it inside the systems we build—systems we struggle to fully inspect or explain. The unknown hasn’t vanished; it has moved closer to us, and become harder to hold.
This is where football and religion feel relevant. They are places where uncertainty is not a problem to be solved, but an experience to be shared—ritual, suspense, allegiance, the goal nobody expected. They give form to the unpredictable in a way models never quite can.
So yes, AI promulgates uncertainty. The challenge is not to abolish it, but to decide how we live with it—and where we gather around it.
You're spot on, figassis — the idea of "job security" as we've known it is rapidly becoming outdated. Talent isn't going away; rather, it's becoming fluid, accessible, and globally distributed. Your vision of a Slack-meets-Fiverr platform, where everyone collaborates and offers skills on-demand, aligns perfectly with this shift. Political hurdles may indeed slow things down (let's hope they don't), but there's hope. Historically, technology has a way of breaking down barriers faster than they're built. The real challenge — and opportunity — is designing tools and systems that facilitate trust and seamless collaboration in this global freelance ecosystem.
Nice, thank you, but „Le Locle, Geneva“ as an output is wrong. Le Locle is a municipality in the Canton of Neuchâtel in Switzerland. Even for GPT-4 there is room for improvement…
I do not agree that AI will necessarily lead to a future dystopia. While it's true that AI has the potential to disrupt many industries and potentially lead to job losses in some areas, it's also important to remember that AI can be used to improve people's lives in many ways. For example, AI can be used to help diagnose and treat diseases, improve crop yields and food security, and improve the efficiency of various industries. Additionally, AI can be used to help tackle some of the global challenges you mentioned, such as poverty, war, and disease.
Furthermore, it's important to remember that the development and deployment of AI is not inevitable. It's up to us as a society to decide how we want to use this technology, and to ensure that its benefits are widely distributed. By working together, we can use AI to improve people's lives and create a better future for everyone.
Not the OP, but I believe you are right. That answer is very much the ChatGPT answer structure. Makes me wonder if stylometry can be applied to the default ChatGPT style to detect generated responses.
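For what it's worth, a crude version of that stylometry idea is easy to prototype. The sketch below assumes you already have labelled human-written and ChatGPT-written samples (the two one-item lists are placeholders, not real data); character n-grams plus a linear classifier is a standard stylometry baseline, not a claim that this reliably detects generated text.

```python
# Toy stylometry baseline: character n-gram TF-IDF + logistic regression.
# The sample lists are placeholders; a real experiment needs many labelled texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["not the op but i believe you are right..."]       # placeholder
gpt_texts = ["Furthermore, it's important to remember that..."]   # placeholder
texts = human_texts + gpt_texts
labels = [0] * len(human_texts) + [1] * len(gpt_texts)            # 1 = generated

# Character n-grams capture punctuation habits, hedging phrases and sentence
# rhythm rather than topic, which is what stylometry cares about.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

new_comment = "Additionally, it is important to remember that..."
print(clf.predict_proba([new_comment])[0][1])  # estimated probability of "ChatGPT-style"
```

Whether the default ChatGPT register is distinctive enough to survive light paraphrasing is the real open question.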