Even this post by Martin Fowler shows he's an aging dinosaur stuck in denial.
> I’ve often heard, with decent reason, an LLM compared to a junior colleague. But I find LLMs are quite happy to say “all tests green”, yet when I run them, there are failures. If that was a junior engineer’s behavior, how long would it be before H.R. was involved?
I don't know what "LLM's" he's using but I just simply don't get hallucinations like that with cursor or claude code.
He ends with this:
> LLMs create a huge increase in the attack surface of software systems. Simon Willison described the The Lethal Trifecta for AI agents: an agent that combines access to your private data, exposure to untrusted content, and a way to externally communicate (“exfiltration”). That “untrusted content” can come in all sorts of ways, ask it to read a web page, and an attacker can easily put instructions on the website in 1pt white-on-white font to trick the gullible LLM to obtain that private data.
Not sure why he is re-iterating well known prompt injection vulnerability, passing it off as a general weakness of LLM's that applies to all LLM use when that's not the reality.
Even this post by Martin Fowler shows he's an aging dinosaur stuck in denial.
> I’ve often heard, with decent reason, an LLM compared to a junior colleague. But I find LLMs are quite happy to say “all tests green”, yet when I run them, there are failures. If that was a junior engineer’s behavior, how long would it be before H.R. was involved?
I don't know what "LLM's" he's using but I just simply don't get hallucinations like that with cursor or claude code.
He ends with this: > LLMs create a huge increase in the attack surface of software systems. Simon Willison described the The Lethal Trifecta for AI agents: an agent that combines access to your private data, exposure to untrusted content, and a way to externally communicate (“exfiltration”). That “untrusted content” can come in all sorts of ways, ask it to read a web page, and an attacker can easily put instructions on the website in 1pt white-on-white font to trick the gullible LLM to obtain that private data.
Not sure why he is re-iterating well known prompt injection vulnerability, passing it off as a general weakness of LLM's that applies to all LLM use when that's not the reality.