This. China is a much larger economy and by far the single largest car market in the world - selling more vehicles domestically than the EU and USA combined.
That being said, just like Germany, they're largely export-oriented, and if local demand ever weakens, they'll struggle all the same.
> In 2026, I’m not sure the software engineering industry will survive another decade.
Due to a text predictor?
I'm a daily user of the most recent Claude. While it's amazing at presenting other people's knowledge and reducing cognitive load by filling in the gaps, it's still just a machine that predicts text, and that's a limitation that won't be overcome in this generation of tools, which, counting research demonstrations, is already close to a decade old.
To me the main issue is that investors are not aware of these limitations and will keep pouring money into this way beyond everyone's breaking point. But really that's a failing of the world's economic system, which relies too much on their whims.
Even if it's "just" a text predictor, it already outperforms the average person in certain domains, particularly software engineering. With all the recent advances in agentic systems like Claude Code and OpenCLAW, these text predictors can iterate and debug faster than the average human. Looking ahead at the next decade, I totally agree with Sean's view here.
> I'm a daily user of the most recent Claude and while it's amazing at presenting other people's knowledge and reducing cognitive load.
'Presenting other people's knowledge' is enough to get the job done when that knowledge encompasses the entire internet.
These are strong claims and I would want to see equally strong evidence for them.
My experience is that it's really darn good at producing text, but it's not a logic engine - it's not designed to be one and even the most recent versions make mistakes which indicate it's not actually thinking.
Well it’s not just a “text predictor” is it? You can pretend that today we still only have ChatGPT 2 and that there is only pre-training on a large corpus of information, but that simply isn’t true?
It is, by definition, design, and architecture, a system that produces believable text.
Here's a task to give it which pulls the veil right off:
Ask it to add tests to a piece of code where code coverage is 100%, but it doesn't actually test functionality 100%. You'll start seeing nonsense sooner or later.
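A minimal sketch of what "100% coverage but untested functionality" means (hypothetical function and test, not from the thread):

```python
# Hypothetical example: the test below executes every line of clamp(),
# so a coverage tool reports 100% line coverage, yet part of the
# function's contract is never actually checked.

def clamp(value, low, high):
    """Clamp value into [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

def test_clamp():
    # These three calls together hit every line of clamp()...
    assert clamp(5, 0, 10) == 5
    assert clamp(-1, 0, 10) == 0
    assert clamp(99, 0, 10) == 10
    # ...but nothing tests what happens when low > high: clamp(5, 10, 0)
    # silently returns 10. Full coverage, untested behavior.

test_clamp()
```

Asking a model to "add tests" here only helps if it notices the swapped-bounds case rather than restating the assertions that already exist.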
A leading theory in neuroscience is that human brains are fundamentally prediction machines too, constantly predicting sensory input, other people’s behavior, the next word in a sentence. “it’s just prediction” isn’t the gotcha you think it is. Prediction and attention turn out to be a surprisingly powerful foundation for intelligence.
The “just a text predictor” framing was fair a couple years ago but hasn’t kept up. Current models can genuinely identify untested edge cases even when coverage is 100%. You're definitely using the latest and greatest models?
The architecture started as next-token prediction, sure, and yes, human judgment is still required, but that judgment is being captured and integrated too.
Every time millions of people use these models, their feedback feeds the next round of improvements.
Also, these models don’t need to replace your best engineers to be disruptive. They just need to outcompete the bottom of the bell curve. For a lot of junior-level work, we’re already getting close.
> You're definitely using the latest and greatest models?
Claude 4.6 opus high, specifically.
As for human brains: every self-respecting neural networks 101 course is prefaced with "don't draw analogies to the human brain." And for good reason: natural neural networks are fundamentally far more complex at every scale.
Also the brain indeed predicts, but also verifies and learns from the predictions. LLMs don't do that - not in real time at least.
I agree that the software industry should expect a major upheaval. Developers won’t be replaced by LLMs, but it’s extremely likely that software as a product will become less valuable. You can already vibe code solutions you would’ve otherwise had to pay for. As tools come out that take advantage of this, it’s going to just get easier to spin the app you want up instead of paying for someone else’s. Which is pretty cool if you aren’t already a software developer.
Yes, maybe in a decade all of the most popular LLMs will be dropped for lack of income. Only companies that have learned how to run their own models, with locally optimized software on very specific hardware, will still be able to use them for very specific tasks.
Accounting. In my region tax law can suddenly change outside of the usual annual update cycle and, like everywhere, is riddled with edge cases and unclear interpretations.
Most importantly there's often a period of general uncertainty and adoption, during which the new law is already in force, but LLMs will rely on whatever there was previously.
Most people find this job stressful and boring, but the same can be said about software engineering. Regular people pay money to have it dealt with.
Overall I think there will always be demand for handling the messiness of the real world, and humans have the upper hand here because they learn as they go, not via release cycles that cost a sizeable sum and take months.
If LLMs can handle software engineering well enough to no longer need engineers, accounting will be solved by the same model version.
Seriously, if that future manifests, all of these standard effort-based jobs would become redundant...
The issue with outdated information is way overstated: you'd just add the current rules to the context when evaluating and be done with it. We're already at 1 million tokens of context... That's enough for a lot of rules, and the number will likely go higher as time progresses.
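The "add the current rules to the context" approach can be sketched as plain prompt construction (the function name and rule text here are illustrative, not a real API):

```python
# Sketch: instead of relying on what the model memorized during training,
# prepend the rules that are in force today to every query. The rules file
# can be refreshed the day a law changes -- no retraining cycle needed.

CURRENT_RULES = """\
(Hypothetical 2025-07 amendment) Rule 14b: the reduced VAT rate no longer
applies to category-C services; transitional filings use the old rate.
"""

def build_prompt(question: str, rules: str) -> str:
    # Instruct the model to treat the in-context rules as authoritative,
    # overriding whatever it learned in pre-training.
    return (
        "Answer using ONLY the rules below; they override your training data.\n\n"
        f"RULES:\n{rules}\n"
        f"QUESTION:\n{question}"
    )

prompt = build_prompt("Does the reduced VAT rate apply to this invoice?",
                      CURRENT_RULES)
```

With million-token context windows, even a large rulebook fits; the open question from upthread is whether the model reasons correctly over rules it has never seen in training.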
It's a dialect (superset) of ECMAScript based on the never-released 4th edition draft, which was in development at the time C# was first released, so these similarities are no accident.
I refuse to have my browser fingerprinted as a "trusted device," in part because my bank is just bad at it.