> "If you’re going to really exploit the capabilities of these new tools, you need to be operating at the top of your game. You’re not just responsible for writing the code—you’re researching approaches, deciding on high-level architecture, writing specifications, defining success criteria, designing agentic loops, planning QA, managing a growing army of weird digital interns who will absolutely cheat if you give them a chance, and spending so much time on code review."
I know this paragraph is supposed to be encouraging, but it makes me wonder again what the actual goal of this entire AI enterprise is supposed to be.
"Less work" or "easier work" would make superficial sense, but in a society where people are in constant competition and derive both their self worth and their basis for living from work, both are effectively anti-goals. And so we get articles like this trying to offer comfort by saying that work will still be draining and challenging in the future.
So if not less work, then "more productivity", i.e. we can produce more software in a shorter amount of time (but with the same mental load). But as others have said, this was never the bottleneck.