AI will be a forcing function that pushes people to go meet in real life to do stuff together.
Even if everything online is fake, events are not. So if people say they’re going to show up somewhere, there must eventually be a moment of truth. And then you can form high trust private group chats to keep talking together.
It may be hard for the current generation of chronically online people to adjust to that new reality, but the next generation of kids growing up can get used to it now. Eventually socializing in person will be natural again, and the internet will be left to bots and weirdos LARPing as something they're not.
Maybe, but the small groups that form out there in the real world will each be much smaller than the large group that stays and gets jerked around by the bots.
The large group will have to endure the manipulations we've come to know and hate from the internet, but they'll also be better coordinated than the small ones. They'll vote together, buy the same sorts of things, and have an outsized influence on the global conversation. They'll define the de facto majority opinion, whether or not they actually are a majority and whether or not it's authentically their opinion.
I don't think that's a good outcome. We need ways to get on the same page en masse, if only to counteract the harm done by whichever highest bidder is currently using an AI horde to control the other group. Besides, we should save them from this abuse for their own sake, if not for ours.
The internet is worth fighting for. If we abandon it entirely, we'll be forever at a disadvantage against those who would use it to manipulate.
Why are we worried about vulnerabilities in code when AI-powered social engineering will make it fast, easy, and even fun to find vulnerabilities through human interaction, faster and more deeply than ever?
That will not work as cleanly as you described once a lot of code has been committed to the codebase. You cannot just blow away an entire working codebase and start over because an LLM is struggling to make a feature work with the existing architecture.
This has happened on every single greenfield project I've started with AI, no matter how rigorous a process I had defined.
And it's not just easier because it's cheap; it's easier because you're not emotionally attached to the code. Just let it produce slop, log what worked and what didn't, nuke the project, and start over.
It's first party, so you get a look under the covers at how things were supposed to work. But the later server emulators are undoubtedly better by any realistic measure: 1990s C code is not something you'd want to expose to the internet, and design decisions that made sense on era hardware and POSIX APIs stopped being applicable once multicore and fast(er) storage became pedestrian.
If you want an end to end automation solution, you have to automate everything, not just high frequency tasks. It’s not acceptable to just say “oh you can automatically deploy a new site but first you have to register an account and buy a little domain”. The user command is “deploy site, right NOW”.
No, it is a realistic approach to a problem that has been faced in many places elsewhere in the world, where it has been solved through engineering. It is far more apt to call climate-doom predictions statements of religious faith, given the history of engineering solutions to climate-related problems and the close resemblance of climate-doom preachers to those deriving their prophecies from scripture.
Here are a few books on the subject that might be of interest to those who want to widen their view on the ever-changing climate. All of them have in common that they deny neither that the climate is changing nor that human activities influence how it changes. Where they differ from the doom narrative is that they approach climate change the way humans have dealt with other environmental problems, working to lessen or negate its impact, instead of preaching some grand narrative on how society should be run to avoid catastrophe.
Apocalypse Never: Why Environmental Alarmism Hurts Us All by Michael Shellenberger
You may be confusing this book with The 48 Laws of Power, which is absolutely a book no one on Hacker News should read, because we have enough people in the tech industry scheming to get the upper hand on society as it is.