Ask HN: If you fear AI now, were you likely afraid of the Y2K bug & vice versa?
4 points by amichail on May 1, 2023 | 8 comments
Or are these two things quite separate in people's minds?



Y2K was a real (potential) problem which didn't have consequences because a bunch of people did a bunch of work to avoid it.

To give a brief description: dates were commonly stored with two digits for the year to save disk space, because the "19" prefix was assumed. As 2000 approached, people started to realize the mistake. A similar issue will likely (and "suddenly") be big news around the mid-2030s: https://en.wikipedia.org/wiki/Year_2038_problem
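Both failure modes are easy to sketch. Here is a minimal Python illustration (the function name is mine, not from any real system):

```python
from datetime import datetime, timezone

# Y2K-style storage bug: only two digits of the year were stored, with the
# "19" century prefix assumed when the date was read back.
def expand_two_digit_year(yy: int) -> int:
    """Expand a two-digit year the way pre-Y2K code often did."""
    return 1900 + yy  # 99 -> 1999, but 00 -> 1900 instead of 2000

# Year-2038 analogue: a signed 32-bit time_t runs out 2**31 - 1 seconds
# after the Unix epoch (1970-01-01 UTC) and then wraps negative.
INT32_MAX = 2**31 - 1
rollover = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)

print(expand_two_digit_year(99))  # 1999
print(expand_two_digit_year(0))   # 1900, the Y2K bug
print(rollover.isoformat())       # 2038-01-19T03:14:07+00:00
```

The 2038 rollover is why systems have been migrating to 64-bit time_t.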

I imagine most people with knowledge of these subjects would consider them to be unrelated.


Not sure how these are even related.

Y2K was a specific technical problem with an obvious solution. The only "issue" was that the whole thing got massively overblown by the media and a tiny minority of preppers, who in turn overreacted, thanks to the media attention they got for it. ("Florida Man spends $2,000 on canned vegetables, details at 11.")

I don't think many people "fear" AI; they fear what unethical and uncaring people will do with it in the pursuit of money. Personally, my biggest fear is that it will eventually become hard or impossible to know whether the thing I'm reading on the internet was written by a human or a computer, or whether the "person" I'm speaking to on the phone is a bot. I am already frustrated by phone trees that can only be navigated with speech, because they literally can NEVER help me with what I want. My default mode of operation is shouting "I want to speak to a human" into the phone, over and over, until someone's phone rings. Sometimes it even works.

Basically, I imagine whole classes of industries (particularly those that sell financial products, like insurance) are already looking to replace literally all of their call center staff with AI.

AI will be good for many things. But lots of companies and entrepreneurs will try to use it as a substitute for what should be human-to-human communication, and that will make everyone's life anywhere from a little to a lot worse.


Saying "operator" often works.


Y2K was, at its core, an infrastructure maintenance problem: a one-time investment to "reinforce" code so it would be resilient to a disaster we knew was coming. For the Y2K bug, we had a pretty good grasp of what we knew, what we didn't know, and what we didn't know we didn't know.

AI is a black box to which we can potentially defer extremely significant, society-altering decisions. The number of people who can explain how it works is relatively low, and even the leaders in the field agree that there is a great deal we don't know we don't know about it, its properties, and its capabilities.

They are extremely different problems.

Fixing the Y2K bug was like adding shock absorbers to skyscrapers. If AI performs as promised, it can alter the very fabric of our society and not just suggest, but require, a redefinition of the relationship between the labor class and the ownership class.

AI makes dystopian cyberpunk a very real potential future. Y2K wasn't much different from an earthquake or a hurricane, particularly one we knew was coming.


I had to patch a ton of systems for Y2K. The only fear involved was that I would be working long hours and responding to whatever fallout we missed. I am happy to mostly get to skip the Y2K38 bugs and the handful of date-related bugs after that. I might just have to set an option [1] on my XFS volumes for my personal nodes.

LLMs and AGI are a big bag of unknowns. I have no idea what glorious and malfeasant things companies, governments, and people will do with them. Perhaps they will continue down the path of manipulation already established on social media platforms, tuning their algorithms for rage clicks and other forms of manipulation and misinformation. I have no emotions about it either way, as they are just tools. I predict that lawyers will see the most action from the AI fallout, especially given the lack of audit trails and attestation of ingested data sources. Their emotions should be excitement and visions of $$$$$$.

[1] - https://wiki.archlinux.org/title/XFS#Upgrading


I don't see why these would be correlated, personally.

FWIW, Y2K was seen as a bug that needed fixing. While there are many who see AI as problematic, I don't think it's seen as a "bug" in the same way at all.


The horrors of the Y2K bug are so easily forgotten: some web sites displayed "19100" as a date!
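For the curious, that "19100" typically came from code that treated the year as "years since 1900" (as in C's struct tm or JavaScript's old getYear()) and pasted it after a literal "19". A minimal sketch of the pattern:

```python
# Classic Y2K display bug: the year field counts years since 1900,
# and the page naively concatenated it after the string "19".
tm_year = 100                  # years since 1900, i.e. the year 2000
buggy = "19" + str(tm_year)    # string concatenation, not arithmetic
fixed = str(1900 + tm_year)    # the correct expansion

print(buggy)  # 19100
print(fixed)  # 2000
```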

Truly, the official paranoia over "are we ready for Y2K" was far more bothersome than the actual bugs that needed fixing. For all the hype about "the world will end," it was easy to see that even in the worst cases we could roll the damned clocks back and continue to get things done.

No one wanted to talk about that because the Apocalypse is so much more fun.

Right now, just the same, people really want to talk about "AI is going to kill us all" but not "AI isn't 'intelligent' even with recent advances." The question of just what "intelligent" means is one we want to avoid, lest we be forced to include some higher mammals and exclude some less able humans.

I agree people are seeing the same thing: an opportunity to panic in a fun, safe way over possibilities that common sense strongly suggests will not have severe consequences.


I was a little worried about Y2K. I was still in school and didn't know how much was being done to prepare for it.

I'm not that afraid of AI. I think before AGI becomes omnipotent and omnipresent (assuming it ever does), we're likely to have burned through most of the resources on the planet that could sustain it, and/or we'll probably have triggered another World War over diminishing resources.

For sure there will be some bad actors using AI, and to some extent it will reshape society, but that's true of any technology, so I don't see it as an inherent threat.



