Hacker News

We live in a very confusing, transitory time. Right now there's a large chunk of people convinced that we are a few years away from extinction if we do not ration out GPUs and bomb illegal compute clusters, while at the same time there's a large chunk that believes a car can never safely navigate a single-lane one-way tunnel without a driver (i.e., the Boring Company). Absolutely wild.



"Convinced that we are a few years from extinction," is an exaggeration of the pessimistic/alarmed end of the spectrum of opinion.

All the forecasts I've seen have the period of danger spread out over at least 20 or 30 years, and no one claims to understand the topic well enough to be able to tell whether a particular model is dangerous even if given complete freedom to examine the source code and interview the creators.

Our inability to predict is in large part the basis of the pessimism/alarm because it means that the leading labs won't know when to stop.


Exactly. And here is an example of that inability in the much clearer case:

"How many years after 1945 ... would it take for the Soviet Union to get the bomb? Framed this way, it was a much easier question than forecasting the advent of AGI: The Americans were concerned with only one adversary and the development of a single technology which they already knew all the details of. American predictions of Soviet proliferation offer a highly constrained case study for those who would undertake technological forecasting today."

https://asteriskmag.com/issues/03/how-long-until-armageddon


> "Convinced that we are a few years from extinction," is an exaggeration of the pessimistic/alarmed end of the spectrum of opinion.

It really isn't. There are people quitting their jobs, protesting AI companies, and inducing panic attacks in children over their irrational fears:

https://www.wired.com/story/pause-ai-existential-risk/


No, even the more-panicked wing is not betting the proverbial house on AI killing us within the next few years, although the possibility is there. The "don't expect to live much longer" sentiment is based on a probability horizon that extends over decades.


I've spoken to people in the article. They expect doom in <1 year.


Can you give a name so I don't have to read a long article?

It is easy to find people who believe that it is possible (with some low probability) that AI will end the human race in the next 12 months. (I'm one of them.) I haven't seen anyone, however, put most of their probability mass in the next 12 or 24 months, and I'd like to find out what their reasoning is.


Sure: the first name in the article I posted, Joep Meindertsma. They have an open Discord you can join, but fair warning that it's entirely panicky people lamenting their inevitable doom.


I just don't believe that unless you have specific quotes you can point to that indicate this.


> a car can never navigate a single-lane one-way tunnel safely without a driver (ie, The Boring Company).

I think the criticisms of the Boring Company are more "small-diameter tunnels are a bad way to build high-throughput underground infrastructure, and autonomous cars will not solve that."


"small diameter tunnels" + "autonomous cars" kind of sounds like a subway train.


Sure, but have you ever been on the London Tube and noticed it's notoriously small and uncomfortable in parts? Have you also noticed how windy some stations get because the small trains fill most of the tube and act like an air piston?

Those tunnels are 12 feet in diameter. Boring Co's tunnels are 12 feet in diameter.

A modern subway system needs considerably larger tunnels.


Just add tracks, weld the cars together, remove the driving seats, replace the fancy NNs by vintage fuzzy logic and you're there!


I've only seen grifters, hysterics, or the ignorant talking about existential AI risk. A lot of people just nod their heads and go along, but I don't think there are many who have really thought about it (and understand what modern AI is) who really believe in the risk.

It's more like political polarization: people dwell on something they disagree with until they treat it as if the world is going to end because of it, rather than seeing it in context. AI is like the political party they don't like winning, potentially undesirable from a certain point of view, but they frame it as "existential."

I think it's important to see it in this light instead of actually debating the points raised, which are absurd; debating them will only lend credibility where none is due.


> instead of trying to actually debate the points raised, which are absurd

https://en.m.wikipedia.org/wiki/Appeal_to_the_stone

“Appeal to the stone, also known as argumentum ad lapidem, is a logical fallacy that dismisses an argument as untrue or absurd. The dismissal is made by stating or reiterating that the argument is absurd, without providing further argumentation. This theory is closely tied to proof by assertion due to the lack of evidence behind the statement and its attempt to persuade without providing any evidence.

Appeal to the stone is a logical fallacy.”


> I've only seen grifters, hysterics, or the ignorant talking about existential AI risk.

Which category would you put Geoffrey Hinton in? https://www.utoronto.ca/news/risks-artificial-intelligence-m...


Hysterics. Remember when he said we should stop training radiologists? That's aged pretty poorly given there's actually a need for more:

https://www.rsna.org/news/2022/may/global-radiologist-shorta...

https://mindmatters.ai/2022/08/turns-out-computers-are-not-v...


So because he made one(+) bad prediction, all future predictions are inherently poor? That seems like a bit of a leap. Where is the line between "poor prediction" and "hysterics"?

"Deep learning will replace radiologists within 5 years" isn't as inherently crazy as it probably seems in hindsight. Sure, it didn't pan out, but I know a vet who works for an Ai company classifying scans for training and he says first hand the models have gotten a lot better. That said, he's still not worried about being replaced.


When the prediction is so bombastic as to prescribe a near future without doubt, and then it is completely wrong, then yes, that would be hysterics in my book.



