Maybe being shocked means that the person talking about the subject is misrepresenting it, because they themselves don't understand the arguments and are inadvertently projecting.

For example, Ray Kurzweil would disagree about the dangers of AI (he believes in the 'natural exponential arc' of technological progress more than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of he's painted with the same brush as Elon saying "AI AM THE DEMONS".

If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...

(Note: I haven't read it, although I am familiar with the arguments, and some acquaintances rate it highly.)

But then that's precisely the point: Bostrom is a philosopher. He's not an engineer, who builds things for a living, or a researcher, whose breadth at least is somewhat constrained by the necessity to have some kind of consistent relationship to reality. Bostrom's job is basically to sit and be imaginative all day; to a good first approximation he is a well-compensated and respected niche science fiction author with a somewhat unconventional approach to world-building.

Now, don't get me wrong -- I like sf as well as the next nerd who grew up on that and microcomputers. But it shouldn't be mistaken for a roadmap of the future.

I'm not sure it should be mistaken for philosophy either.

Bostrom doesn't understand the research, he doesn't understand the current state or likely future of the technology, and he doesn't really seem to understand computers.

What's left is medieval magical thinking - if we keep doing these spells, we might summon a bad demon.

As a realistic risk assessment, it's comically irrelevant. There are certainly plenty of risks around technology, and even around AI. But all Bostrom has done is suggest We Should Be Very Worried because It Might Go Horribly Wrong.

Also, paperclips.

This isn't very interesting as a thoughtful assessment of the future of AI - although I suppose if you're peddling a medieval world view, you may as well include a few visions of the apocalypse.

I think it's fascinating on a meta-level as an example of the kinds of stories people tell themselves about technology. Arguably - and unintentionally - it says a lot more about how we feel about technology today than about what's going to happen fifty or a hundred years from now.

The giveaway is the framing. In Bostrom's world you have a mad machine blindly consuming everything and everyone for trivial ends.

That certainly sounds like something familiar - but it's not AI.

> Bostrom is a philosopher

That is my main concern about people writing about the future in general. You start with a definition of a "Super Intelligent Agent" and draw conclusions based on that definition. No consideration is (or can be) given to what limitations AI will have in reality. All they consider is that it must be effectively omnipotent, omnipresent and omniscient, or it wouldn't be a superintelligence, and thus wouldn't fall into the topic of discussion.

The main limitation right now (and imo for the foreseeable future) is that you need a ton of training examples generated by some preexisting intelligence.
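To make that concrete, here's a minimal sketch (my own toy example, not anything from Bostrom or the thread): a supervised learner is bounded by the labels a preexisting intelligence supplied; strip the labels away and there is nothing for it to generalize from.

```python
# Toy 1-nearest-neighbor classifier: the "learning" here is entirely
# parasitic on labels that a human (a preexisting intelligence) provided.

def nearest_neighbor(train, query):
    """Predict the label of the training point closest to `query`."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

# The labels ("small"/"large") come from us; the learner adds no knowledge
# of its own -- it can only interpolate between human-supplied examples.
labeled = [(1.0, "small"), (2.0, "small"), (9.0, "large"), (10.0, "large")]
print(nearest_neighbor(labeled, 1.5))  # → small
print(nearest_neighbor(labeled, 8.0))  # → large
```

The point of the sketch is just that the model's competence is a direct function of the labeled data; there is no mechanism here for recursive self-improvement.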
