
I'm not saying climate change is not a giant problem, I'm saying it's unlikely to eradicate our species.

I believe it is dangerously shortsighted to base AI threat estimates on current self-driving performance; the two fields of advancing AI cognitive abilities and improving self-driving are not sufficiently connected for that, IMO.

We're also putting a lot of focus on system designs that are useful to us, instead of directly building potentially threatening architectures (online learning, long-term memory, direct connection to and feedback from physical reality), yet those could already be within our technological grasp (maybe?).

What do you think about the three points I raised?



