
>> including further entrenchment of existing inequalities, manipulation and misinformation, and loss of control of autonomous AI systems

1. further entrenchment of existing inequalities ... we were doing this LONG before AI came along. This isn't a new problem; AI is not the reason some groups are under-represented in tech (and tech itself isn't solely to blame either).

2. manipulation and misinformation ... Go back in history: the progressive movement was all in on eugenics in the early 1900s. Alex Jones is a thing today. This was already a problem.

3. loss of control of autonomous AI systems ... there's the doomer I know and love! If you're an AGI doomer, go read I, Pencil. Look at the guy who tried to make a toaster from scratch (and bread too). Read up on how brittle the entire power system is. Things are falling apart because we're too cheap to put the people and resources into maintaining them. Unless AGI wants to die in the process, it's going to be a long time before a world exists where it doesn't need people.

Seriously, climate change, nuclear threats, a bad screw-up in bio research... any of those is way more likely to kill us all than we are to ever reach AGI.

AI today is about as useful as a bitcoin in a blackout. It's interesting, but it doesn't DO anything other than waste power and resources.

EDIT: it was so worrying that all the main safety folks quit OpenAI and no longer want to save us /s




I can't understand the "x is bad so y isn't a concern" line of argumentation. Or the "x was already a problem, so why are we wasting time worrying about x * z" thing. Can't more than one thing be bad at a time? Can't bad things be made worse?


> Can't more than one thing be bad at a time? Can't bad things be made worse?

The massive consumption of resources on AI research is a real material threat that AI safety people DO NOT TALK ABOUT. What if the real paperclip problem is the power and resources we waste trying to get to AI?

And that's because if it turns into a global or multinational LHC-like effort, all those PhDs don't get valley rich.

You're not wrong, but they aren't right either, because they aren't putting the issues that would actually impact them first.


> AI today is about as useful as a bitcoin in a blackout. It's interesting, but it doesn't DO anything other than waste power and resources.

AI today is useful as an enhancer of human productivity. It can sometimes be a real time-saver in the hands of a trained professional - at least in fields such as software development, where the correctness of an answer can be validated quickly and safely, and the harm caused when the AI gets it wrong is unlikely to be significant. [0] But in that use case it is supplementing humans, not replacing them: without a trained, experienced human in the loop, the AI can't do anything.

[0] It is possible the AI might get it wrong in a subtle way which no human notices and which then blows up badly in production - but humans already do that anyway, we already have tools to try to catch some of those problems (security scanners, memory leak detectors, etc.), and it is possible some of those tools might perform even better with AI assistance.
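
To make the "validated quickly and safely" point concrete, here is a minimal Python sketch of that human-in-the-loop workflow. ai_suggest_implementation() is a hypothetical stand-in for whatever assistant or API you actually use; the point is that nothing gets accepted until it passes tests the human wrote:

    def ai_suggest_implementation(prompt: str) -> str:
        # Hypothetical stand-in for a real assistant/API call.
        return "def add(a, b):\n    return a + b"

    def human_written_tests(fn) -> bool:
        # The human, not the AI, defines what "correct" means here.
        return fn(2, 3) == 5 and fn(-1, 1) == 0

    def accept_if_valid(prompt: str):
        source = ai_suggest_implementation(prompt)
        namespace = {}
        # NB: a real setup would sandbox untrusted generated code before running it.
        exec(source, namespace)
        candidate = namespace.get("add")
        if candidate is None or not human_written_tests(candidate):
            return None  # reject: back to the human
        return candidate  # accept only after the human-written tests pass

    if __name__ == "__main__":
        fn = accept_if_valid("write add(a, b)")
        print("accepted" if fn else "rejected")

The AI proposes, the human-owned tests dispose; if the tests are missing or wrong, the safety net is gone - which is exactly the subtle-failure case described in [0].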


Historically the much bigger risk is new technology ending up fully controlled by a single country.

Much of the history of the world's revolutions can be chalked up to that.

Bronze weapons, horsemanship, chariots, gunpowder, ship-building, castles, siege tech, engines, tanks, submarines, warplanes. Each of those catalyzed at least a new regional power and in many cases a new world power. Those transitions were generally quite deadly and unpleasant to live through.



