
I don't see any "fear-mongering" in this announcement?



The creation of what is essentially an ethics committee for a technology that doesn't even exist yet? With people like Elon Musk on board, who has publicly said 'AI is our biggest existential threat'?

Additionally, the second paragraph:

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

This implies they think AI will be used for hostile means. Such as wiping out the human race, maybe? It is just uninformed people making uninformed decisions and then informing other uninformed people of said decisions as if they were informed.


Sutskever, Schulman, Karpathy and Kingma are experts in machine learning.

And yes, AI will definitely be used for all sorts of purposes, including hostile ones: financial manipulation, spying, intelligent military devices, cracking infrastructure security, etc. Just like anything else, really.

These are realistic concerns; we shouldn't fall for the Skynet red herring. We can have problems with ethical AI use even if it's not a self-aware superintelligence.


I hope I'm wrong, but given the composition of the donors, I'd be surprised if they really put much scrutiny on near-term corporate/government misuse of AI, apart perhaps from military robots. There are definitely interesting ethics questions already arising today around how large tech companies, law enforcement, etc. are starting to use AI, whether it's Palantir, the FBI, Google, or Facebook, so no argument that it's a timely subject, at least in some of its forms. It'll be interesting to see if they get into that. I'd guess they probably want to avoid the parts that overlap too much with data-privacy concerns, partly because a number of their sponsors are not exactly interested in data privacy, and partly because the ethical debate then becomes more complex (it's not purely an "ethics of AI" debate, but has multiple axes).


I share your concerns. It also worries me that the brightest ML researchers choose to work at companies like Facebook, Google, and Microsoft instead of public universities. One reason is probably that academia and public grants are too sluggish to accommodate this fast-paced field. Another is that these companies have loads of data that researchers can use to test their ideas.

The downside is that much of the research is probably kept secret for business advantage. The public releases are more of a PR and hiring strategy than anything else, in my opinion. By sending papers to conferences, Google's employees get to know researchers and can attract them to Google.

Others say there's nothing to worry about: Google and Facebook are just today's equivalent of Bell Labs, which made numerous contributions to computer technology without causing much harm.


I doubt they are strictly targeting "strong AI", and a lot of the things we use and call AI right now also benefit from open work and discussion. Just because something is "merely" machine learning doesn't mean it can't be used for questionable or bad purposes.

EDIT: I have to agree with _delirium's skepticism about whether they'll do much in that regard, though.



