
News coverage never explicitly mentions this, but there are two camps at odds with each other:

In one camp, we have people at organizations like Microsoft/OpenAI, Google, Facebook, and maybe Baidu that have successfully developed and trained large-scale AI systems with hundreds of billions to trillions of trained parameters. People in this camp are not too worried about societal risks, but I wonder if it's because to them the rapid improvement in AI capabilities looks like a shiny rainbow with a big "pot of gold" (money, fame, glory, etc.) on the other side.

In the other camp, we have people at other organizations, including every academic institution, who have yet to develop and train a large-scale AI system with hundreds of billions to trillions of trained parameters. People in this camp are writing open letters about societal risks. These people sound sensible, but I wonder if they're worried because the rainbow is protected by a giant wall, and they see themselves as being outside that wall (e.g., they lack the budget).

--

EDITS: Replaced /pot of gold/ with /"pot of gold" (money, fame, glory, etc.)/, which better reflects what I mean.



> In one camp, we have people at organizations like Microsoft/OpenAI, Google, Facebook, and maybe Baidu that have successfully developed and trained large-scale AI systems with hundreds of billions to trillions of trained parameters. People in this camp are not too worried about societal risks, but I wonder if it's because to them the rapid improvement in AI capabilities look like a shiny rainbow with a big pot of gold on the other side.

Maybe I am too naive, but they may not be trying to do it only for "the money", but for the historical achievement (and ego). If GPT is as successful as we all think it will be, Sam Altman will be remembered alongside big names such as Da Vinci and Mozart. The GPT revolution will be 10 (or 100) times the computer or the internet. I am a bit against/scared of AI (check my post history), but having the opportunity to become the father of one of the most (if not the most) important revolutions in centuries... I am not sure I'd say no even if it has ethical issues. We are all humans, after all...

[] Whether it's worth being part of an encyclopedia if there aren't any humans left to read it is another discussion ;-)


100% agree. By "pot of gold" I mean money, fame, glory, etc. I edited my comment to reflect as much. Thanks!



I'd work on it, simply because it's (a) amazing tech, and (b) inevitable that if I don't, somebody else will.

I'm trying to build a custom model service for SMBs to create AI agents around their lbs etc... major learning curve, though.
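To make that concrete, here's a minimal sketch of the kind of call such an agent service might wrap, assuming an OpenAI-style chat completions client in Python. The model name, the answer_for_smb helper, and the business-context string are placeholders for illustration, not this service's actual stack:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def answer_for_smb(business_context: str, question: str) -> str:
        """Answer a customer question grounded in a small business's own data."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a support agent for this business:\n" + business_context},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(answer_for_smb("A bakery open 7am-3pm, closed Mondays.", "Are you open on Monday?"))

The real learning curve tends to be everything around a call like this (grounding it in each business's data, handling tenants, and keeping costs predictable), not the call itself.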


Isn't this just like the situation in 2014 where academics were arguing that 'gain-of-function' experiments were dangerous[0], but big government labs and pharma companies were too excited about the research and treatments they could produce?

[0] https://www.cidrap.umn.edu/avian-influenza-bird-flu/commenta...


That's only the $economic view--there are other camps. The one point worth distinguishing is that I'm not immediately concerned about AI becoming sentient and wiping out humanity; I'm more concerned about humans misusing it. Like nuclear research, it's about the applications, not the science.


>(e.g., they lack the budget)

That should be "i.e.,".



