Ask HN: Could an AGI ever win a Nobel Prize?
4 points by sampling 10 months ago | 10 comments
Let's assume AGI is achieved in our lifetime. If an AGI makes a groundbreaking contribution, does it deserve the same recognition as a human? Could an AGI ever be awarded a Nobel Prize?



Presumably someone made/sponsored it and would take responsibility. This idea that AI "does things" in a vacuum does not make sense to me and I don't understand why people assume it's the end goal.


>This idea that AI "does things" in a vacuum does not make sense to me and I don't understand why people assume it's the end goal.

It's the end goal for capitalism. Consider Jeffrey Katzenberg's recently stated expectation that AI should make it possible to eliminate 90% of artist jobs in animated films - of course he doesn't expect to make 90% less profit in the bargain. Every industry looking at AI is doing so primarily to eliminate jobs, which entails having AI do as much as possible without human intervention.


What you've described is not a vacuum, though. AI is not going to make animated films apropos of nothing - that idea is fundamentally flawed. Someone has to set it up, someone has to pay for its electricity, and eventually a human has to review the output. If such an AI were writing Nobel-worthy theses, my instinct says the person who curated/selected a given thesis is the actual party that "discovered" it.


Because AI grifters think training a language prediction model is the same thing as creating Data from Star Trek.


The bar is pretty low for the Peace Prize. Potentially thinking about doing something not-bad is good enough, so sure, why not.


Nobel laureates are restricted to “persons”. So an AGI would have to be granted personhood first.


Yes, at least in terms of the magnitude of what it would contribute. Whether humans are willing to give out the award is a different question.


It seems that many of the responses here are thinking short term. I've seen computing go from machines with 1K of RAM that stored programs on audio cassettes to cheap computers with gigabytes of RAM and terabyte SSDs.

We're just starting to see how far AI can go. As a parent, I've watched my child boot up from a helpless, needy bundle of potential into a late teen. I fully believe AGI is possible with today's technology if you gave it a body to live in, instead of a simulated world, and put it through the same process.

I fully believe we can build AGI. I believe the Nobel committee will change its criteria over time, but the selection process has always been political, and it may go either way for a generation or so.


That feels like wishful thinking. When I was growing up, I was told flying cars would be everywhere, since we could make planes and cars just fine! Logical next step, right?

Well... not exactly. The progress of technology has been inspiring, but outside the engineering side of things, not much has really changed since the 8086 and Smalltalk days. We're still using computers with roughly the same architecture, limitations, and scaling flaws. Nobody has reinvented the wheel (in my lifetime) to obsolete the old one.

I see a lot of flaws in this speculation, even given the benefit of "long term" doubt. For starters, this "created" AI is not a sovereign human; it is the property of whatever private interests built it. In that scenario, your AGI does not "win" the Nobel Peace Prize. The party that owns the AI would assume responsibility for the findings, for no reason other than that the AI is their property. Even if they disowned it, the only responsible way to hold it accountable is to tie the AI to its creator. If you think AI won't be required to be "street legal" in such a future, you're not thinking hard enough.

...and then there's the technical angle. We straight-up cannot engineer a human body. We can try to replicate biological processes in a pragmatic, mechanical fashion, but we cannot 'build' the entirety of a human body. The only way for you to create an AI the way you've described is to destroy a human consciousness and replace it with AI, which is just about the pinnacle of attainable human rights violations. And God forbid we make it that far, because we'd then have to out-engineer biology within the same power profile as a human mind. It's a suicide mission in every sense of the word; tangible human lives would be lost in the pursuit of computational nihilism.


As for engineering a body, it's the limitations that matter most, like foveated vision, where you can't see everything all at once, and you start out not even being able to direct your gaze or focus, or lift your head... I'm convinced all of those first steps are crucial, even if its "body" is tethered to a big pile of gear like power supplies and compute clusters.

As for computer architecture, I've got an idea for that: the bitgrid [1]... It might work (or not).

[1] https://esolangs.org/wiki/Bitgrid
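For anyone curious, the core idea (as I understand it from the wiki) is a grid of cells, each with four 1-bit inputs and four 1-bit outputs, one per neighbor, where each output is driven by a 16-entry lookup table indexed by the four input bits. Here's a toy sketch of a single cell; the names and the XOR example are mine, not from the actual spec:

    # Toy model of a single bitgrid-style cell (my own sketch, not the real thing).
    class Cell:
        def __init__(self, luts):
            # luts: four 16-entry tuples of 0/1, one per output direction (N, E, S, W)
            self.luts = luts

        def step(self, n, e, s, w):
            # Pack the four input bits into an index, then look up each output bit.
            idx = (n << 3) | (e << 2) | (s << 1) | w
            return tuple(lut[idx] for lut in self.luts)

    # Example: a cell whose every output is the XOR (parity) of its four inputs.
    xor_lut = tuple(bin(i).count("1") & 1 for i in range(16))
    cell = Cell((xor_lut,) * 4)
    print(cell.step(1, 0, 1, 1))  # -> (1, 1, 1, 1)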



