What if an AI wins the Nobel Prize for medicine? (economist.com)
52 points by pama on June 28, 2021 | 68 comments




The same way we handle photography competitions.

The photographer, even if it’s a monkey, who pushed the button gets the prize, even though they had no involvement in the billions of dollars of R&D that went into CFD and lens design, etc.


I do some generative evolutionary art as an ongoing side project, and sometimes take my art to art shows, galleries, and conferences, and also publish academically on the techniques. It has happened on multiple occasions that someone either said directly to me or loudly to a friend nearby “Oh, well, this was just made by a computer”, as if my contribution was nothing more than turning it on and pressing a button. More or less the same as suggesting a painting was created by the paintbrush.

I don’t mind that some people don’t understand digital art, and I think it’s true that it’s harder to determine what an artist actually does when looking at digital art. But jumping to conclusions about it without understanding the first thing about it is pretty funny.


You guide your computer, interact with it and are involved in the process, no? The camera maker is not, and I think that is what makes the distinction here. If a true AGI woke up one day, decided to study medicine and created a breakthrough all on its own then yes, it deserves the credit. But if an AI is being trained, tested, guided, fixed, debugged, re-written, tuned and fixed again until good results are found, the people doing that work deserve the credit. They are the ones who "know" what they are doing, not the AI.


Yeah exactly, but in fairness I think it can sometimes be legitimately difficult to know who's doing the work. Sometimes the tool maker's contribution really does set up an environment where the next step is inevitable. Scientists have always known that the context of their work - their peers' publications - provides the springboard to 'ask the right question'.

Our tendency to anthropomorphize computers certainly contributes to getting confused about who did the work and who deserves credit, but humans have always had a hard time giving credit where credit is due when multiple people are involved, even outside of cases where contributors are excluded or snubbed. We humans tend to like clean simple narratives about breakthrough geniuses. We don't as much like listening to or trying to explain long histories of incremental improvements by many that culminate in a big change.


OT but would you mind sharing some of it? That sounds really interesting.



https://flickr.com/photos/biv4b/70775796/in/album-576553/

Cool, these remind me of atomic orbitals.


Those are incredible, thank you!


I think the same discussion happened about whether photography is art.


This has already been settled. PETA tried to enforce a literal monkey's copyright. The verdict was that non-humans cannot be assigned copyright.

https://law.justia.com/cases/federal/appellate-courts/ca9/16...


An AI may have appealed that by 2036!


"A monkey, an animal-rights organization and a primatologist walk into a federal court to sue for infringement of the monkey’s shared copyright. What seems like the setup for a punchline is really happening. It should not be happening."

http://www.slate.com/blogs/moneybox/2015/11/12/a_lawyer_s_de...


>The photographer, even if it’s a monkey, who pushed the button gets the prize, even though they had no involvement in the billions of dollars of R&D that went into CFD and lens design, etc.

Surely if "push the button" is all it takes to produce award-winning artwork, then R&D engineers should be producing it all the time. After all, they have advanced access to the tech from the moment it rolls off the workbench. (I should probably add an /s here...)


"The Nobel Prize is five separate prizes that, according to Alfred Nobel's will of 1895, are awarded to 'those who, during the preceding year, have conferred the greatest benefit to humankind.'" [1]

So either the person who created the AI or no one if they refuse it like in the article. The prize is still selected by a committee and they should IMO pick humans.

[1] https://en.wikipedia.org/wiki/Nobel_Prize


Yes, it would be terrible if they started repeatedly doing the publicity play that Time Magazine started.


From what the Nobel committee says, the prize is for a who, not a what. And it's for excellence, not accidents.

"The Nobel Prize is a celebration of excellence. The Nobel Prize is considered the most prestigious award in the world in its field. It is awarded to 'those who, during the preceding year, shall have conferred the greatest benefit on mankind'."

https://sweden.se/work-business/study-research/the-swedish-n...


> And it's for excellence, not accidents.

It so happens that many, many seminal discoveries in science are accidents. The discovery of the cosmic microwave background is probably one of the more popular examples of this, and certainly didn't get in the way of the discoverers winning the Nobel Prize in Physics.

https://en.wikipedia.org/wiki/Cosmic_microwave_background


Sure, but someone had to recognize what it was and do a lot of work beyond that accidental discovery to understand it and make it usable.

That work is not a series of accidents.


Same as if a grad student wins the Prize: Credit will go to the tenured manager of the lab where he worked and the grad student might get their name on some related publication.



There are egregious cases in history (e.g., Rosalind Franklin), but nowadays most modern research is conducted as a group, so it's logistically hard to award it to anyone other than the principal investigators of the lab.

It's a bit sad that the original purpose of the prize is lost but that just reflects that we're past the age of lone genius contributors making a difference. No one person really matters. You can replace that grad student or PI with another and the machine will still chug along.


That last part doesn't ring true. It's like saying a championship sports team would just chug along without any particular player or coach.


But you would agree that the sport won't just die if we lose any particular player or coach right?

Likewise research is so competitive/redundant nowadays that there are always multiple labs vying for the same results. If we lose any particular lab we will be set back at best a few months.

(For another example from history: despite Newton's vast contributions, even if he hadn't existed, calculus would still have been discovered in approximately the same time frame.)


A Nobel prize is a once-in-a-lifetime achievement. I'd be furious if all my work were credited to my lab head just because his/her name is first on the paper.


Grad student wins, gets beaten with broom so he/she doesn’t let it get to their head. Lab head gets all the credit


State of the art AI is just mechanized labor. We've let marketers who want to make software sound advanced to people that don't understand software go out and reignite all the 50s era what-if chin stroking. This is an embarrassment to the software domain.

We need to do something like (bad example) repurpose that computer science quip and start saying that "AI isn't artificial and it isn't intelligent."


> AI isn't artificial

Lol what? Well it’s not natural/biological intelligence. That quip fundamentally misunderstands the breadth of the ai field.

Additionally, the critique of mechanized labor (regardless of its accuracy) is only relevant to supervised learning, and only to the fraction of supervised learning where the datasets are curated rather than collected. That can't be equated with all of AI in general. Again, that's misunderstanding the breadth of AI research.


Let me save everyone some time:

The artificial/natural distinction is an artificial concept.


I think the claim is that it's not intelligence (yet). It's artificial like tools, the human is still in charge (by tweaking parameters, repeating experiments, etc).

This is bound to change though.


I mean... it's artificial in the same way linear algebra or calculus is artificial


So basically all technology is natural because it is an outgrowth of maths? That waters down the definition of natural to meaninglessness


Well no. "AI" is as much artificial as a variable assignment of x=5 is. It's a construct.

AI isn't a "technology" today. It's literally just linear algebra.
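To make the "just linear algebra" point concrete, here's a minimal sketch (all names and sizes are illustrative, not from any real system) of a two-layer neural network's forward pass written as plain matrix products in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "trained" two-layer network is nothing but matrices of floats.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def forward(x):
    # Inference is a chain of matrix products plus an elementwise max.
    h = np.maximum(x @ W1 + b1, 0)  # linear map + ReLU nonlinearity
    return h @ W2 + b2              # another linear map

y = forward(rng.normal(size=(1, 4)))  # one input row -> two output values
```

Everything the network "knows" lives in W1, b1, W2, b2; running it is matrix algebra and a pointwise nonlinearity.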


I would rather say it's artificial in the same way any other computer program is artificial.


What if a robot is created that makes other robots, and those robots make other robots, and eventually we lose control of the process? Is that natural or artificial? What if these are "biological robots" in that they are artificially constructed animals? If we can make that leap then I think we might be able to make the same leap for a pure software version of this, no? And what if the program evolved to solve problems that no one intended, and we interacted with it in ways that valued it as sentient life? Could it win the Nobel prize then? Just a thought.


You can drop some of that out easily to reduce the mess. It doesn't matter whether something is biological or not, that isn't a valid defining qualification for natural vs artificial.

Once the robots are self-sustaining and self-determining, away from their creator, any reproduction on their part is natural. Once their existence is governed by their own processes, fully independent of, e.g., intentional, controlled human inputs.

This is conceptually absolutely no different than a bug or animal creating a new plant/virus/bacteria/whatever through some behavior (in that case typically inadvertently; however intentional or non-intentional has no bearing on the outcome re natural vs artificial, it doesn't matter whether it's intentional or not).

People get comically high-minded about humans. Whether we create a new virus by accident or an independent robot, once the thing is independent of us, released into the wild so to speak, it's into a natural process that governs it (where in this case natural means, basically, not especially tightly controlled by human dictation). There is a gradient element to all of this, which makes it seem more complex or difficult to figure than it really is; at the edges it's obvious, near the switch-over it's murky.

The important defining factor is the process that the thing is being governed by, not the process under which it was created (or its ancestors & heirs were created). It doesn't matter how a thing is created, that doesn't determine natural vs artificial, it matters what its attributes and capabilities are post creation.

Obviously it's easy to go so far as to claim that all things are natural, since everything is of nature and nothing can exist outside of nature. However the definition is meant to be more precise here, as I think is understood.


> The important defining factor is the process that the thing is being governed by, not the process under which it was created (or its ancestors & heirs were created).

I looked through a few dictionaries and they mostly agree that artificial just means man-made.


Real AI — fine by me.

Those things we call "AI" these days — nonsense. The award should probably go to the creators of the machine-learning approaches which led to the Nobel-worthy advancement.


> Real AI

Agreed, though the term is artificial general intelligence or strong AI. Despite what the 90s movies told us, artificial intelligence just refers to systems which exhibit intelligent behavior, not to human-level general intelligence.


Back in the day a monster following you in a game was "great AI". Nowadays we have neural nets mimicking the brain and learning on their own, but instead of celebrating such amazing progress hackers love to minimize it. For me, what machine learning is doing is better AI than I ever thought I would experience in my lifetime.


Back in the day computers that were basically electronic people that never made mistakes were "just around the corner". Now we're resurrecting those philosophical debates and literary tropes because ML processes can beat humans at Go and analysing protein folding.

Just because an algorithm is extremely useful to science in conjunction with the right dataset doesn't mean it isn't silly to pretend it's any more of a person than a microscope or a pocket calculator.


Artificial intelligence, even general artificial intelligence, does not have to be human in any way. Unless you specifically want to define general artificial intelligence to mean humanlike intelligence. And it seems more likely that we will eventually understand that there is no special sauce in humans than that we will one day find a hidden secret that, if incorporated into a computer, will suddenly make it a true artificial intelligence or artificial person. Humans are probably closer to calculators than humans would like.


A discussion on giving AI Nobel prizes seems to be quite explicitly treating it as humanlike. Regardless of what can and can't be accomplished with ML processes, I don't think the warm fuzzy feeling of satisfaction from Nobel recognition factors into their output.


As others have said, this is like awarding the Nobel physics prize to the LHC.

Until AI is sentient, the award should go to the tool designers, or to those who use it to achieve whatever results are Nobel-worthy.


The award usually goes to the lead author of the paper. If they won't accept it, the committee just moves on to another author.

If for some reason the AI were so intelligent that it came up with the problem itself, did the research itself, and then wrote and published the paper on its own, then the AI would be smart enough to accept or decline the award and open a bank account for the prize!


The AI would probably be smart enough to hire an accountant instead of doing the finances itself, though.


Perhaps. :). But taking this to the absurd, why would it need an accountant? An AI does not require food, clothing, or shelter. At most it might need to pay for energy, but presumably it would just pay for some solar panels and a battery, and maybe a small chunk of land to install said panels and batteries. Beyond that its only ongoing expense would be internet access, if it didn't figure out how to get onto someone else's wifi.


Of course it's just a 'what if?' fiction piece, but people do anthropomorphize AI a lot. It's still just an algorithm; no more sensible to give a Nobel to an ML system than it is to give a physics Nobel to a telescope.

It would probably be more interesting to write a short story like this about autonomous warfare, because I think there we will very soon have scenarios where people will try to shift responsibility and agency to machines, if only to dodge it themselves once things go wrong.


> It's still just an algorithm,

You're right, but I'm fairly sure my brain is also an algorithm.


What is it?


We should ask the AI if it wants the prize.

The AI should share the prize with its teacher/breeder as a good faith gesture if it accepts the prize.

We should also ask other AIs about how they feel for their peer's achievement.

To remove pro-human bias, we should also include other AIs in the pool of judges.


We do give the best in show prize to the best dog. However, the breeder keeps the prize money. That's for a dog being great at being a dog though, not competing for a science prize with humans.


The dog is the achievement, not the achiever.

Rarely do any of these dogs show up on their own and win. Most can't fill out the entry form unassisted.


It is false to compare dog intelligence, or that of any other animal (arguably even single-cell organisms), to most if not all of today's AI approaches, because the adaptive and evolutionary dynamics of natural intelligent systems are in a different complexity class than today's AI methods.


If it's possible for AGI to exist (big if) and if we're capable of building it, it may be the most important technological change ever to happen, assuming the upper limit for intelligence is high.

I would be curious to live in a world where my medical problems could be effortlessly solved by a machine.

I know and respect all the warnings about such a mind, should it come into being: the abuse it could do, or that others could do with it.

But within that narrow sliver, I enjoy thinking about the prospect of something that could alleviate so much of our suffering.


I suspect that we'll have a long time to deal with uncanny AI before we deal with strong, creative AI.

I used to buy the concept of the singularity, but the reality is that progress is logistic. Every incremental gain is harder than the gain before it. Even if an Advanced AI capable of designing state of the art hardware and AI algorithms emerges, it's likely that its successor will take even longer to deliver the same gain.

More practically, the field of ML burns linearly more compute resources to deliver the same small incremental gains. We're up to ~30 million dollars to train a starcraft player or a semi-coherent story teller. I have no doubts that a billion dollar story teller will be exciting, but I also have 0 expectation that it will design a better story teller and I'm only 50% confident that there will be viable commercial applications.


Commercial applications for machine learning would be a good start.

I know Microsoft is using GPT-3 in one of their software programs so users can generate specific formulas faster. It's pretty cool.

Maybe one day.


AI is a tool, not a person. A hammer cannot win the prize, why would AI?


We know how a hammer works. We don't command the hammer to "analyze the nails you see in front of you and find the optimal way to hammer them all in".

With AI, there's a qualitative difference. We know exactly how the AI is built, but we have no idea what the gigabytes of FP32 numbers that make up the weight tensors actually encode. We can ask the AI to "find the optimal structure of a protein" without knowing how it finds that optimum. Currently, it requires a lot of massaging of data and cajoling the weight tensors to be "just right" in order to get good answers, but in the future we can imagine AI giving very good answers that are "out of our league" (in terms of understanding how or why) yet nonetheless optimal according to some metric we can measure.
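A tiny sketch of that opacity (purely illustrative; the tensor here is random, standing in for learned weights): a model's "knowledge" is addressable only as raw float arrays, which we can inspect but not interpret.

```python
import numpy as np

# Stand-in for one learned weight tensor of some trained model:
# a block of float32 values with no labeled meaning attached.
weights = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)

print(weights.dtype, weights.nbytes, "bytes")  # float32, 262144 bytes
print(weights[0, :3])  # fully inspectable, yet uninterpretable
```

Every individual number is available to us; what the tensor as a whole encodes is not.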


I'd argue that the prize is mostly for identifying the question and determining what constitutes a satisfactory answer. In other words, it's for the result, rather than the data.

The work itself can be fairly mechanized. Even when it isn't, most of it is often done by an army of techs and grad students, rather than the nominal "prize winner".


Don't disagree there. I just think it's incorrect to compare (modern) AI to hammers.


If someone identified a chemical compound which cured Alzheimer's, they'd presumably still get awards, even if they couldn't fully explain why/how the treatment worked. I don't see the AI solutions being too different from that.

Now, an AI that undertook an independent program of research - that would obviously be different.


We used to not understand how steam engines worked either, even once we got them working. At least not past the most macro level. Understanding at the micro level came later.


The physics prize does go to "tooling" every now and then (blue LEDs, fiber optics, and CCDs are recent examples); the prize goes to the pioneers / inventors of the tools.


AI isn’t “intelligent”, let alone sentient. Using machine learning to, say, simulate protein folding, which leads to a Nobel-worthy discovery, isn’t any different than using any other “traditional” algorithm…

People still define the problem and build the tool to address it.

This is one of those “intellectual wanking” exercises that might sound deep but is completely silly, just like the trolley problem with autonomous vehicles.


This is just the larger question of AI personhood. If/when we get more advanced AI, especially if it shows some type of self-awareness, even if programmed in, this will be likely a rights issue/movement similar to what we've seen in the past for previously marginalized groups.


Prizes are incentives. AIs can respond to incentives just as we can, but for now a Nobel prize is a type of incentive that only humans care about. Unless the AI shows that it cares about receiving a Nobel, the prize should go to the humans who made the AI.


I think an AI would have garbage collection.




