I agree with you that it is not like the other Nobels, but it seems to me that it is only the Peace Prize that really embodies Alfred Nobel's regret in creating a misused weapon.
The Nobel Peace Prize may be meaningless. That the most powerful man in the world has such a juvenile, psychologically rotten obsession with it that he grins ear to ear upon an illegitimate "secondary award" of it is absolutely not meaningless.
It also has a different committee awarding it - I guess every prize does - but this one is specifically, by design, decided by a group of parliamentarians drawn from the Norwegian parliament, generally older members chosen so that every party is represented. It's not academics or specialists in peace and diplomacy.
I would say this is the peace prize's fundamental flaw.
How so? People take it at face value and harm other people based on its outputs, as per the linked article.
> If a human of animal behaving the same were to be describe as intelligent, than so are those systems.
I cannot parse this sentence. Behaving the same as what? What bearing do claims of human or animal intelligence have on my statement that calling software "intelligent", and thereby inviting people to trust its probabilistic analysis, generates liability, actively harms people, and is not responsible behavior?
The bearing is on the claim that calling software "intelligent" has anything to do with trust or liability.
Dumb or not, no one is supposed to blindly trust it.
we can call it frediness, but the problem isn't what it's called, the problem is that the profit motive only works when there's competition. if there were several shops nearby and one of them started rejecting 80% of customers based on frediness, it'd go out of business and the others who prefer having a human guard would prosper. if it's the only shop within a 10 mile radius, and it flags only 1% of innocent people as shoplifters, then the profit motive doesn't provide the balancing force. that's what regulation is for.
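a toy back-of-envelope (every number here is made up, just to show the shape of the incentive):

    # toy numbers, all assumptions: fraction of daily revenue a shop
    # loses to falsely flagged customers who take their business elsewhere
    daily_customers = 1000
    revenue_per_customer = 20.0
    daily_revenue = daily_customers * revenue_per_customer

    def loss_fraction(false_positive_rate, share_who_leave):
        # share_who_leave ~ 1.0 with competitors next door,
        # much lower when it's the only shop for 10 miles
        flagged = daily_customers * false_positive_rate
        return flagged * share_who_leave * revenue_per_customer / daily_revenue

    print(loss_fraction(0.80, 1.0))   # 0.8   -> fatal, competition punishes it
    print(loss_fraction(0.01, 0.2))   # 0.002 -> invisible, no market pressure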
IMHO AI will be able to generate solutions close to the best one for a problem, with the remaining distance to be covered by humans.
the question is whether the cost of using AI to generate a solution and then fixing it manually will be lower or higher than the cost of building the solution with humans from the start.
as we have seen before with wizards and other solution-generating tools, the cost of fixing the generated solution is often higher than the cost of just building it from scratch manually.
also, as humans use AI more and more, there will be less and less high-quality material available for training, and AI will increasingly train on its own output, leading to a rapid decline in output quality.
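a minimal sketch of that feedback loop, assuming a toy Gaussian "model" rather than any real system: fit a distribution to data, sample from the fit, refit on the samples, repeat. the spread tends to shrink each generation (the tails vanish first), which is the simplified version of the "model collapse" effect.

    # toy model-collapse loop: each generation trains only on the
    # previous generation's output
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)  # original "human" data

    for generation in range(20):
        mu, sigma = data.mean(), data.std()  # ddof=0 slightly underestimates
        print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")
        # next generation's training set is sampled from the fitted model
        data = rng.normal(loc=mu, scale=sigma, size=50)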
That paragraph was my exact problem: hundreds of words, but the entire question of AI interacting with the physical world is boiled down to an assumption that "by the time we get there, we'll get there". It just doesn't seem realistic? Especially with predictions/hunches like the ones found in the article.
Others seem to mention it as a sort of feedback loop: once we get 50% of the way to AGI, I guess AI itself can help us overcome the physical limitations. I'm sceptical that it's as simple as that, but at the same time I can see it as very much possible.
How much do current AIs contribute to their own development? Likely quite a bit, but then again I don't know any AI researchers to ask.
>How much do current AIs contribute to their own development?
Really a pretty fair amount these days, as AIs can contribute to their own training data. In addition, they've gotten pretty good at measuring their own success. In the past, for example, if you had a robot hammering a nail, you'd have a person judging the quality of the strike; now you can point a few cameras at it and have it judge itself far better. Yeah, you still need some people around, but the number required has dropped a lot.
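A minimal sketch of that kind of loop, with toy stand-ins for both the generator and the judge (not any real lab's pipeline, just the shape of it): the model produces candidates, a judge model scores them, and only the high-scoring ones are fed back in as training data.

    # toy self-training loop: generator output filtered by a judge
    # becomes the next round's training data; both are stand-ins here
    import random

    def generate(n):
        # stand-in for a model sampling candidate outputs
        return [random.gauss(0.5, 0.2) for _ in range(n)]

    def judge(candidate):
        # stand-in for a learned quality/reward model, e.g. the
        # cameras scoring the hammer strike in the example above
        return candidate  # in this toy, higher value = better

    training_data = []
    for step in range(5):
        candidates = generate(100)
        kept = [c for c in candidates if judge(c) > 0.7]
        training_data.extend(kept)  # model output becomes training data
        print(f"step {step}: kept {len(kept)} of {len(candidates)}")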