> If you think your specific business case might require an AGI, feel free to hire one of the 3 billion people living in South East Asia who will gladly help you with tasks for a few dollars an hour.
How can a person writing an article about AGI fail to understand what AGI means so badly?
You're complaining that the kind of intelligences you can hire for $2/hour aren't useful/powerful enough. What does that have to do with anything?
By your own argument, people have already created very powerful intelligences - Elon Musk, Alan Turing, etc.
100 of those, with unlimited memory, internet access, and faster thinking speed would already be extremely powerful, enough to take over the world and do whatever they wish with it.
The whole idea of human beings being the pinnacle of intelligence always seemed ridiculous to me. We're the dumbest animals capable of discussing the concept of intelligence. What, the first few somewhat intelligent apes happen to also bump into the theoretical limit for intelligence?
Even minor variations in our genes are enough to account for the difference between Elon Musk and a Joe from the gas station who spends his life in a trailer park, sniffing glue and trying to seduce his cousin. And just because there are enough Joes already, artificial superintelligence can't exist or be useful?
Having said that, even Joe-level AIs would transform the world more dramatically than any invention before them - they would replace human labor.
Why? What makes you confident in this assertion?
To whoever has downvoted: I doubt it. Surely if you think this is so obvious that my question seems disingenuous, you can clearly articulate why the claim has substantial credibility?
The fact that Elon Musk (and similar brains) already have a ridiculous amount of power and are using it to change the world proves that Elon-Musk level intelligence is enough to gain a lot of power and change the world.
Imagining a team of 100 Elon Musks with unlimited memory/energy/etc/etc taking over the world is not such a stretch.
Surely the null-hypothesis should be “being rich makes it easier to get richer, and their initial breakthrough was luck”? Personally I would guess that there are a lot of smarter or equally smart people who never get close to that level of power.
Take any reasonably intelligent human, say a valedictorian from your hometown high school.
Now consider a system with the same intellectual capacity for reasoning about the world as your valedictorian and give that algorithm a data-center's worth of computational resources. Keep in mind that this system has perfect recall. Keep in mind that the speed at which the system can perform intellectual labor is not constant as it is in humans, but bounded only by the amount of computational resources that the system has access to.
So this system is just as intellectually capable as your valedictorian with one data-center's worth of resources and it has perfect recall. Now what happens if we double those resources and give it two datacenters? Does it become twice as intelligent? Or twice as fast at accomplishing some given intellectual task? We can't say for certain because the answer depends entirely on how such a system might work and since no such system yet exists, we can only speculate. But as it turns out, we don't need to know exactly which dimension it would improve in, only that it would improve in some relevant dimension.
And that isn't even considering the fact that such a system could improve its own architecture further improving in some given dimension relevant to its ability to act intelligently.
Assuming intelligence is capped at the human level is as naively anthropocentric as the old belief that the Earth was the center of the solar system.
As an aside, I had to Google “valedictorian”, as it’s a cultural reference that I have no experience of. We get our results in a much more boring way where I’m from.
I think that it's clear that brainpower is at the very least an important metric for determining get-stuff-done power, as the vast majority of human beings to ever accomplish anything of note (financial, scientific, or military success) all possessed above-average intelligence in one way or another. I suppose the case might be made for different kinds of intelligence, but presumably something considered worthy of the title of ASI would be competent in any conceivable dimension of intelligence.
And if it is true that brainpower is at least an important component in whatever might make up get-stuff-done power, and assuming that the level of brainpower under a hypothetical ASI's command was effectively (in relation to a human being anyway) unbounded, then whichever other (external?) metrics potentially attributable to get-stuff-done power could presumably be compensated for by the overwhelming weight of the brainpower.
I'm curious what other metrics for get-stuff-done power you might have in mind?
If you literally mean infinite memory, your claim is trivially correct regardless of whether we postulate a team of 100 Elon Musks or a team of any 100 replicated adult humans in the world. Even a modest human intelligence with access to infinite memory can eventually do anything possible in a computational sense. So let's pare that back a bit, because infinities break thought experiments.
Do you mean something like 100 Elon Musks, each with a datacenter's worth of GPUs? Is that sufficiently many to conquer the Earth and do what they wish? If so, why? How would they do it? If you don't have a realistic example for how they'd do it, why are you confident they can?
Obviously "infinite memory" and "taking over the world" are exaggerations. By "infinite memory" I mean "much more than any human have ever had, enough to memorize the internet". And by "taking over the world" I mean "be able to amass enough power to become a serious threat".
Humans, with human intelligence, have already come way too close to taking over the world or causing apocalypse or whatever.
If we're talking about non-aligned AGI, we're talking, at the very least, about an ElonMusk+ powered sociopath. Isn't that already a cause for concern?
Or, from a different perspective, imagine a business that employs 100 Elon Musks. The article claims that AGIs already exist and they're "useless". Do they? Are they?
James Clerk Maxwell was a genius, likely more intelligent than Musk, who did not take over the world.
If James Clerk Maxwell had been born 10,000 years earlier, he would likely have been a respected shaman or an intelligent warrior, but he would not have derived the laws of electromagnetism without 10,000 years of scientific development.
Likewise, Elon Musk is nothing without the experimentation, empiricism, and the sum total of human knowledge accumulated to this day.
Paraphrasing the linked article, a superintelligent AGI will do nothing because physical resources and crystallized empirical knowledge are the bottleneck to human progress. Not intelligence.
Now I would like to see AGI fanatics address that core thesis.
For several reasons (ethics, the possibility of violent reaction, etc.), there is an upper bound on how much power (read: work/time), or more generally performance (results/resources), we can squeeze out of ourselves and our fellow humans. This upper bound limits the crystallization process that you mention.
Technology enables us to push that upper bound, and it has been applied in a lot of areas - including the improvement of thought assistance (programming, excel, etc.).
But actual thought is an untapped area that resists automation; and, following Amdahl’s law, the more you optimize one part, the more the non-optimized part weighs as the reason we don’t go quicker. You can see this in the growing demand for intellectually intensive jobs over the last couple of centuries, and the huge salaries that elite workers in finance or programming command.
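The Amdahl's-law point can be made concrete with a small sketch (the 80%/100x figures below are purely hypothetical, just to illustrate the shape of the curve):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the total work is sped up
    by a factor s, and the remaining (1 - p) is left untouched."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose 80% of all work were automatable and ran 100x faster:
# the overall speedup is only ~4.8x, because the un-automated 20%
# now dominates total time. That un-automated remainder is "thought".
print(round(amdahl_speedup(0.8, 100), 2))
```

Even with an infinite speedup of the automatable part, the overall gain is capped at 1/(1 - p), which is why automating the last resistant fraction matters so much.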
If we reach thought automation that is at least on par with humans, we have just opened the only area that was resisting optimization, and we can suddenly improve entire dimensions that were out of bounds.
And, as a free bonus, we can now more easily convert capital to labor (just deploying more machines and “cloning workers”, instead of raising children).
This could create a runaway effect, or at least have an immense multiplicative effect.
Up to now, your central point still stands:
> Elon Musk is nothing without the experimentation, empiricism, and the sum total of human knowledge accumulated to this day.
That /is/ true. But! If AGI is reached, you a) can buy your own Elon, b) can copy it, improve it, experiment with it and generally manage it in ways we currently can’t; and c) you start to accumulate knowledge capital at increasing velocity that is yours and doesn’t decide by itself to die, quit and go elsewhere, or play politics with its power instead of diligently working. So from where we stand, it seems reasonable to expect a “winner takes all” scenario.
In sum: a huge part of the initial value doesn’t come (nor needs) from god-level power, but from “enslaving” thinkers the way we “enslave” production lines, cars and computers. None of this requires something a thousandfold more intelligent than the average human. Not at first at least... ;)
Very slowly and not reliably.
> updating their own algorithms,
Slowly and not reliably.
> sharing their algorithms with other AGIs
> and learning new complex skills.
Very slowly, yes
> To add to that, they are energy efficient; you can keep one running optimally for $5 to $50/day depending on location, much less than your average server farm used to train a complex ML model.
To run a reasonably complex ML model does not require $1500/month.
And these AGIs require significant resources and space, much of which has to be provided by other AGIs.
> If we disagree on that assumption, then what's stopping the value of human mental labor from sky-rocketing if it's in such demand? What's stopping hundreds of millions of people with perfectly average working brains from finding employment?
They're slow and complex and expensive and training scales terribly. Training a model and running 1000 copies of it is not 1000 times as complex as training a model and running just one copy.
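The amortization point can be sketched with made-up numbers: for an ML model, training is a one-time cost, so deploying a fleet of 1000 copies costs nowhere near 1000 times one copy, whereas each human must be "trained" individually.

```python
def fleet_cost(train_cost, run_cost_per_copy, n_copies):
    """Total cost of training a model once and deploying n copies.
    Unlike with humans, the expensive training step is paid only once."""
    return train_cost + run_cost_per_copy * n_copies

# Hypothetical cost units: training = 1,000,000; running one copy = 10.
one = fleet_cost(1_000_000, 10, 1)
fleet = fleet_cost(1_000_000, 10, 1000)
print(fleet / one)  # ~1.01 -- 1000 copies for barely more than one
```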
If you want to counter the arguments that AGI will be capable of exponential self-improvement, you need to use an analogue other than humans. Humans categorically lack the capability to exponentially self-improve. Likewise human intelligence is definitionally non-alien, which is not something we can say a priori about any successful AGI we create.
Wait, I didn't agree to that.
From where I sit, it looks like Malthus could have been right. We could be living in a hellish equilibrium with famine because the Earth can't provide enough cropland to feed everyone, or enough firewood to keep us warm.
Our escape from Malthus is because of the human mind. Some really smart people invented irrigation, terracing, crop rotation, selective breeding and genetic engineering, to improve the efficiency of agriculture by orders of magnitude. Likewise, we've found alternative fuels, from wood to wax to whale blubber to fossil fuels to nukes (and hopefully still inventing!).
But we've taken all that low-hanging fruit, so we need greater reservoirs of creativity and engineering. Adding more smarts is how we continue to keep ahead of Malthus.
> there are an estimate of 7,714,576,923 AGI algorithms residing upon our planet. You can use the vast majority of them for less than 2$/hour

> most of it goes to waste
Also, a human is pretty big, needs to eat and fart. A phone, on the other hand, is pretty small and only needs a battery.
> Superhuman artificial general intelligence is not something that we can define
Err, people pay way beyond $2/hour for translations...
source: have paid a lot for good quality translations in the last few years.
I ran into one of his other posts here that contains some similar anti-gems (I don't want to pick and choose out of context) that tell me he has a knack for selecting a subject and going at it completely sideways, missing most of the substance. I found his characterization of experienced engineers who choose not to focus on the latest fads extremely narrow-minded and ultimately dead wrong. His up-to-date 14-year-old "kid" who can teach an experienced engineer "new tricks" is particularly amusing.
That points out an interesting human bias problem about identifying intelligence: we're really good at dehumanizing our enemies, political or otherwise. So presumably a theoretical AI that didn't happen to match the observer's arbitrary standards of religion or politics or maybe even fashion would be viciously attacked as not being intelligent, regardless of actual performance.
Oceania has always been at war with Eastasia, therefore this AI is obviously not intelligent.
And that's a BIG change.
Even if the AI only reaches the human intelligence level, the AI does not sleep, does not need to rest, needs no food, does not die, and can travel from one office to the other at the speed of light.
We cannot compete.
>Whilst I won't question their presumed intelligence, their achievements are rather unimpressive
We talkin' about the same Terry Tao, Fields Medalist?
Only thing I can think of is multidimensionality. I can't get my head around what problems you can have in, say, 1000-dimensional space. (One of my big wishes for virtual reality would be for someone to build a 4-dimensional space there, with some wristband sensors that would indicate orientation in the 4th dimension. Just to see if I could start to 'get' 4-dimensional space.)
Another question I'd like to understand regarding superintelligence is how it would be capable of overruling humanity. I mean, we have people around here who are really intelligent compared to the general population, and mostly they are just ignored. Why would we not just ignore a superintelligence as well?
2) Just because we don’t have a single precise definition of “intelligent” / just because intelligence isn’t a single dimensional thing, does not mean that we can’t establish that something is or would be more intelligent than something else. Comparing the size of two boxes isn’t always well defined, but if one fits inside the other, the other is clearly larger.
3) People in AI safety seem to make it fairly clear that what they mean by intelligence is the ability to select, among the available options, those that effectively further goals.
4) This fourth thing isn’t necessarily a criticism so much as a note, but, the article/post seems rather opposed to things like the “great man theory of history”?
Personally, I am not afraid of AGI, but of millions of smaller specialized AIs that quantify and assess the smallest details of our lives. Those AIs will form the basis of predictions for more and more essential services.
These may not even be significantly more precise than the classic fortune tellers of the past, but the results will be treated as gospel, and there will be few left to question the context. Especially not some lame human.
"The system proves that your online behavior is indicative of a mass murderer. Take him away boys."
It's like having a parent who believes in you. :D
There are 0 AGIs on earth. I recognize the author is intentionally twisting the definition of "AGI", since the imprecise nature of the field can lead to some debate about the definition, but I think it makes the most sense to side with the most popular definition, which is: "Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can."
No such entities or algorithms exist that could reasonably be considered to fall under this definition.
The goal of the AI field of study is not necessarily to create a system that emulates a human. This is only a proposed solution to accomplishing the actual goal which is to create something with the same intellectual capacity as a human. It is an important distinction.
Yes, if the goal was to simply create an artificial human and then stop there without giving the system the ability to further improve itself then it would make no sense to spend decades of man hours on the task because we could just do it the old fashioned way and spend approximately twenty years raising a child to maturity.
But no, the goal is to create a system capable of reasoning in a way sufficiently similar to a human that it could reasonably be expected to perform adequately in any situation a human might. Given that this could be accomplished, presumably the intelligence of the system could be scaled according to the amount of computational power fed into it, propelling it past the most intelligent humans who have ever lived. Consider a system twice as intelligent as Einstein with the ability to reflect on its own architecture and then modify it, thus further increasing its own intelligence.
From there, the hope is that something orders of magnitude more intelligent than the most intelligent humans to ever live could more aptly extract observations about the world given the same amount of data as humans and then conceive of clever ways to overcome the limitations of "finite resources."
The author seems hopelessly out of touch with the actual aims of the field they have chosen to criticize, so I am unsure why they ever thought they had any place to (attempt to) make their critique in the first place.
And it doesn't "self improve" at all.
We might never be able to emulate a human brain well enough with our technology and resources, including tomorrow's technology. Either for the same reason we will never get 0.1 nanometer chips or zero round-trip latency around the earth: the laws of physics. Or because it's too complex (and there's no signed contract with the universe that humans will eventually be able to handle any complexity level in their technology; we might hit several hard problems where we don't have better ideas, or where better concepts/inventions are far beyond our grasp).
We’re also full of biases.
When we focus time and effort on things like the Manhattan Project or the Apollo Program, we are able to accomplish a lot in a short time.
How many other machines have we created that are impossible to turn off or have endless supply of uninterruptible power?
Due to the difference in intelligence — sophistication of planning and understanding of consequences — wouldn’t it be trivial to trick your master into doing things which weakened his control over you?
Might you do this not out of malice, but because you believed the 8-year-old was not competent and a danger to both of you while in charge?
The risk is that we will not hit the off button because we won’t understand that we’re in dangerous territory until it’s much too late, and the AI has copied itself, secured the loyalty of the military, or something else we can’t foresee as a liberating maneuver for it.
What if you were in that situation, but were incorrect about what things are good, and, while you had a better understanding of what actions would result in what results, the 8 year old had a better understanding of what is good?
The 8-year-old might view your intelligent plans as a terrible abuse:
>BUT I WANT TO EAT TEN MORE TWINKIES, YOU'RE SUPPOSED TO SERVE ME WHAT I WANT, NOT DEPRIVE ME
Could we similarly be wrong if we think a future superintelligent AI is abusing us? Should we consider submitting to it?
For one, because it can become progressively ever more entwined with daily life, the economy, etc., until we can no longer easily "turn it off" when we want to.
We might consider Facebook, cars, fossil fuels, etc. harmful, but it's not easy to "turn them off", even if many people wished it. Some people and private interests will be invested in preserving them, alternatives might not be available for the good parts of what they do, and so on. At best, it can take years to get off them entirely, even after we make the initial decision.
Second, re a general purpose AI, if we make them able to walk around (e.g. in a robotic body) they could go find their own power. We don't have an "endless supply of uninterruptible power" either, but we go and get food when we need it, and you need to arrest/kill us to turn us off.
>How many other machines have we created that are impossible to turn off or have endless supply of uninterruptible power?
If you include artificial life that we created and it escaped from the lab, I'd say a lot (microbes, altered organisms where we changed various genes, etc). Those can be impossible to "turn off" if they get to nature, and some of those could have dire consequences (e.g. replace, infest, or eat into some other ecologically needed species).
Well, you will notice the process itself doesn't sound so sci-fi. The complicated part is the first part: AIs are not that clever, and we don't even have any idea how to make them that clever in the first place. If we did, and we were stupid enough, it would still be unlikely that the AI could find ways to replicate so easily, because it would probably need impressive hardware to run, not your regular laptop, but you get the idea. But once it "escapes", we are rather ... you know.
No real need to worry about these for the moment, though; more worrisome are the real algorithms that trade stocks and are already "controlling" our economies (well, you get the point).
Also part of the point is that as we rely more and more on computer networks for communications we also become more and more susceptible to be tricked by an attacker from inside the network.
As an example, if we had to shut off international food deliveries for a week, it would already have significant consequences. As another example, a "super intelligent" attacker could try to escalate global conflicts.
I honestly believe much of the fatalism related to the singularity is blown out of proportion, but it's not as if it has no solid ground beneath it.
good luck :)
Problem solving is one area, but this is not the fundamental cause  of why human brains evolved toward what they are now. Survival, necessitated by and coupled with reproduction makes life exist at the current scale, like with any (behavioral) pattern which favours existence and self-propagation. What we call life IS intelligence.
 Or rather, explanation, with the fundamental cause being "physics, as it is"
If you just mean that there needs to be a way of determining, in some cases, that one thing is “more intelligent” than another, then yes, that is necessary for “intelligence” to be a useful notion.
But we do have methods of doing that, so there is no problem there.
Note that the way we have of comparing intelligence needn’t be a linear order. It can be a partial order.
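A minimal sketch of such a partial-order comparison, using the boxes as a stand-in (dimensions are hypothetical; a box fits inside another, allowing axis-aligned rotations, iff its sorted dimensions are componentwise smaller):

```python
def fits_inside(a, b):
    """True if a box with dimensions a fits strictly inside a box with
    dimensions b, allowing axis-aligned rotations (compare sorted dims)."""
    return all(x < y for x, y in zip(sorted(a), sorted(b)))

print(fits_inside((1, 2, 3), (2, 3, 4)))  # True: comparable, second is larger
print(fits_inside((1, 1, 5), (2, 2, 2)))  # False
print(fits_inside((2, 2, 2), (1, 1, 5)))  # False: neither fits -- incomparable
```

The last two boxes are incomparable in either direction, yet the relation is still perfectly usable where it does apply: exactly the shape of a partial order.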
If "general" truly meant general purpose, the measure might be how well the system arranges every possible (sequence of) combination(s) of matter and energy in space and time, while optimizing along any combination of these.
But that in itself is not a single objective, but a set of objectives, and not really a utility function.
I agree with this assessment, and I think this is a much better argument against AI danger than the ones I use (which boil down to unfalsifiability, violation of the laws of entropy, and a lack of astronomical evidence of celestial AGIs creating paperclip planets).
I just got out of the shower, so I'll add my recent showerthought: In a human brain, each neuron is a CPU in and of itself; this means that every neuron in the human brain can run in parallel. In modern computers, the "perceptrons" are virtual, and there are only between 1 and 4 CPUs to allow activation of a specific "perceptron" at a given time.
So if you have a biological brain with 100 billion neurons sitting next to a virtual brain with 100 billion perceptrons, the virtual brain will run up to 100 billion times slower even at equivalent intelligence (divide by the number of CPUs actually available).
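A hedged back-of-envelope of that slowdown claim (hypothetical numbers; note it ignores that CPUs switch many orders of magnitude faster than neurons fire, which pulls strongly the other way, so treat it as an upper bound on the serialization penalty alone):

```python
def serialization_slowdown(n_units, n_processors):
    """Rough per-step slowdown from time-multiplexing n_units parallel
    'neurons' onto n_processors sequential processors: about n / k."""
    return n_units / n_processors

print(serialization_slowdown(100e9, 1))  # 1e11: the "100 billion x" figure
print(serialization_slowdown(100e9, 4))  # 2.5e10: with 4 CPUs
```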
I think many of the AI algorithms can run on GPUs, where core counts can get into the thousands (still very far from millions). But these are largely independent — the interconnects between GPUs are not on the scale of neurons.
My guess: we develop specialized hardware that, over time, will increasingly resemble organic brain structure.
And if what you care about is running a wide variety of mainstream stats or machine learning algorithms, the "specialized hardware" you mention is the modern GPU, or something very much like it - I don't see any low-hanging fruit that might make some other HW architecture more feasible. Fixed-point compute might get there someday, but it's way too fiddly at present - it's not really clear how to "tweak" mainstream learning algorithms so as to dispense with the large dynamic range that floating point compute provides.
Why? If I try to reconstruct the argument, I can only come up with reasoning that would show evolution also violates entropy laws.
So intelligence = entropy production and energy consumption, and so it cannot grow out of control and transform the planet overnight.
The reason Maxwell's Demon cannot exist is that such a device would need to produce entropy in order to operate.
Which means that the level of intelligence an AGI can obtain is limited to Earth's energy input. And the impact that AGI has on the planet is also limited by Earth's natural processes.
Which means that a hypothetical paperclip maximizer that turns humanity into paperclips and that becomes too intelligent to be turned off is a ridiculous scenario that ignores the potential limitations of an AGI.
If 10,000 people's entire purpose in life was to turn you into paperclips, I don't think you could outrun them for long.
We don't see evidence of life on other planets, and certainly not any evidence of superintelligence gone awry; a planet filled with paperclips is in an extremely low-entropy state. Evolution can happen because human beings, despite being low-entropy beings, turn low-entropy things like bananas and apples into high-entropy things like poop and carbon dioxide. This delicate process is likely why humans are relatively easy to kill, and why a superintelligent AGI would be unlikely to overpower mankind's dominant position on Earth.
So the assumption that a superintelligent computer program could suddenly harness 100% of the Earth's solar energy and outsmart humans to the point where it can't be turned off is still a highly unlikely possibility. The arguments in favor of AI danger are easy to make but likely wrong; the arguments against AI danger are complex and require lots of thought, but are most likely correct.