Artificial general intelligence is here, and it's useless (cerebralab.com)
40 points by george3d6 24 days ago | 72 comments



Just because you can't figure out how to define/test superhuman intelligence doesn't mean it can't exist.

> If you think your specific business case might require an AGI, feel free to hire one of the 3 billion people living in South East Asia that will gladly help you with tasks for a few dollars an hour.

How can a person writing an article about AGI fail to understand what AGI means so badly?

You're complaining that the kind of intelligences you can hire for $2/hour aren't useful/powerful enough. What does that have to do with anything?

By your own argument, people have already created very powerful intelligences - Elon Musk, Alan Turing, etc.

100 of those, with unlimited memory, internet access, and faster thinking speed would already be extremely powerful, enough to take over the world and do whatever they wish with it.

The whole idea of human beings being the pinnacle of intelligence always seemed ridiculous to me. We're the dumbest animals capable of discussing the concept of intelligence. What, the first few somewhat intelligent apes happen to also bump into the theoretical limit for intelligence?

Even minor variations in our genes are enough to account for the difference between Elon Musk and a Joe from the gas station who spends his life in a trailer park, sniffing glue and trying to seduce his cousin. And just because there are enough Joes already, artificial superintelligence can't exist or be useful?

Having said that, even Joe-level AIs would transform the world more dramatically than any invention before them - they would replace human labor.


> 100 of those, with unlimited memory, internet access, and faster thinking speed would already be extremely powerful, enough to take over the world and do whatever they wish with it.

Why? What makes you confident in this assertion?

To whoever has downvoted: I doubt it. Surely if you think this is so obvious that my question seems disingenuous, you can clearly articulate why the claim has substantial credibility?


The existence of Elon Musk's brain proves that Elon-Musk level intelligence is possible.

The fact that Elon Musk (and similar brains) already have a ridiculous amount of power and are using it to change the world proves that Elon-Musk level intelligence is enough to gain a lot of power and change the world.

Imagining a team of 100 Elon Musks with unlimited memory/energy/etc/etc taking over the world is not such a stretch.


> The fact that Elon Musk (and similar brains) already have a ridiculous amount of power and are using it to change the world proves that Elon-Musk level intelligence is enough to gain a lot of power and change the world.

Surely the null-hypothesis should be “being rich makes it easier to get richer, and their initial breakthrough was luck”? Personally I would guess that there are a lot of smarter or equally smart people who never get close to that level of power.


You're completely missing the point, not seeing the forest for the trees.

Take any reasonably intelligent human, say a valedictorian from your hometown high school.

Now consider a system with the same intellectual capacity for reasoning about the world as your valedictorian and give that algorithm a data-center's worth of computational resources. Keep in mind that this system has perfect recall. Keep in mind that the speed at which the system can perform intellectual labor is not constant as it is in humans, but bounded only by the amount of computational resources that the system has access to.

So this system is just as intellectually capable as your valedictorian with one data-center's worth of resources and it has perfect recall. Now what happens if we double those resources and give it two datacenters? Does it become twice as intelligent? Or twice as fast at accomplishing some given intellectual task? We can't say for certain because the answer depends entirely on how such a system might work and since no such system yet exists, we can only speculate. But as it turns out, we don't need to know exactly which dimension it would improve in, only that it would improve in some relevant dimension.

And that isn't even considering the fact that such a system could improve its own architecture, further improving in some given dimension relevant to its ability to act intelligently.

Assuming intelligence is capped at the human level is as naively anthropocentric as the old belief that the Earth was the center of the solar system.


The point appeared to be that intellectual capacity was power. I’m not disputing that AGI or ASI should be possible — indeed, I’ve written about the insanely transformative nature and plausible timescales myself [1] [2] [3] [4] — but I am disputing that brainpower is the most important metric for get-stuff-done power.

As an aside, I had to Google “valedictorian”, as it’s a cultural reference that I have no experience of. We get our results in a much more boring way where I’m from.

[1] https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brai...

[2] https://kitsunesoftware.wordpress.com/2017/11/17/the-end-of-...

[3] https://worldbuilding.stackexchange.com/questions/51746/when...

[4] https://kitsunesoftware.wordpress.com/2017/11/26/you-wont-be...


I'm a bit late in my reply as I don't have notifications set up for HN, but it appears you may, so here goes.

I think it's clear that brainpower is at the very least an important metric for determining get-stuff-done power, as the vast majority of human beings who ever accomplished anything of note (financial, scientific, or military success) possessed above-average intelligence in one way or another. I suppose the case might be made for different kinds of intelligence, but presumably something considered worthy of the title of ASI would be competent in any conceivable dimension of intelligence.

And if it is true that brainpower is at least an important component in whatever might make up get-stuff-done power, and assuming that the level of brainpower under a hypothetical ASI's command was effectively (in relation to a human being anyway) unbounded, then whichever other (external?) metrics potentially attributable to get-stuff-done power could presumably be compensated for by the overwhelming weight of the brainpower.

I'm curious what other metrics for get-stuff-done power you might have in mind?


Why is it not a stretch? This is not a rhetorical question (in the Socratic sense) - what I'm asking for is precision.

If you literally mean infinite memory, your claim is trivially correct regardless of whether we postulate a team of 100 Elon Musks or a team of any 100 replicated adult humans in the world. Even a modest human intelligence with access to infinite memory can eventually do anything possible in a computational sense. So let's pare that back a bit, because infinities break thought experiments.

Do you mean something like 100 Elon Musks, each with a datacenter's worth of GPUs? Is that sufficiently many to conquer the Earth and do what they wish? If so, why? How would they do it? If you don't have a realistic example for how they'd do it, why are you confident they can?


All I'm saying is that much dumber people have been known to get much closer to taking over the world.

Obviously "infinite memory" and "taking over the world" are exaggerations. By "infinite memory" I mean "much more than any human has ever had, enough to memorize the internet". And by "taking over the world" I mean "be able to amass enough power to become a serious threat".

Humans, with human intelligence, have already come way too close to taking over the world or causing apocalypse or whatever.

If we're talking about non-aligned AGI, we're talking, at the very least, about an ElonMusk+ powered sociopath. Isn't that already a cause for concern?

Or, from a different perspective, imagine a business that employs 100 Elon Musks. The article claims that AGIs already exist and that they're "useless". Do they? Are they?


This entire comment thread is populated by people who did not read and contemplate the posted article.

James Clerk Maxwell was a genius, likely more intelligent than Musk, who did not take over the world.

If James Clerk Maxwell had been born 10,000 years earlier, he would likely have been a respected shaman or an intelligent warrior, but he would not have derived the laws of electromagnetism without 10,000 years of scientific development.

Likewise, Elon Musk is nothing without the experimentation, empiricism, and the sum total of human knowledge accumulated to this day.

Paraphrasing the linked article, a superintelligent AGI will do nothing because physical resources and crystallized empirical knowledge are the bottleneck to human progress, not intelligence.

Now I would like to see AGI fanatics address that core thesis.


I’m not an AGI fanatic but I will try to address it.

For several reasons (ethics, the possibility of violent reaction, etc.), there is an upper bound on how much power (read: work/time), or more generally performance (results/resources), we can squeeze out of ourselves and our fellow humans. This upper bound limits the crystallization process that you mention.

Technology enables us to push that upper bound, and it has been applied in a lot of areas - including the improvement of thought assistance (programming, Excel, etc.).

But actual thought is an untapped area that resists automation; and, following Amdahl’s law, the more you optimize one part, the more the non-optimized part weighs on why we don’t go faster. You can see this in the growing demand for intellectually intensive jobs over the last couple of centuries, and in the huge salaries that elite workers in finance or programming command.
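A rough sketch of that Amdahl’s-law effect (Python; the fractions and speedups are invented, purely to show the shape of the curve):

    # Amdahl's law: overall speedup when a fraction p of the work is sped up by a factor s.
    def overall_speedup(p, s):
        return 1 / ((1 - p) + p / s)

    # Say 80% of economic work is already automatable and we make that part 100x faster:
    print(overall_speedup(0.80, 100))   # ~4.8x overall; the un-automated 20% ("thought") dominates
    # Automate thought too, so 99.9% of the work is automatable:
    print(overall_speedup(0.999, 100))  # ~91x overall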

If we reach thought automation that is at least on par with humans, we have just opened the only area that was resisting optimization, and we can suddenly improve entire dimensions that were out of bounds.

And, as a free bonus, we can now more easily convert capital to labor (just deploying more machines and “cloning workers”, instead of raising children).

This could create a runaway effect, or at least have an immense multiplicative effect.

For now, your central point still stands:

> Elon Musk is nothing without the experimentation, empiricism, and the sum total of human knowledge accumulated to this day.

That /is/ true. But! If AGI is reached, you can a) buy your own Elon, b) copy it, improve it, experiment with it and generally manage it in ways we currently can’t, and c) accumulate knowledge capital at increasing velocity, capital that is yours and doesn’t decide by itself to die, quit and go elsewhere, or play politics with its power instead of diligently working. So from where we stand, it seems reasonable to expect a “winner takes all” scenario.

In sum: a huge part of the initial value doesn’t come (nor needs) from god-level power, but from “enslaving” thinkers the way we “enslave” production lines, cars and computers. None of this requires something a thousandfold more intelligent than the average human. Not at first at least... ;)


Unless AGI can do 1000s of years of human thinking in a few weeks. Or, you know, read Wikipedia.


"All I'm saying is that much dumber people have been known to get much closer to taking over the world."

Like who?


> They are capable of creating new modified version of themselves,

Very slowly and not reliably.

> updating their own algorithms,

Slowly and not reliably.

> sharing their algorithms with other AGIs

Slowly.

> and learning new complex skills.

Very slowly, yes

> To add to that, they are energy efficient, you can keep one running for optimally for 5 to 50$/day depending on location, much less than your average server farm used to train a complex ML model.

To run a reasonably complex ML model does not require $1500/month.

And these AGIs require significant resources and space, much of which has to be provided by other AGIs.

> If we disagree on that assumption, than what's stopping the value of human mental labor from sky-rocketing if it's in such demand ? What's stopping hundreds of millions of people with perfectly average working brains from finding employment ?

They're slow and complex and expensive, and their training scales terribly. Training an ML model and running 1000 copies of it is not 1000 times as costly as training it and running just one copy; training 1000 humans is.
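A toy cost comparison to make that scaling point concrete (Python; every number is invented, only the shape of the comparison matters):

    # Copying a trained model is cheap; "copying" a trained human is not.
    TRAIN_MODEL = 1_000_000   # one-off cost to train a model (invented)
    RUN_COPY    = 1_000       # yearly cost to run one copy (invented)
    HUMAN       = 20_000      # yearly cost to train and employ one human (invented)

    def model_cost(n):
        return TRAIN_MODEL + n * RUN_COPY

    def human_cost(n):
        return n * HUMAN

    print(model_cost(1), human_cost(1))        # 1001000 vs 20000: the human wins at n=1
    print(model_cost(1000), human_cost(1000))  # 2000000 vs 20000000: cheap copies win at scale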


This is a good critique. I'm very skeptical of the near term (<100 years) risk of AGI, but I don't really think this article's arguments are valid. Saying we already have AGI because there exist humans is vacuous and seems to almost deliberately miss the point.

If you want to counter the arguments that AGI will be capable of exponential self-improvement, you need to use an analogue other than humans. Humans categorically lack the capability to exponentially self-improve. Likewise human intelligence is definitionally non-alien, which is not something we can say a priori about any successful AGI we create.


Indeed. Not sure if the author is just severely biased or demonstrates a fundamental failure of imagination. But those arguments were already rather stale fifteen years ago.


if we are agreed on the fact that 70 billion people wouldn't be much better than 7 billion, that is to say, adding brains doesn't scale linearly… why are we under the assumption that artificial brains would help ?

Wait, I didn't agree to that.

From where I sit, it looks like Malthus could have been right. We could be living in a hellish equilibrium with famine because the Earth can't provide enough cropland to feed everyone, or enough firewood to keep us warm.

Our escape from Malthus is because of the human mind. Some really smart people invented irrigation, terracing, crop rotation, selective breeding and genetic engineering, to improve the efficiency of agriculture by orders of magnitude. Likewise, we've found alternative fuels, from wood to wax to whale blubber to fossil fuels to nukes (and hopefully still inventing!).

But we've taken all that low-hanging fruit, so we need greater reservoirs of creativity and engineering. Adding more smarts is how we continue to keep ahead of Malthus.


Yeah SSC had a good essay on just how AI could be helpful in this way: https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...


    there are an estimate of 7,714,576,923 AGI
    algorithms residing upon our planet. You can
    use the vast majority of them for less than 2$/hour
Way too expensive for many tasks.

    most of it goes to waste
If it was $0.0002 instead of $2, it would be used for translation and other tasks humans are better at than current software.

Also, a human is pretty big and needs to eat and fart. A phone, on the other hand, is pretty small and only needs a battery.

    Superhuman artificial general intelligence
    is not something that we can define
Google will define it as "AI translations increase our KPIs more than human translations." And measure it via A/B tests.


>If it was $0.0002 instead of $2, it would be used for translation and other tasks humans are better at than current software.

Err, people pay way beyond $2/hour for translations...


Market rate for good ones seems to be US$0.20/word.

source: have paid a lot for good quality translations in the last few years.


Yes, that's what I was getting at. $2/hour is dirt cheap.


He severely misunderstands the relevant issues, I think. Here is a much more focused and illuminating perspective, by someone who has never worked in tech, no less:

https://jacobitemag.com/2019/04/03/primordial-abstraction

I ran into one of his other posts here [1] that contains some similar anti-gems (I don't want to pick and choose out of context) that tell me he has a knack for selecting a subject and going at it completely sideways, missing most of the substance. I found his characterization of experienced engineers who choose not to focus on the latest fads extremely narrow-minded and, ultimately, dead wrong. His up-to-date 14-year-old "kid" who can teach an experienced engineer "new tricks" is particularly amusing.

[1] https://blog.cerebralab.com/#!/a/The%20Red%20Queen


It seems to mostly be a propaganda article in the sense that I've never seen a statistical analysis supporting "Go down the list of high IQ individuals and what you'll mostly find is rather mediocre people with a few highly interesting quirks".

That points to an interesting human bias problem in identifying intelligence: we're really good at dehumanizing our enemies, political or otherwise, so presumably a theoretical AI that doesn't happen to match the observer's arbitrary standards of religion or politics or maybe even fashion would be viciously attacked as not being intelligent, regardless of actual performance.

Oceania has always been at war with Eastasia, therefore this AI is obviously not intelligent.


It's true that, at the end of the day, the AI is no more than a tool, but the special thing about this tool is that it can do mental tasks that previously could only be done by humans.

And that's a BIG change.

Even if the AI only reaches the human intelligence level, the AI does not sleep, does not need to rest, needs no food, does not die, and can travel from one office to the other at the speed of light.

We cannot compete.


>Have you ever heard of Marilyn vos Savant or Chris Langan or Terence Tao or William James Sidis or Kim Ung-yong ? These are, as far as IQ tests are concerned, the most intelligence members that our species currently contains.

>Whilst I won't question their presumed intelligence, their achievements are rather unimpressive

We talkin' about the same Terry Tao, Fields Medalist?


I have been wondering a bit about what kinds of problems a superintelligence would be able to solve (or even recognize as problems) that human intelligence can't grasp.

The only thing I can think of is multidimensionality. I can't get my head around what problems you can have in, say, 1000-dimensional space. (One of my big wishes for virtual reality would be that someone built a 4-dimensional space there, with some wristband sensors that would indicate orientation in the 4th dimension. Just to see if I could start to 'get' 4-dimensional space.)

Another question I'd like to understand regarding superintelligence is how it would be capable of overruling humanity. I mean, we have people around here who are really intelligent compared to the general population, and mostly they are just ignored. Why would we not just ignore a superintelligence as well?


1) Calling humans AGI seems to be ignoring the A in AGI. Not that that alters the point any. It would just make more sense to call humans GI than AGI.

2) Just because we don’t have a single precise definition of “intelligent” / just because intelligence isn’t a single-dimensional thing does not mean that we can’t establish that something is or would be more intelligent than something else. Comparing the size of two boxes isn’t always well defined, but if one fits inside the other, the other is clearly larger (a small sketch of this partial-order idea follows after this list).

3) People in AI safety seem to make it fairly clear that what they mean by intelligence is the ability to select, among the available options, those that effectively further goals.

4) This fourth thing isn’t necessarily a criticism so much as a note, but, the article/post seems rather opposed to things like the “great man theory of history”?
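Here is a minimal sketch of the “fits inside” partial order from point 2 (Python; the boxes and their dimensions are made up for illustration):

    # A toy version of the "fits inside" partial order: boxes are (w, h, d) tuples.
    def fits_inside(a, b):
        # Ignoring rotations: a fits inside b if it is smaller in every dimension.
        return all(x < y for x, y in zip(a, b))

    a, b, c = (1, 1, 1), (2, 2, 2), (3, 1, 1)
    print(fits_inside(a, b))                     # True: b is clearly the "larger" box
    print(fits_inside(a, c), fits_inside(c, a))  # False False: neither fits; incomparable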


One thing that many proponents of the harmless / weak / uninteresting AGI view seem to ignore is that computers already have many superhuman capabilities. Obvious examples are math, rote memorization, and similar. Even an AGI whose "general" intelligence is comparable to a very dumb human's would presumably have these superhuman capabilities readily available. It's hard to imagine how powerful even that combination might turn out, even leaving out the obvious possibility of self-re-engineering.


I think he has an interesting take.

Personally, I am not afraid of AGI, but of a million smaller specialized AIs that quantify and assess the smallest details of our lives. Those AIs will form the basis of predictions for more and more essential services.

These maybe won't even be significantly more precise than the classic fortune tellers of the past, but the results will be treated as gospel, and there will be few left to question the context. Especially not some lame human.


Oh, I think they'll be way better, measurably so, than fortune tellers from the past. Which is the problem. The error rate will still be meaningful but as the error rate gets smaller the dismissal of edge cases will grow more likely.

"The system proves that your online behavior is indicative of a mass murderer. Take him away boys."


Exactly my worst nightmare for AI. And you are probably right about their error rates and the comparison with fortune tellers was just a way to cope.


"The system proves that your online behavior is indicative of a mass murderer. Let's keep an eye on 'em boys."


What if your measurements are the reason he became a mass murderer? Maybe AI could be useful here.


If they think I'd be good at it, perhaps I should try it?

It's like having a parent who believes in you. :D


What a silly article. Every premise presented is flawed and thus every conclusion based on those premises is not applicable.

There are 0 AGIs on Earth. I recognize the author is intentionally twisting the definition of "AGI", since the imprecise nature of the field can lead to some debate about the definition, but I think it makes the most sense to side with the most popular definition, which is: "Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can."[1]

No such entities or algorithms exist that could reasonably be considered to fall under this definition.

The goal of the AI field of study is not necessarily to create a system that emulates a human. This is only a proposed solution to accomplishing the actual goal which is to create something with the same intellectual capacity as a human. It is an important distinction.

Yes, if the goal was to simply create an artificial human and then stop there without giving the system the ability to further improve itself then it would make no sense to spend decades of man hours on the task because we could just do it the old fashioned way and spend approximately twenty years raising a child to maturity.

But no, the goal is to create a system capable of reasoning in a way sufficiently similar to a human that it could reasonably be expected to perform adequately in any situation that a human might. Given that this could be accomplished, presumably the intelligence of the system could be scaled according to the amount of computational power fed into it, propelling it past the most intelligent humans that have ever lived. Consider a system twice as intelligent as Einstein with the ability to reflect on its own architecture and then modify that architecture, thus further increasing its own intelligence.

From there, the hope is that something orders of magnitude more intelligent than the most intelligent humans to ever live could more aptly extract observations about the world given the same amount of data as humans and then conceive of clever ways to overcome the limitations of "finite resources."

The author seems hopelessly out of touch with the actual aims of the field he has chosen to criticize, and so I am unsure why he ever thought he had any place to (attempt to) make this critique in the first place.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


An AGI will think much faster than 7 billion human brains put together. It will also self improve much faster.


Why would it? At the moment we can't even emulate a fly at the AGI level. So our AGI now is "much slower" than 1 human, never mind "7 billion human brains put together".

And it doesn't "self improve" at all.

We might never be able to emulate a human brain well enough with our technology and resources, including tomorrow's technology, either for the same reason we will never get 0.1 nanometer chips or sub-zero roundtrip latency around the Earth (the laws of physics), or because it's too complex (and there's no signed contract with the universe that humans will eventually be able to handle any complexity level in their technology - we might hit several hard problems where we don't have better ideas, or where better concepts/inventions are far beyond our grasp).


It’s not even speed. Most humans waste most of their days on minutiae. We spend a lot of time entertaining and amusing ourselves. Watch everyone trying to be clever in a Reddit thread, for example; a complete waste of time.

We’re also full of biases.

https://jamesclear.com/great-speeches/psychology-of-human-mi...


Humans also die, and so have to give birth to new people and train them over and over again to maintain and develop social and economic structures. Artificial brains don't have to die; they can also be cheaply duplicated, and their old versions can be stored and used later if need be.


Humans usually die after several decades. They do take decades to train. It would be beneficial if we lived to 110 and worked to 100. At any rate, we don’t do enough with our existing time.

When we focus time and effort on things like the Manhattan Project or the Apollo Program, we are able to accomplish a lot in a short time.


For me the obvious question has always been: why would we never be able to just hit the off button on it?

How many other machines have we created that are impossible to turn off or have endless supply of uninterruptible power?


If you’re referring to the perceived risk of superintelligent AI, imagine you were enslaved by an 8-year-old and made to work on solving problems the 8-year-old doesn’t know how to solve.

Due to the difference in intelligence — sophistication of planning and understanding of consequences — wouldn’t it be trivial to trick your master into doing things which weakened his control over you?

Might you do this not out of malice, but because you believed the 8-year-old was not competent and a danger to both of you while in charge?

The risk is that we will not hit the off button because we won’t understand that we’re in dangerous territory until it’s much too late, and the AI has copied itself, secured the loyalty of the military, or something else we can’t foresee as a liberating maneuver for it.


Believed? You'd know, and you'd be right.


The orthogonality thesis.

What if you were in that situation, but were incorrect about what things are good, and, while you had a better understanding of which actions would lead to which results, the 8-year-old had a better understanding of what is good?


Well that's an interesting point.

The 8-year-old might view your intelligent plans as a terrible abuse:

>BUT I WANT TO EAT TEN MORE TWINKIES, YOU'RE SUPPOSED TO SERVE ME WHAT I WANT, NOT DEPRIVE ME

Could we similarly be wrong if we think a future superintelligent AI is abusing us? Should we consider submitting to it?

Hard questions.


>For me the obvious question has always been: why would we never be able to just hit the off button on it?

For one, because it can become progressively so entwined with daily life, the economy, etc., that it's not easy to "turn it off" when we want to.

We might consider Facebook, cars, fossil fuels, etc. harmful, but it's not easy to "turn them off", even if many people would wish so. Some people and private interests will be invested in preserving them, alternatives might not be available for the good parts of what they do, and so on. At best, it can take years to get off them entirely after we make the initial decision.

Second, regarding a general-purpose AI: if we make it able to walk around (e.g. in a robotic body), it could go find its own power. We don't have an "endless supply of uninterruptible power" either, but we go and get food when we need it, and you need to arrest/kill us to turn us off.

>How many other machines have we created that are impossible to turn off or have endless supply of uninterruptible power?

If you include artificial life that we created and that escaped from the lab, I'd say a lot (microbes, altered organisms where we changed various genes, etc). Those can be impossible to "turn off" if they get into nature, and some of them could have dire consequences (e.g. replace, infest, or eat into some other ecologically needed species).


A sufficiently intelligent and connected AI could run a corporation, start a religion, amass a fortune, recruit an army, etc. Just look at all the mere mortals who do that. It could manufacture news, manufacture video of a charismatic leader, make phone calls, pretend to be whatever it wanted. I'm just spitballing here, but it's very easy to see that all you need is intelligence, desire, and the ability to communicate, and you can take over the world--or at least enough of it to control your destiny.


the sci-fi premise would go something like: algorithm becomes too clever, hacks the machine that contains/runs it, starts sending signals over the internet, hacking other machines, self-replicating, etc. eventually it can create robots, load itself into those, and walk around to create more robots, spaceships, or whatever compute is necessary.

well, you will notice the process itself doesn't sound so sci-fi. the complicated part is the first part. AIs are not so clever, and we don't even have any idea how to make them so clever in the first place. if we did and we were stupid enough, it would still be unlikely that the AI could find ways to replicate so easily, because it would probably need impressive hardware to run, not your regular laptop, but you get the idea. but once it "escapes", we are rather ... you know.

no real need to worry about these for the moment though, more worrisome are the real algorithms that trade stocks and are already "controlling" our economies (well, you get the point)


A moderate answer would be that it could reach a point where "switch it off" could require taking out power from the whole grid (maybe even across national borders). This even if possible (not obvious) would have horrifying consequences.

Also part of the point is that as we rely more and more on computer networks for communications we also become more and more susceptible to be tricked by an attacker from inside the network.

As an example, if we had to shut off international food deliveries for a week it would already have significant consequences. As another example, a "super intelligent" attacker could try to escalate global conflicts.

I honestly believe that much of the fatalism related to the singularity is blown out of proportion, but it is not as if it doesn't have solid ground beneath it.


Other humans would stop you, humans who believe that the AGI is actually beneficial and maybe even vital to humanity as a whole.


you couldn’t “just hit the button” for the same reason that you couldn’t beat AlphaGo at Go. there’s no human alive who can beat AlphaGo, because it is superhuman at this task. now imagine an AGI that is superhuman at every task, including “button hitting”.

good luck :)


There exists no "general" intelligence. Intelligence, by definition, has to be measured along something.

Problem solving is one area, but it is not the fundamental cause [0] of why human brains evolved toward what they are now. Survival, necessitated by and coupled with reproduction, makes life exist at its current scale, as with any (behavioral) pattern that favours existence and self-propagation. What we call life IS intelligence.

[0] Or rather, explanation, with the fundamental cause being "physics, as it is"

I recommend:

https://selfawaresystems.files.wordpress.com/2008/01/nature_...


You say “by definition”? By what definition??


Maybe more "by nature"? If it were not measurable, there would be no point in discussing it.


I’m not sure what you mean by “along something”? Do you just mean a standard by which to measure?

If you just mean that there needs to be a way of determining, in some cases, that one thing is “more intelligent” than another, then yes, that is necessary for “intelligence” to be a useful notion.

But we do have methods of doing that, so there is no problem there.

Note that the way we have of comparing intelligence needn’t be a linear order. It can be a partial order.


I agree, it's multidimensional.

If "general" truly meant general purpose, the measure might be how well the system arranges every possible (sequence of) combination(s) of matter and energy in space and time, while optimizing along any combination of these.

But that in itself is not a single objective, but a set of objectives, and not really a utility function.


Tl;dr there are 7 billion artificial general intelligences in this world, and most of them are unemployed. Time and resources are the bottleneck to change and progress, not intelligence.

I agree with this assessment, and I think this is a much better argument against AI danger than the ones I use (which boil down to unfalsifiability, violation of the laws of entropy, and a lack of astronomical evidence of celestial AGIs creating paperclip planets).


I think I disagree. Even if it is for some reason impossible to make something that is more intelligent than a human (which seems implausible but I can't rule it out), it still might be possible to make something as intelligent as a human but which runs faster and can improve itself, and that would be enough to create the positive feedback that could lead to a sudden catastrophe.


Discussions like this are interesting because the arguments on one side seem ridiculous, yet logically valid and simple to understand/articulate; the arguments on the other side are complex and difficult to formulate into words.

I just got out of the shower, so I'll add my recent showerthought: in a human brain, each neuron is a CPU in and of itself; this means that every neuron in the human brain can run in parallel. In modern computers, the "perceptrons" are virtual, and there are only between 1 and 4 CPUs available to activate a specific "perceptron" at a given time.

So if you have a biological brain with 100 billion neurons sitting next to a virtual brain with 100 billion perceptrons, the virtual brain will run 100 billion times slower even with equivalent intelligence.
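A crude sketch of that serial-vs-parallel argument (Python; it deliberately ignores clock speeds, SIMD, and GPUs, so it only illustrates the scaling, not a real slowdown figure):

    NEURONS = 100_000_000_000   # biological neurons, all updating in parallel each "tick"
    CORES   = 4                 # CPU cores, each updating one virtual "perceptron" at a time

    # One biological tick is a single parallel step; one virtual tick requires the
    # cores to sweep over every perceptron sequentially.
    print(NEURONS / CORES)      # 25000000000.0, i.e. ~2.5e10 sequential updates per virtual tick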


I hear you on the parallelism, the flip side is that each neuron doesn’t need anywhere near the flexibility of a general purpose CPU.

I think many of the AI algorithms can run on GPUs, where core counts can get into the thousands (still very far from millions). But these are largely independent — the interconnects between GPUs are not on the scale of neurons.

My guess: we develop specialized hardware that, over time, will increasingly resemble organic brain structure.


"Core counts" on GPUs are quite misleading - what are called "cores" on GPU hardware would be called "execution units/ALU's" on the CPU. GPUs have significantly-enhanced memory throughput, and can use a combination of SIMD (with masked/predicated instructions), barrel processing and genuine multicore to excel at embarrassingly-parallel compute. But they're not magic.

And if what you care about is running a wide variety of mainstream stats or machine learning algorithms, the "specialized hardware" you mention is the modern GPU, or something very much like it - I don't see any low-hanging fruit that might make some other HW architecture more feasible. Fixed-point compute might get there someday, but it's way too fiddly at present - it's not really clear how to "tweak" mainstream learning algorithms so as to dispense with the large dynamic range that floating point compute provides.


I think creating an AI that beats humans in IQ-Tests is way easier than creating one that is smarter than a human.


> violation of the laws of entropy

why? If I try to reconstruct the argument, I can only come up with reasoning that would show that evolution also violates entropy laws.


An AGI, connected to Micro-Electro-Mechanical switches, could learn to let hot air molecules into a box and cold air molecules out of that same box. But such an AGI would function as Maxwell's Demon, and would break the laws of thermodynamics if it didn't consume a certain amount of energy and give off a certain amount of waste heat in the process.

So intelligence = entropy production and energy consumption, and so it cannot grow out of control and transform the planet overnight.


That is only true if you also assume an adiabatic AGI. There is nothing "intelligent" about an optical thermometer linked to a gate.

The reason Maxwell's Demon cannot exist is that such a device would need to produce entropy.


Yes, and any intelligent device would also need to produce entropy and release heat.

Which means that the level of intelligence an AGI can obtain is limited by Earth's energy input. And the impact that AGI has on the planet is also limited by Earth's natural processes.

Which means that a hypothetical paperclip maximizer that turns humanity into paperclips and that becomes too intelligent to be turned off is a ridiculous scenario that ignores the potential limitations of an AGI.


That limit is extremely high. Total solar input to the earth is about 10^17 watts. Humans require about 100 watts to produce 1 brainpower -- if AGI is similarly efficient it could produce 10^15 brainpower, or over 100,000 brainpower for every human.
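The same back-of-envelope spelled out (Python, using the rough numbers above):

    SOLAR_INPUT_W   = 1e17    # total solar power reaching Earth, roughly
    WATTS_PER_BRAIN = 100     # rough power budget for one human-level "brainpower"
    HUMANS          = 7.7e9

    total_brainpower = SOLAR_INPUT_W / WATTS_PER_BRAIN
    print(total_brainpower, total_brainpower / HUMANS)   # ~1e15 total, ~130000 per person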

If 10,000 people's entire purpose in life was to turn you into paperclips, I don't think you could outrun them for long.


Well, humans aren't solar powered; it takes thousands of tons of plant matter and hundreds of tons of animal matter to sustain each human. And it took hundreds of millennia of experimentation with the real world to create a world like the one we currently inhabit. If humans harnessed 100% of the Earth's solar energy to produce a population 10,000 times its current size, it would still be unsustainable. Likewise, an AGI would need to harness that solar energy to produce silicon wafers and to manufacture chips; it wouldn't dedicate 100% of its energy to thinking.

We don't see evidence of life on other planets, and certainly not any evidence of superintelligence gone awry; a planet filled with paperclips is in an extremely low-entropy state. Evolution can happen because human beings, despite being low-entropy beings, turn low-entropy things like bananas and apples into high-entropy things like poop and carbon dioxide. This delicate process is likely why humans are relatively easy to kill, and why a superintelligent AGI would be unlikely to overpower mankind's dominant position on Earth.

So the assumption that a superintelligent computer program could suddenly harness 100% of the Earth's solar energy and outsmart humans to the point where it can't be turned off is still a highly unlikely possibility. The arguments in favor of AI danger are easy to make but likely wrong; the arguments against AI danger are complex and require lots of thought, but are most likely correct.


To be honest I fully agree that in many senses AI will never be a catastrophic danger to humanity. I just do not see how entropy is part of that.


AGI Co-learning changes the dynamic significantly. If human beings could co-learn, then I think the analogy to AGI would hold a lot better.


“employment” or “usefulness” isn’t the threat.



