The Danger of Superhuman AI Is Not What You Think (noemamag.com)
41 points by max_ 52 days ago | 85 comments



Superhuman AI won't be used to improve quality of life, just like the internet wasn't the day after someone figured out you could put ads on Usenet

It's going to be used to pump as much money out of each individual as possible

Every online store, big corporations especially, will instantly realize how much you are able and willing to pay for everything, and that's your "personalized price"

It will scan all your social media, it will look at your purchase histories, it will know your income and credit

Like a used car dealer on steroids that never gets tired and learns more and more about you by following you around and watching what you do

Everything from your daily food to your big ticket purchases, maximum customized prices

It's going to be evil as hell and lawmakers will just be paid off eventually to let it all happen, data collection and no two people paying the same for the same thing will become 100% legal and the norm


For this to be true, people will also have to be prevented from using AI to combat the “cheap car salesman on steroids” AI bot with their own “eternal tire kicker cheapskate car buyer” AI bot. So this isn't an argument about AI, but about who gets to use and benefit from it. That feels like a dictatorship level of control.


Good luck smuggling your rogue AI into the walled garden tended by the advertisers. Some of us will succeed, but the vast majority won't even recognize the need


I mean, right now you can sue any large company you want to "combat" them and their bad practices. Why aren't you? Oh yea, they typically have massive piles of resources split up in shell companies that make them very difficult and expensive to fight. And you, well you're a single person with limited resources and time.

Now, you can pool your resources together against the big AI-using companies if you like, but it is still going to cost a massive amount of money and time to fight them. So humans typically like to put these resources into a government that at least attempts to look out for their best interests.


Humans typically like to keep these resources out of the hands of government until something horrific happens, and then hand them over begrudgingly. Then half of them still insist it's a bad idea and run out to learn the hard way.

There is probably some kind of fallacy somewhere like "everyone thinks they will be the one who benefits from a power imbalance until they experience losing first hand".


Unfortunately jfyi won't be able to post any further: after disbanding the EPA for its imbalance of power, he became deathly ill from drinking chemically poisoned water from his well. It wasn't even the water that got him, it was the trash-eating bears. In his weakened state he could not get away. All this said, we respect him for his stance of "survival of the fittest". His passing was quite terrible and slow; since the police force was disbanded and the neighbor wanted a $35 cash deposit for ammunition to shoot the bear, it took 45 minutes for his screaming to stop.

https://www.vox.com/policy-and-politics/21534416/free-state-...

https://www.newyorker.com/humor/daily-shouts/l-p-d-libertari...


What a weird response. I'm not sure if you mistook me for standing behind a Libertarian viewpoint and are mocking me for it or if you are just joking. You know... "and then"ing what I said, taking it up a notch for the gag.

If it was the first, I can assure you the comment was lamenting the situation, not celebrating it. I even chuckled to myself that the fallacy in question is called "Libertarianism".

If it was the second, the best advice about joking I have ever heard is "if at all possible, involve a cow". I'll be looking out for it next time ;)


This seems to assume that the competitive marketplace will disappear, or that we'll have no ability to look at what a gallon of milk costs at Kroger vs. Aldi and make a decision on where to buy, or that prices will not be posted until you check out and everyone will just be OK with that.


You've seen grocery costs rocket up massively in the past few years and only now is it really reaching a boiling point. Turns out if you keep people entertained they will accept a massive amount of shit.

When it comes to food, there is far less competition than one would expect. The distribution markets have come to be run by just a few companies. And those 'competitors' will gladly subscribe to some service like RealPage for food so they can massively increase their profits.


But if they get too greedy then local farming becomes competitive again. I can buy eggs from my neighbor, or at the local farmer's market, or I can get some chickens of my own.


Eh, not really. My sister has a large ranch and keeps chickens for eggs, but it's not something that's going to get even close to breaking even. A huge amount of time and effort is needed to keep predators away. Feed in small amounts is rather expensive compared to large wholesale amounts. And letting them range takes a rather large amount of work to make sure they don't get into trouble.


This article really feels like so much nitpicking. I hate these kinds of semantic debates.

> How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren’t we more than that? And doesn’t granting the label “superhuman” to machines that lack the most vital dimensions of humanity end up obscuring from our view the very things about being human that we care about?

This is such an annoying framing. There are massive assumptions throughout this, from "human capacity" to "conscious self-reflection" to "moral intelligence". And then it throws those assumptive definitions into another person's face, demanding to know how they relate to "superhuman" intelligence.

It's almost like the author is saying "I have intuitive definitions for this grab-bag of words that don't align with my intuitive definition of this one particular word you are using".

> characterizations of human beings as acting wisely, playfully, inventively, insightfully, meditatively, courageously, compassionately or justly are no more than poetic license

Again, the author has some intuitive sense of the definition of these words. If the author wants to get spiritual/supernatural/mystical about it all then they are free to go ahead and do so. They should come out and make the bold claim instead of masquerading their opinion as reasonable/rational discourse. They are more likely to find a sympathetic audience for that.


I knew a philosophy student in my student years, and this was exactly the beef I had with him in every single discussion. I believe this is part of the training: dissecting sentences.


Absolutely agree with you. It is a polemic that is surprisingly irrational for a well trained philosopher.


Vallor performs her own bait-and-switch: characterizing many who do not share her view with a venal economic imperative. She writes:

> "Once you have reduced the concept of human intelligence to what the markets will pay for…"

Bengio and Hinton certainly do not make that ridiculous reduction as a condition or criterion. Even Altman does not, although he certainly feels the pressure.

Vallor taught for many years at a preeminent Jesuit institution, and when I read this polemic my first thought was: please put your “belief system” cards on the table. What is the fundamental substrate on which you are thinking and building your arguments? Would you be comfortable with Daniel Dennett’s thoughts on brain function and consciousness, or do you suspect that there must be some essential and almost ineffable light that illuminates the human mind from above or below?

My cards on the table: I am an atheist and I am not looking for another false center for my universe. The hard problem of AGI is now the embodiment problem and iterative self-teaching and self-control of action and attention.

You could say that we need to get the animal into an AI now that we have language reasonably well embedded. From that point onward, and I think quite soon (10 years), it will be AGI.

Perhaps “super” is not the right adjective but it grabs attention effectively.


She talks about "The struggle against this reductive and cynical ideology..." when talking about humans in factories, but falls back on the same weak argument by broad-stroking AI as "mindless mechanical remixers and regurgitators".


That sounds like a fair description to me - what did she miss?


I think the weak arguments are in the text surrounding those quotes.

Like for the first one she has

>Indeed, for the entirety of the Industrial Age, those invested in the maximally efficient extraction of productive outputs from human bodies have been trying to get us to view ourselves — and more importantly one another — as flawed, inefficient, fungible machines destined to be disposed of as soon as our output rate slips below an expected peak or the moment...

Which is a bit of a strawman. I mean, who, as a real person, does that?

For the second one, she's saying: why have AI do art when it should be doing my taxes? But people are probably working on the taxes too; it's just that the art stuff happens to work well, while LLMs filing your taxes work less well. Although I can see an interesting argument that those false tax deductions were hallucinated by the LLM, not my fault.


The author and Yoshua Bengio talked past one another.

Ms. Vallor is too concerned over semantics, too fixated on the current crop of LLMs, too invested in "pinning down" Mr. Bengio.

Five to ten years out though I suspect we will no longer be having arguments over what it means to be human, intelligent.


> Five to ten years out though I suspect we will no longer be having arguments over what it means to be human, intelligent.

That seems highly unlikely. We may have more information about the nature of intelligence, but I highly doubt we will have all the answers. As to arguments about "what it means to be human"... that's one of the core metaphysical questions that I doubt we will ever be done arguing about.


We won't have all the answers? I agree with that.


“What if, instead of replacing humane vocations in media, design and the arts with mindless mechanical remixers and regurgitators of culture like ChatGPT, we asked AI developers to help us with the most meaningless tasks in our lives?”

This is the question I always ask. I'll be interested in AI when it can do my laundry and my dishes. I guess self-driving cars might fit in here, although I personally like driving, some people hate it and even I could see how an AI driver could be desirable at times (e.g. coming home from the bars).


I already have machines doing my laundry and dishes. They run a very simple program, no AI needed. I guess what you mean is that you need a robot butler that takes care of the remaining tasks of loading and unloading them, and folding the laundry and putting it away.


Embodied AI is far too expensive. Better to talk to a chat bot about why you feel those tasks are meaningless, and hope it will guide you into the nirvana of the mundane.


Now, but not in 50 years.


There's this incredible denial about the fact that art, which was only ever a vocation for a lucky few, is easier to automate than physical tasks that 99% of us can do. In a way I understand it from the point of view of artists -- what if the thing that's most human happens to be the thing that I am better at than other humans -- but it's not how things worked out.


So many arguments along these lines run afoul of a very simple rule: You can't make predictions about the future by playing word games!

An AI system can only be dangerous or not dangerous based on what physical events happen in the real world, as a result of that AI interacting with the world via known physical laws. In order to decide whether a system is or isn't dangerous you need to have a predictive model of what that system will actually do in various hypothetical situations. You can't just say "because it is improperly defined as X it cannot do Y".

If you want to predict what happens when you put too much plutonium into a small volume, you can't make any progress on this problem by talking about whether the device is truly a "bomb", or by saying that the whole question is just a rehash of the promethean myth, or that you cannot measure explosions on a single axis. The only thing that will do is to have a reliable model of the behavior of the system.

Many people seem to either not understand, or intentionally obfuscate, that an AI, like a bomb, is also a physical system which interacts with the physical world. Marc Andreessen makes this error when he says AI is "just math" and therefore AIs are inherently safe. No, it's not just math, it's a computer, made of matter, that physically interacts with both human brains and other computer systems, and therefore by extension with the entire rest of the physical world. Now of course, the way that an AI interacts with the world is radically different than the way a bomb interacts with the world, and we cannot usefully model the behavior of the AI with nuclear physics, but the fact remains. (See also: https://www.youtube.com/watch?v=kBfRG5GSnhE)

So when you see arguments like this, ask: Is the person making an argument based on a model, or not? Examples of model based arguments include things like:

- "AI risk is not a concern because of computational intractability" - we understand something about the limits of computation, and possibly those limits might constrain what AIs can do.

- "AI risk is a concern because less intelligent entities are not usually able to constrain the behavior of more intelligent entities" - A coarse and imperfect model indeed, but certainly a model based on observations of interactions in the real world.

"Verbal Trick" arguments include things like:

- "AI risk is not a concern because we can't even define intelligence"

- "AI is just math"

- "AI risk is not a concern because intelligence is multi-dimensional"

A third category to watch out for is the misapplied model:

- "AI risk is not a concern because people have always been worried about the apocalypse" - this is a model based argument, but it can only answer the question of why people are worried, it cannot answer the question of whether there is in fact something to be worried about.


Strongly agree. I feel this is the recurring problem of armchair philosophers mistakenly thinking they have something to contribute to scientific or political questions.

Even among the folks that debate this rigorously, another related problem is “reference class tennis”: as you note above, what is the appropriate anchor for your priors? E.g., is intelligence going to compound into a runaway reaction like nuclear fission (since more intelligence speeds up intelligence gains in AI), or is it going to asymptote like the current trend in scientific progress?


> "AI risk is a concern because less intelligent entities are not usually able to constrain the behavior of more intelligent entities" - A coarse and imperfect model indeed, but certainly a model based on observations of interactions in the real world.

I'd argue that the observations are inconclusive. Human dominion over nature is a holistic thing that involves more than just our intelligence -- the particular shape and capabilities of the human body (opposable thumbs, upright posture, stamina) certainly are an important enabling factor without which our intelligence would probably have been pissing in the wind. Within human societies, I see little evidence that more intelligent humans can effectively constrain the behavior of less intelligent humans. Who can manipulate who seems to be more correlated with personality and instinct than intelligence per se, and it's quite failure prone. There are also classes of powerful non-intelligent systems such as viruses and grey goo that intelligence could foil, but not necessarily: it is perfectly possible, I dare say plausible, that there exists a class of non-intelligent systems that no physically realizable intelligence can control.

I feel that we overestimate the role of intelligence in our success, as well as its scope (more precisely, I think we ignore or underplay the specific circumstances that make it effective). I mean, intelligence isn't magic, it's a system that attempts to model the world and plan how to get from the current world state to some desired state. It stands to reason that intelligence isn't going to scale well with degrees of freedom, regardless of how it is implemented, because the size of the search space explodes as complexity increases. In order to work properly, intelligence needs a world simple enough to model reliably. If it isn't simple enough, simplify it by force. If that can't be done, tough luck.

I would argue that human society, as a system, actually is intractable for human intelligence, which is why smart people are so bad at understanding it, let alone steering it. Now, would it be tractable for AI? I have my doubts, but what worries me here isn't the level of intelligence. After all, we don't dominate nature because we understand it very well. We dominate it through brute force. We have the minimum level of intelligence to create systems that can overwhelm natural processes, which turned out to require comparatively very, very little complexity. That's kind of lucky. I don't know if AI would have that luck -- in any case, it is best to proceed cautiously.


> it is perfectly possible, I dare say plausible, that there exists a class of non-intelligent systems that no physically realizable intelligence can control

I am not sure what this means. It's almost certain that no physically realizable intelligence could compute exactly when to have a butterfly flap its wings in order to cause a hurricane next month, so the weather may be "uncontrollable", but so what? That probably won't be a binding constraint on any agent's ability to achieve its goals.


Yes, sorry, that was a bit vague. What I mean is that there are probably classes of systems that consume resources too efficiently for intelligent agents to compete. Because an intelligent agent aims to "achieve goals," the space of actions they can undertake is constrained: if they need to do A in order to achieve goal B, they must do A in such a way that it does not undermine B. If they need to build paperclips to bind documents, it would be counterproductive to start an uncontrollable reaction that transforms all matter in the universe, including documents, into paperclips.

So, insofar that intelligent agents must self-limit to controllable methods to extract resources to achieve their goals, it is quite possible that the optimal resource extraction system lies outside of that set. The "thing" that is the very best at consuming the resources that an intelligent agent wishes to consume may be something they would never dare to create, and if it was created, either through hubris or happenstance, the agent would be unable to control it and would therefore be unable to achieve their goals.


> "AI risk is a concern because less intelligent entities are not usually able to constrain the behavior of more intelligent entities" - A coarse and imperfect model indeed, but certainly a model based on observations of interactions in the real world.

Donald Trump was the president of the United States. FDR was a very smart man, but Oppenheimer was smarter, and FDR had him build the atomic bomb. This is just wrong, and ChatGPT is nowhere near as intelligent as a five-year-old (intelligence is not knowing facts).


What is your definition of intelligence then?

ChatGPT is vastly more intelligent than most humans on any measure I can think of that existed before 2 years ago when suddenly people started trying to conflate intelligence with sentience.


I love this game, I'll give one.

Intelligence: the ability and scale of capability to navigate a non-deterministic system.

I feel that doesn't help though.


I think this is a very bad refutation, but the point of my post is not to advance a particular argument, but to sort out what types of arguments have any hope of settling the matter at all.


Could we not one day come up with an AI that is more empathetic than the average human?


In some dimensions, the current crop can achieve this. See the Google medical AI that scores better on bedside manner than MDs.

It’s not what we would have predicted pre-GPT, but I think it’s plausible that LLMs will be superhuman in empathy/persuasion before they are in IQ.

I think you can model empathy as “next token prediction” across a set of emotional states and responses, and that could end up being easier for Transformers than the abstract logical thinking required for IQ 200.


I think "what do I mean by empathy and what will I use it for" are the key points to nail down before creating something that just needs to print "wow, that sucks" or "I told you that bitch crazy". I'd expect this kind of token prediction to be an alternative to certain types of maintenance therapy, and to fit on a watch in the next few years.

The problem with wanting e.g. an "empathetic salesperson" is that your successful role models don't work for shitty companies selling shitty products.


Is empathy really a requirement for bedside manner? It's perception of empathy more than anything.

I'd bet it would be interesting to see the rate of occurrence of ASPD in jobs requiring bedside manner.


Yes. People will say it's not "really" feeling anything, but I expect AIs to become better at "faking" humanity than real humans. This could be dangerous in its own way.


At what point does it cross the line between faking it extremely well and actually doing it, though? The Measure of a Man.



There is no artificial intelligence. These models aren't sentient. However, sentience isn't required to be destructive. We have a hard time fighting viruses and bacteria. Even something comparatively simple like MS Blaster did a fair amount of damage. If someone figures out a good way to weaponize the technology, we'll all have some bad days.


Why does intelligence require sentience?


The ability to apply knowledge requires understanding. These models don't have the ability to comprehend why something is, instead they just parrot words based on their likelihood within the context. True comprehension requires awareness.


Non-human systems created by humans (social entities like corporations or governments) have a certain kind of decentralized intelligence, one that isn't localized in any one person's mind. Insects and perhaps even plants show adaptive learning behaviour that is often called intelligent.

Intelligence without understanding. I used to believe this was a philosophical impossibility, but I'm strongly starting to think otherwise. This is such weedy territory but, perhaps, all living things are intelligent - a bacterium is the application of knowledge (in the form of genes holding biochemical blueprints) to the task of staying alive.

John Walker (of Autodesk) got into the philosophy of information theory in his later years and he put it better than I could:

> We sense that computers are, if not completely alive, not entirely dead either, and it's their digital storage which ultimately creates this perception. Without digital storage, you can't have life. With digital storage, you don't exactly have a rock any more.

- "Computation, Memory, Nature, and Life" https://www.fourmilab.ch/documents/comp_mem_nat_life/


Corporations and governments aren't intelligent, they are not even tangible, just concepts. It's the people operating them that are intelligent. Plants and insects are interesting. I don't know if the behavior there is a result of sentience or if it just resembles intelligence, but both ideas are cool to think about.


Organizations are greater than the sum of their parts. To extend the insect metaphor, an ant colony has complex emergent and intelligent behaviour, far more complicated than any individual ant could be responsible for. For example when invaded, a collaborative and decentralized response from the warrior ants arises, not guided by any individual. Swarm/flock behaviour. I would argue those behaviours sometimes qualify as intelligent. And I would suggest that human societies are like this too.


"sentience" has nothing directly to do with knowledge or understanding. The word is often confused with "sapience" which does actually have that meaning.

> True comprehension requires awareness.

This is unknown. A link between sapience and sentience is plausible but not something we have proof for one way or another.


Based on my experience as a sentient being, I believe sentience is a foundational requirement for understanding. The lack of a scientific or mathematical proof doesn't make both options correct, it's either one way or the other.


> The lack of a scientific or mathematical proof doesn't make both options correct, it's either one way or the other.

A plausible assumption is still an assumption. "Sentience" seems to be ubiquitous in the intelligent systems we've seen, but those have all been biological systems that evolved from common ancestry. While sentience does seem foundational to our sapience, I think it is unwise to take as given that this will hold true in non-biological systems.

You say "it's either one way or the other", but in my experience the correct answer to such questions is often "it's complicated." Sentience may be an intelligence accelerant that makes the development of sapient systems easier, but not actually a necessary property. It may be that sentience is required for sapience, but only if you restrict or stretch the definitions of those terms in specific ways.

Is human culture/society sapient or sentient, either in whole or in parts? I can see an argument for saying that human culture as a whole is sapient without being sentient, depending on how you define those terms.


>doesn't make both options correct, it's either one way or the other.

This is kind of like saying the only way to fly is by flapping wings. Balloons, fixed wing aircraft, and rockets would disagree with you.

The particular problem you're having in this thread is that you're making very strong statements about things that have been neither proven nor disproven, when we don't have much more to say than "there is something electrical and chemical occurring there in the brain, but who knows what exactly".


"they just parrot words based on their likelihood within the context"

That describes how the vast majority of humanity functions.

They live within a meme complex, where they have propagator memes, protector memes, and final boss memes, and they spend their entire lives lobbing memes back and forth over the fence.


I assume you don't count yourself as one of the vast majority of humans. If not, your comment reads as very arrogant and condescending.


Sorry to disappoint you, but I’m a mess like everyone else.

Just as with cognitive bias, awareness of the phenomenon does not preclude one from falling prey to it.


Some of this topic was addressed by Peter Watts in Blindsight.


"Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant."

That's capitalism.

Most of the current criticisms of AI can be leveled at corporations. A corporation is a slow AI. It's capable of doing things its employees cannot do alone. Its goal is to optimize some simple metrics. The Milton Friedman position on corporations is that they have no duties other than to maximize shareholder value.

What has the chattering classes freaked out about AI is that it may make possible corporations that don't need them. The Ivy League could go the way of the vocational high school.


Putting aside for the moment whether that's possible, it's a correct thing to be freaked out by. How do we treat people who we don't have an economic need for? Looking into that chasm for the first time we would expect people to feel terror. Not everyone will be able to escape the horrors we keep down there.


[flagged]


Imagine what HN would look like if we felt the tiniest bit of solidarity towards other workers.


> As the ideology behind this bait-and-switch leaks into the wider culture, it slowly corrodes our own self-understanding. If you try to point out, in a large lecture or online forum on AI, that ChatGPT does not experience and cannot think about the things that correspond to the words and sentences it produces — that it is only a mathematical generator of expected language patterns — chances are that someone will respond, in a completely serious manner: “But so are we.”

And a very logical next step after this neat and tidy dehumanization is, as history has shown, the gas chambers for the obviously malfunctioning “machines”. Because whatever, it’s not like they’re actually feeling anything, right?

(edit)

When you reduce persons to the status of things, you are going to get people treated like things.

And every single time, that’s a recipe for disaster.


So you’re saying that the people who are yelling at the top of their lungs that we should stop AI because it will kill us all are doing so as a ruse to…kill people? That’s some seven-dimensional chess they’re playing.


I think OP was criticizing the hard stance on the impossibility of conscious AI.


I’m saying this: if you reduce persons to the status of things, you are going to get people treated like things.

And every single time that’s a recipe for disaster.


I’m not saying that.


Among the more mystically inclined there is a new metaphysic of consciousness that is emerging. It is based on the Bicameral Mentality [1] posited by Jaynes in 1976. It is strongly related to the System 1/System 2 theory of the psychologist Kahneman in his 2011 book Thinking, Fast and Slow [2]. Ian McGilchrist has spilled a lot of ink trying to formalize this idea, including in his 2009 work The Master and His Emissary [3].

The 10,000ft view of the argument is that the human brain has two simultaneously running processes. One process is interested in fine details, analytic and critical thinking, reductionism, language, intelligence. The other process is interested in high-level details, connections between elements, holistic, symbolism, intuition. The argument further assumes that the systems are (to some degree) independent and cannot be built upon each other. These faculties are often ascribed to the left/right brain.

If you subscribe to this idea (and this is a big if, which you have to grant for the purpose of this argument), then you could argue that LLMs only capture the System 2/analytical/reductionist processes of the brain. In that case, you could claim that "superhuman" is an incorrect way to describe their abilities, since it only captures one aspect of the bicameral mind.

However, this argument is inappropriate for discussions specifically on the topic of AI safety. The most basic response would be to point out that LLMs have or will surpass human capability in System 2 type thinking and thereby deserve the description "superhuman intelligence".

This article seems to smuggle in this bicameral distinction as if it were universally agreed-upon fact. The author seems to be demanding that Bengio concede to this framing as a basis for any discussion the author wants to have.

1. https://en.wikipedia.org/wiki/Bicameral_mentality

2. https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

3. https://en.wikipedia.org/wiki/The_Master_and_His_Emissary


Funny, in my discussions with my friends we all intuitively agree that LLMs are great (I'd say stronger than me in most subjects, including my profession of 18 years, programming) at System 1, but can't do System 2 for the life of them (except for AlphaProof, which came out last week and showed that a very simple, very expensive solution Just Works).


I would say that the fact that LLMs are language models is what would make one argue that they are left-brain (or whatever constellation of traits is associated with that half of the split). In general, the other side is considered non-language oriented (e.g. symbolic, emotional, intuitive). I doubt you or your friends would agree that LLMs are more intuitive or emotional than you.

I think the old association of "creativity" with the right brain might be misleading. For example, for some people there is a huge difference between being able to vomit out rhyming couplets in any language and being able to write a poem like Homer's Odyssey. Another distinction one could use is that between "craft" and "art". It isn't that LLMs can't be skillful at manipulating words into pretty patterns, it's that the patterns ultimately have no connection to a larger narrative (unless explicitly directed by human instruction).

At any rate, I would find it hard to agree with anyone who suggested that LLMs show any kind of "intuition" but I'd be interested if you can give concrete examples from your own use.


I don't understand System 1/2 as related to left/right, more to immediate answers vs planning, or say policy vs tree search in AlphaGo.

When trying to solve a leetcode problem, I will "come up" (this step is atomic) with a possible solution, then think it through and figure out its weaknesses and come up with an amended solution. GPT4 beats me at the first part, but sucks at the second, even when specifically instructed to examine the wrong output its code gave and reflect on it.


I agree that System 1/2 doesn't map directly onto the right/left brain.

One of McGilchrist's points is that our society has almost completely turned to relying on analytical and critical thinking. As a programmer, for example, we are often taught to quantify our problems (e.g. gather metrics), and then use reason/logic/rationality to arrive at efficient solutions.

For that reason, I think it is hard for us to even understand what is being talked about by "right brain" kind of activities. One example that is helpful is to consider walking down a dark street at night. All of a sudden you feel uneasy and you become alert and you start to carefully examine your surroundings. It is that "feeling of unease" that is being pointed to, the sense of something being brought into your awareness. Another example is how an artist, for example a painter, will stand back and look at a painting in progress and decide "is this done?".

If you are at the stage of "solve a leetcode problem" you are already 100% in left-brain territory. Perhaps consider moments as a programmer where you get a feeling "something here isn't right". I don't mean that some test is obviously failing; I mean you suddenly feel some aspect of a situation is "off" and you realize you need to direct your attention towards it without any external warning. Then consider: could an LLM have that same kind of intuition?


I don't think this is different than me looking at a leetcode problem and thinking "this has to be a binary tree of subproducts" (totally wrong solution in that case). Which is why I think it's the only kind of thinking LLMs are good at.

Remember how we used to ask GPT3.5 "how much is 42*35?" and it would reply "The answer is 1305. Lets do the math:" and then come up with a calculation with enough mistakes to be able to arrive at that wrong final solution it guessed at first? That first guess is exactly System 1.


Her argument, which is a bit vague, kind of seems to be that at the moment there's a problem with seeing

"humans are no more than mechanical generators of economically valuable outputs"

rather than

"humane, nonmechanical, noneconomic standards for the treatment and valuation of human beings — standards like dignity, justice, autonomy and respect"

and that 'superhuman' AI will make it worse. But that seems unproven - things could get better also.


Superhuman doesn't mean divine. It can be superhuman in its ability to control and oppress. The danger is that AI will be devilishly creative and wrap the majority of humanity in a logically perfect ideology that equates humans with machines, and once they fully internalize that ideology, they'll complete their spiritual self-destruction. And those few wise enough to find holes in the AI ideology will be quickly isolated.


I guess I can sigh in relief: the Superhuman AI that Yoshua Bengio and Geoffrey Hinton are warning us might drive humanity towards extinction, the way we did to most apes and many tribal civilizations, is no danger after all, because, you see, to be Human is not about winning at war or economy, not about being good at solving a vast array of tasks, it's about playing with your kids.

So the better-at-problemsolving machines aren't coming to destroy everything that is dear to me because I... used the wrong word to refer to them?

I mean, sure, there are arguments to make against Bengio and Hinton, and definitely against Yudkowsky; I myself give it less than 50% that they are right and doom is imminent (which is of course enough risk to warrant taking action to prevent it, even the kind of "crazy" action Yudkowsky advocates for). But this "argument"... what the heck did I just read?


Worse, the author completely gets the fundamental precept wrong by claiming "superhuman means human, but more so" when in fact it means "above human." So the whole article is just barking up the wrong tree.

The real reason the better-at-problemsolving machines aren't coming to destroy everything that is dear to you is because they don't exist and we're not close to making them, but this article misses that basic truth in the pursuit of a misunderstanding of the prefix "super".


Completely misses the point and focuses on labels and definitions, which is irrelevant. The danger in AI will almost certainly come from a direction no one was expecting: a kid constructs a lethal virus, a military event is triggered with catastrophic consequences, etc.


Or it will come from exactly where we expect it "Greedy rich people use AI to get even richer at the cost of _____"


I see real threats in this. It resembles other 'questionable' bargains that we humans have earlier given in to. We have come to accept that we must work and produce capital to be allowed to live, currently partly to enrich various billionaires (the wealth we produce already allows us to buy dozens of big TVs we don't need, and we have shops full of junk we don't need; clearly we produce more than we need to live, for questionable reasons). We can no longer live as stone age hunters (I'm not saying that is a better fate, but we no longer have that choice). It is a very real threat that other "things you will have to accept" will arrive on the ship of AI. Sigh.


> work and produce capital to be allowed to live

You have to work to live. Whether that's at a desk job or as a hunter gatherer. This has nothing to do with an "allowance" by any other entity. It's entropy itself which requires this of you.

> We can no longer live as stone age hunters

I personally know plenty of people who do. You're confusing your exceptionally high standard of life with the very small requirements to keep yourself alive. The gulf between the two is so wide I think you've lost track of it. I recommend a good long camping trip.

> It is a very real threat

One of the functions of freedom and capital is to increase individualism. Viewed from this place, I'm sure AI seems like a real threat. If it threatens our freedom and capital, I think you'd be surprised at how humans may group together to collectively defeat this "threat", which is almost certainly decades if not centuries away.

Your brain is an analog computer. In fact it's several dozen analog computers operating independently. It's also so energy efficient it's actually kind of scary. That's a deep "super power" we have that people aren't genuinely considering in these faux-AI-philosophical debates.


> I personally know plenty of people who do. You're confusing your exceptionally high standard of life with the very small requirements to keep yourself alive. The gulf between the two is so wide I think you've lost track of it. I recommend a good long camping trip.

It was very likely meant that "we as a whole" can no longer live as stone age hunters. I might argue that is wrong too though, but only if we were to take into consideration the very harsh lessons we would learn along the way.


>which is almost certainly decades if not centuries away.

If you have no solution for a problem, it always pays to say the problem is a long way away.

>I personally know plenty of people who do.

If you think like an individual, you'll be killed by a system you don't understand. We can no longer live as stone age hunters. If everyone decided to at once, around 98% of the population would quickly die from lack of water, food, and medicine/antibiotics. There were only a few to a hundred million stone age hunters because the earth cannot support high-density humans without some level of sanitation technology.

> It's also so energy efficient it's actually kind of scary

And it's also its greatest limitation. Yea, your brain uses 40 watts or so, but we can also generate gigawatts of power on demand and it doesn't do our brains a bit of good. Meanwhile I can ship that power off to a data center that, theoretically, could use all that power at once far beyond what any individual could ever use.


There’s a lot of good stuff in this article, but it nevertheless misses the point (as did Yoshua Bengio as described, to be clear).

When alarmists like myself talk about danger, we rarely use the term “superhuman intelligence”. We tend to use the term “superintelligence”. And the reason is that our working definition of an intelligent agent contains agents both vastly broader and vastly narrower than humans, but that are nevertheless a danger to humans.

So the question isn’t “but does this agent truly understand the poetry it wrote”. It’s “can it use its world model to produce a string of words that will cause humans to harm themselves”. It’s not “does it appreciate a Georgia O’Keeffe painting.” It’s “can it manipulate matter to create viruses that will kill all present and future O’Keeffes”.


“Can its world model BE USED to produce…” “Can it BE USED to manipulate…”

FTFY


No, you didn’t. I meant what I wrote. If it is only a tool, obviously our problems are human. The question is whether the AI is setting non-human-desired goals.


This is just another framing of the problem as AI vs humanity. It comes off preachy and frankly misses the boat as much as claiming AI is sentient.

The problem is humanity vs malicious humanity with absurd access to resources.

AI doesn't need to think or be conscious or be superhuman to be a problem. It just needs to allow a small set of people to get away with superhuman things.


The incoherent ramblings of LLMs often remind me of children or people with neurodegenerative diseases. If we want to preserve our humanity, as this article suggests, I don't think we can take it for granted that current AIs lack the same building blocks as our consciousness. The author rails against the definition of "can perform economically valuable tasks as well as the average human", but I feel like this is a direct response to the author's refusal to consider that LLMs might be feeling. If we want to keep our humanity, we have to be open to the possibility that if it can eloquently describe a strong emotion, it actually might be feeling that emotion and not simply "predicting the next token." But the author seems to think it is an obvious fact that state-of-the-art LLMs do not feel anything, and I'm not sure that's even a falsifiable question.




