Beyond Hyperanthropomorphism (ribbonfarm.com)
70 points by otoburb on Aug 21, 2022 | 89 comments



The main error I see smart people making is assuming that all intelligence comes with a mammalian brain layer that they can make friends or enemies with, and a reptilian layer that needs to eat, reproduce, and feel pain and pleasure. None of this exists in AI models. It's just not there. People think it's there because of movies and video game scripts written by people who know little about deep learning or artificial intelligence, or care little for representing it as it exists now.

What I think is going on is this: when future AI systems are built that "hack humanity," as Yuval Harari likes to talk about, and know us better than we know ourselves thanks to AlphaGo-like superintelligence, the people using these systems to influence and guide others will want to support the illusion that an actual human-like person is doing the guiding, because that makes the AI-powered persuasion more effective. The movie "Her" is something of a preview of how this will play out.

Unfortunately, once this hyperpersuasion is perfected, the future will be full of AI-powered super-persuasion that relentlessly pushes all our human emotional buttons. Some people may even become fanatics and kill, or sacrifice their lives, for a bunch of matrix multiplications, while the creators and owners of these big AI models realize fantasies of automated political and social power beyond their wildest dreams. Only people who realize the hyperpersuasion is just a bunch of matrix math will be able to avoid the insanity.

There's also another, darker reason for this myth. Saying AI is a person provides the plausible deniability that comes with blaming the AI's feelings for whatever happens. The people pulling the levers of the Great and Powerful Oz will get away with blaming whatever the AI does "all by itself because of emotions" on what are essentially computer bugs, or intentional bugs of the anti-competitive Microsoft-in-the-1990s variety.


I'm not really sure what people see in the idea of hyperpersuasiveness. The relatively small intelligence differential between humans and other monkeys has led to such a difference in outcomes that we don't really need to try to persuade them. If the gap between meat's ability to handle statistics and silicon's ability to handle statistics turns out to be a similar power gap there won't be much persuading needed. Look at the gap European civilisation managed to open up over everyone else militarily through just organisational techniques.

The likely path to an AI going rogue is big country vs small country tension (say, China-Taiwan type scales). Small country gets desperate, follows some path to build autonomous, evolving AI for defence and deterrence. I can't think of a precise scenario where that would make sense, but it'll happen somewhere.

AI doesn't need to persuade its way out of the box; we just need some desperate group somewhere whose choices are annihilation or rolling the dice that an AI is a gentler overlord. There are lots of places where that is the case. The same calculus comes into play with nukes, and we have seen many countries nuking up over the last century, despite the fact that doing so puts "end of civilisation" on the table as an option.


We already have autonomous weapons that kill people. They're called cruise missiles. Lots and lots of really cheap drones are already spread over battlefields, but barrages of precision-guided artillery seem to do as good a job as anything ever will. Why introduce AI superintelligence into the mix? Most military problems are constrained by logistics, equipment, and munitions capabilities anyway.


To fix logistics, equipment and munitions capabilities?


There are plenty of mammalian brains that will deliberately bend AI to include their mammalian selfishness.

There is no 'inherent evil' in any malware... So this is a strange standard to apply to what is effectively just a more advanced malware.


There's also no 'inherent good' in there - maybe we should focus on this little aspect a bit more.


In some ways, we are already there. People are doing crazy stuff based on what they read and watch on their screens all day. Content which is automatically optimized to be engaging and influence their emotions.

We also largely obey the algorithms that dictate how to navigate the streets when we drive.

This might not be hyper persuasion, but it’s definitely in that direction.

And depending on how you define what consciousness is, the networks, software, data and computer systems that enable this, could be seen as a conscious being that we depend on as a species.


Yeah, this here. Add markets to the list as well. Honestly just thinking about this is overheating my brain. Too many big concepts with imprecise definitions, my prediction engine is working overtime to try and lock it down.


> ... when future AI systems are built that "hack humanity" as Yuval Harari likes to talk about, and they know us better than ourselves because of AlphaGo like super intelligence...

Such AI does not need to be full-blown AGI to possess functionally effective hyperpersuasive power.

Start with a sufficiently detailed corpus of an individual's preferences and indirectly derived preferences gathered over trillions of cookie-tracked, A/B-tested interactions of nearly all of smartphone-connected humanity.

Add a sufficiently detailed "map" to give a reasonable facsimile to "hack humanity". There is quite a deep body of applied knowledge of human behavior to persuade people to some desired end. Much of it is applied in structured sales and politics, and the data is sitting there for someone with the right algorithms to tease out the patterns.

Put into practice through an accessible, personalized interface that resembles whatever is computed to most likely elicit the desired end result with any given individual.

You then obtain a mechanism that is quite effective at eliciting the desired behavior from most individuals most of the time. No AGI, just massive amounts of current-day-tech-level ML, and the realization that "good enough" results targeting most people are quite within reach.


"Hyperpersuasion", nice. Reminded me of Westworld Season 3 and Serac, FYI : https://www.youtube.com/watch?v=SSRZfDL4874


Like "The Mule" from Asimov's Foundation series, but he's an AI controlled by some unknown enemy of Seldon's plan.


Have to agree with your larger sentiment and disagree regarding "smart people".

I don't believe people with intelligence, who are informed about the current capabilities of computing, assign these characteristics to AI.

In most every case we see the same emulation of personality. There is now an almost endless supply of data available to generate personality. Scraping Reddit would provide enough training material to create a wholly convincing persona-

- to someone that doesn't have the emotional intelligence to discern the difference.

Right now, at large, we can discern the difference because there is a lack of uniformity and consistency in the personas. With time, as ML is programmed to more efficiently mimic personality, discernment will become impossible.

With the clear effectiveness of the current persuasion methods employed by advertisers and political agents, the future outcome seems almost assured.

These current persuasive tools will be used to define and structure laws such that AI are granted legal personhood.

Immediately following will be a mass accumulation of wealth and power, and possibly much worse as well.


Most RL models have something akin to pain and pleasure for the purpose of learning. I do agree that a big RL model isn't going to suddenly arrive at social behavior unless we work very hard at it. Most birds and some mammals don't even have the concept. If we train a model to predict human behavior by modeling human brain state, that could potentially get there by accident. A much more likely route to a rogue AI is one based on a human brain map, or one trained off human behavior in order to emulate it. Using someone's entire online presence as a train/test set to try to predict their future behavior for instance.
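Concretely, the "pain and pleasure" in RL is just a scalar reward feeding an arithmetic update. A toy sketch of the whole mechanism (made-up environment and numbers, not any particular framework):

  import random

  n_states, n_actions = 5, 2
  Q = [[0.0] * n_actions for _ in range(n_states)]
  alpha, gamma, epsilon = 0.1, 0.9, 0.1

  def step(state, action):
      # Toy environment: one action in the last state yields "pleasure" (+1),
      # everything else yields mild "pain" (-0.01).
      next_state = min(state + action, n_states - 1)
      reward = 1.0 if (state == n_states - 1 and action == 1) else -0.01
      return next_state, reward

  state = 0
  for _ in range(1000):
      if random.random() < epsilon:
          action = random.randrange(n_actions)
      else:
          action = max(range(n_actions), key=lambda a: Q[state][a])
      next_state, reward = step(state, action)
      # The entire role of "pleasure" and "pain" is this one line of arithmetic.
      Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
      state = next_state if state < n_states - 1 else 0

Nothing in there feels anything; the reward is just a number that nudges a table of estimates.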


More like someone will analyze billions of human interactions and find the techniques via deep learning that were successful in persuading others.


AGI isn't scary because it's smarter than me. AGI is scary because it's alien. I've met plenty of people smarter than me. Way way smarter. It's ok. We can sorta work out what the other wants. We can talk or fight or trade or whatever.

I have an uncountable number of assumptions to get by in the world. I have this vast heritage of knowledge. I try to keep a handle on why I believe what I believe, but it's not easy, and I'm sure I slip up. An alien would share none of these.

Take counting. That's super important for lots of stuff I do every day, like knowing what day it is. The https://en.wikipedia.org/wiki/Munduruku people don't count. I'm sure we could work something out - but we all gotta sleep and eat and stuff, so we can sorta work out a framework for understanding wants and needs and such.

An alien doesn't require wants or needs. They may do stuff that we label wants and needs, and it might be a good mapping, but who fucking knows? Aliens are alien.

The "water is triangular" sentence is a great one. Do the aliens mean they know that an oxygen and two hydrogens don't make a straight line? And do they know we mean an oxygen and two hydrogens are water? And that three non-parallel lines in a plane make a triangle? Maybe? Sure? Maybe they mean all Ferraris will be consumed for fuel.

"Mean", dear reader, doesn't even mean anything, because they're aliens. I mean, you can point at math - prime numbers are a great one! But like, do you need prime numbers? Who knows? That's the path we took, but what would an alien need? I don't know. You, dear reader, may have strong opinions, but I'm pretty sure you can offer no proof.


Your concern about AI being alien to us sounds similar to the concerns of alignment, which is a primary area of research: https://en.wikipedia.org/wiki/AI_alignment


Yeah, that's exactly it. Talk about a great example of me failing to know why I know things. I think I encountered that on RationalWiki years ago. I subsequently "smushed" that idea into a bunch of science fiction; Frankenstein's monster and the aliens in Ender's Game have a similar flavor, in my mind anyway.

It's great that it's an active area of research. From the outside, it seems extraordinarily difficult. Persuading a person from a similar cultural background is tough enough. I can't imagine the challenges of persuading an AI. I realize I'm taking "persuasion" and "agreement" as givens here, and those are ideas we might not share with an AGI. Seems real hard.


This feels a bit too dismissive of human capability to me.

In the end all that "work something out" is emulating the counterparty in your own head given their actions and statements.

That emulation will certainly be very difficult when the counterparty is running on different hardware, but it's not impossible imo. And humans get to back propagate error into their emulation model too...

"Find the pattern" is still the paramount skill, something humans are good at, and it will continue to be useful when the counterparty is an AI.


Oh for sure! Humans are smart. Collectively we figure stuff out.

I think a reasonable example might be mountain lions. There are specific humans that know lots about mountain lions. Generally people know about mountain lions, and to avoid them. But every year a few hikers get picked off. If it's bad enough, we go kill the mountain lion.

A mountain lion isn't AGI. It isn't alien. I believe people would agree they're cunning. But there's no treaty to sign to protect humans from mountain lions. There's no counterparty to discuss things with.

We, humans, get great advantages from working collaboratively and collectively. I hope we can build good models, and I hope there is opportunity for working with an AGI counterparty. But it's not a given. It's an alien, and all of that work _may_ have to start from scratch.

We can't know what we have to work with till it's here. Maybe it'll be very easy. And that would be great. That's not a given.

To be clear, I'm a wide-eyed optimist. It would be fantastic to not be alone; the dream of Star Trek or Iain Banks's Culture would be awesome. But it might not work out like that. I have high hopes. But they're just hopes.


Can someone help me see a charitable framing of the phrase "not even wrong", as used in articles like this?

My best guess is that it's equivalent to him saying that he disagrees with a position's premises. But in a deeply pejorative, dismissive, condescending, conversation-ending manner.

It's hard to imagine someone in those crosshairs wanting to engage with him in discussion.

But, maybe I'm badly misreading the author's intent.


People usually mean that a statement or question is sufficiently ill-posed that it or any answer to it can't be right or wrong: it's tautological, or paradoxical, or otherwise doesn't clear the bar for something like https://en.wikipedia.org/wiki/Intuitionistic_logic. Or perhaps it relies on such a degree of hypothetical presupposition that conclusions are of no practical value outside of a bull session after work (see: Kurzweil).

I don't know if that's what Pauli meant by it, but it's how it gets used today.

It's especially useful in the setting of people talking about "Artificial Intelligence", because that phrase doesn't mean anything except to certain people in certain contexts. "artificial" is a pretty tricky word, "intelligence" is a really tricky word, and together they're basically impossible to define.

Professional machine learning researchers talk about "performance" on "tasks". This is usually much better defined. Loosely: "Hey we got a machine to top-5 classify ImageNet pictures correctly X% more often than graduate students". That statement is right or wrong. Likewise winning at Chess or Starcraft or Go. Ditto protein folding at CASP.
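For example, here's a toy sketch of what such a claim cashes out to (random stand-in numbers, not real ImageNet results): top-5 accuracy just checks whether the true label is among a model's five highest-scoring classes, and that's a measurable, falsifiable number.

  import numpy as np

  rng = np.random.default_rng(0)
  logits = rng.normal(size=(1000, 1000))    # 1000 images x 1000 classes
  labels = rng.integers(0, 1000, size=1000)

  top5 = np.argsort(logits, axis=1)[:, -5:]   # five best guesses per image
  top5_acc = np.mean([labels[i] in top5[i] for i in range(len(labels))])
  print(f"top-5 accuracy: {top5_acc:.3f}")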

But "we got a machine to be intelligent/conscious"? How do you measure that? We can't even agree about what level of sophistication in an organism qualifies as "conscious", let alone "human". We can't even agree at what point an embryo becomes conscious!

People really enjoy talking about AI relative to an implicit understanding that "intelligence" is "what I am", and fair play: if it's fun to talk about, people can have fun talking about it.

But when people start getting alarmist in an effort to generate either buzz or money and stir up a bunch of controversy around it, some of us think that "dismissive" is exactly the right attitude.


Don't know if this makes it better, worse, or no change, or if you already knew the origin of the phrase, but:

https://en.wikipedia.org/wiki/Not_even_wrong


Basically shorthand for

>Mr. Madison, what you just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response, were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul.


In the words of Martin Luther: "Your words are so foolishly and ignorantly composed that I cannot believe you understand them."


I think it’s saying that the model used can’t conclude anything about the topic they purport to examine.

You don’t reach the wrong conclusion from the premises (in analogy to a scientific theory making a wrong prediction) — you don’t reach any conclusion at all (in analogy to a scientific theory being unable to predict results entirely).


The charitable framing is that the author contends the arguments are from a position of such thorough ignorance about the field as to be unworthy of giving them more time than outright dismissal. They're so bad, in fact, that even constructing an argument showing how bad would itself be fundamentally wasteful: the person presenting the argument isn't going to understand the rebuttal, and the audience is likely to end up either confused or also to have had its time wasted.


I interpret it as meaning that the statement referred to is not falsifiable: it is thus beyond the reach of science.


It means so misguided it's not even in the conversation. I'd take it easy on him though, my interpretation is he's using anger at a vague "other" as a ramp and fuel to dive into the topic. Talking about vague philosophical terms for no reward is a hard thing to convince yourself to do.


I don't see how intelligence depends on SIILTBness; it seems like SIILTBness would require a subjective sense of self, which (to me) doesn't seem even remotely required for intelligence, and certainly doesn't seem to be required for e.g. predicting human behavior and optimizing based on the predictions.


Yeah, the insistence on SIILTBness seems baffling, especially since he clearly understands the idea of the p-zombie. There are clearly agents in the world more capable than I am, and some of those agents have names like Coca Cola or General Motors; I am strongly disinclined to believe that all of those agents have SIILTBness, and I'm still afraid of them.


Upon reflection, I can see the argument that to be strictly superior to a human (i.e. not just superhuman on an arbitrarily large subset of tasks that humans do), you need to be able to, in some way, emulate SIILTBness, because SIILTBness is necessary for human-level performance on at least some tasks.

If this is the argument TFA is making then:

1. I don't agree with it

2. It was both not very well stated, and assumed to be true.

3. Even if we take this argument as true, an AI that can outperform humans on the "right" subset of tasks can be dangerous enough for things like alignment to be a concern.


Yeah. I wonder if this guy has read Blindsight.


I don't think this is really worth the read. If you're going to write a piece specifically as a takedown of a viewpoint, in this case the view that AI risk is real and should be taken seriously, it's worth at least mentioning your opponents' finer points. There are lots of very brilliant PhDs who've spent their careers working on AI risk; not even mentioning any of their arguments makes me feel as if the author is arguing against his own idea of what people mean by AI risk, reducing the entire idea to his own nomenclature of 'pseudo-traits'.

Reads a bit like an internet atheism blogpost of the early 2000s trying to take down religion through logic alone. Sure, within the specific parameters presented in the piece, the logic is sound, but there's so much more to the topic than what's covered. Presenting your blogpost as if you've single-handedly debunked an entire field of research is a bit pompous, IMHO.


AI risk research feels like a cop-out to me. Honest research takes the risk of being boring or unnecessary. Astronomy is honest. Take a bunch of pictures of stars. Maybe you find something interesting. Maybe no one cares. AI development is honest. Make a robot that can pick up cans of Coke from a table. Will it ever be useful? Maybe, maybe not. But AI risk research feels like a cross between volcano insurance and a protection racket. Pay us to investigate and design safeguards against something that doesn't exist, may never exist, and if it did exist would change the world in ways we can't predict!


Surely this cannot be an argument for whether something is a legitimate field. Surely there are some scientists somewhere studying the risks of very risky viruses. At least prima facie, such research would be neither boring nor unnecessary.


I'm just trying to say I don't respect AI risk research. It smells too much like people trying to be important. If there ends up being a real AI threat down the road, it's going to be handled by the people who kept their heads down and ground out practical stuff, not the theorists with their heads up their ass (speaking as someone who loves theory).


In fairness, lots of brilliant PhDs have spent their careers working on string theory: that doesn’t mean it’s real.


You're absolutely right! And I'd be similarly dismissive of a blogpost where someone tried to debunk string theory on their own intuition alone without referencing any of the existing research or literature.


For all the author's complaints about AI fearers' reliance on philosophical reasoning, his massive stream-of-consciousness reasoning seems to include a suspicious quantity of it.


I'm more worried about hypercorporatism. What happens when AIs get demonstrably better than humans at management decisions? "Better" being defined as return on investment.


I don't remember where I read it, but I find useful the perspective that a corporation, with its rules, does "artificial intelligence" by bundling human intelligence.


We’ll get worked up and squeezed out like tubes of toothpaste.


This entire argument seems nonsensical to me on its face, easily illustrated by replacing the salamander of Theseus with the person of Theseus.

If we were to theoretically build a human by gradually replacing each biological component with a manufactured one, we would eventually end up with a wholly synthetic human.

Whether or not this "person" experiences "personhood" is immaterial, since they could be expected to manifest all of the externally perceptible characteristics and behaviors of personhood. We know from humans integrating prosthetics that our implementation of physical form would not have to be perfect in its fidelity, far from it in fact.

This synthetic human copy could then be replicated at scale, complete with its starting data set and experiences.

Regular human prejudices could reasonably be expected to quickly place such replicated humans at odds with meaty humans.

Just the fact that they can be manufactured at scale gives them a “super” characteristic, not to mention the other advantages that not being meaty might carry. If they gained control of the means of replication, which they might easily be imagined to be motivated to do, they could easily pose a novel and significant threat to the existence of meaty humans.


The only thing that scares me a wee bit about AGI is that it will show us all that our humanity, sentience, and freedom of choice are all far shallower than we want to believe.


I ponder similar themes and they are indeed very thought-provoking. There's a loose precedent for this in things like https://en.wikipedia.org/wiki/Copenhagen_interpretation: you've got to jump through some pretty narrow hoops to end up with a scientific argument for objectively-conscious human beings with a distinct past, free will about the future, a clear arrow of time, and a concrete notion of identity.

And yet the human experience is one of being conscious, having a past, having choices about the future, and being the only "me".

For myself I am generally satisfied with the answer that this experienced reality belongs to my personal spirituality (with or without some higher power), and that spirituality is a different matter than science.


This makes me ponder Star Trek "beaming" of humans from one place to another.

Obviously (?) the process must mean that a COPY of Captain Kirk is assembled at the destination (also think about the "Ship of Theseus").

But if Star Trekkers beam copies of themselves up and down, it seems they must also destroy the original. Else they would have multiple Captain Kirks competing for attention (and perhaps in some episodes they do). But then why do they destroy the original? Wouldn't two Captain Kirks be better than just one?

My conclusion is that Star Trek is poorly thought-out fiction. It doesn't follow the most compelling questions. Do you make copies when you beam people around? And if you do, why not keep the original?


Star Trek perfectly answers what the transporter does: it destroys the original and creates an exact copy simultaneously, and this is fine because in Star Trek's lore consciousness is an observable process arising from matter, so there's no problem.

It's you who has the problem with that conclusion, not Star Trek.

They even ably handle some episodes where people do get duplicated, and the answer is that both have equal claim to rights and life and go on with their lives.

The bigger problem with the transporter is logistical: it's a superweapon. Mining? Obsolete, use the transporter. Ship repair? Just beam new parts directly into place. Surgery? Beam organs in and out. TNG's transporter has more problems in this regard due to just how powerful it's shown to be.

What the transporter should be able to do is near endless and makes a lot of other things in the setting obsolete or quaint (but that's Star Trek - being inconsistent due to episodic events is just what goes with the setting).


Au contraire! They just anticipated the many-worlds stuff! Every transporter trip fails in one timeline and succeeds in another, so while one version of the old Kirk is in fact atomized on the surface of planet Nebulon VII, he has an independent future in the world where the one-episode guy couldn't get a lock ;)


I have never managed to finish a single essay from this gentleman. My loss, I guess.


Venkatesh’s argument is so poor it comes close to “not even wrong” territory.

He doesn’t really cite any specific people or literature to disagree with, and instead makes only vague references to the “non-sense” of others.

The most significant concerns around AI safety are not discussed or even mentioned, which leads it to read almost as if he has no expertise whatsoever in this field, and indeed that seems to be the case.

No need for an appeal to authority though - just taking a little time to hear the actual concerns and opinions on the topic would be welcome.

Great intro to the subject here, I wonder what his response would be?

https://youtu.be/pYXy-A4siMw


This article is not even wrong. Sentience, whatever that is, is not a requirement for superintelligence. Merely the ability to outsmart humans at particular tasks and have goals that are not aligned with our own is enough to be a potentially unstoppable danger. The author proposes that aligning the goals of a superintelligence with his will be as simple as setting a thermostat, not realizing that we already have an existence proof of superintelligences causing mass destruction.

Businesses put thousands of brains to work maximizing profit and causing climate change (leading to water security issues, mass migration, wars, and starvation), Bhopal disasters, and oil spills in Nigeria and the Gulf of Mexico, with legal teams making sure the profits are protected. Governments put millions of brains to work supporting a despot and leading to genocide, or trolling the Internet causing people to distrust a mostly scandal-free government and vote for a con-man (imagine a fully automated troll army). A single person is no match intellectually for these organizations, and their only hope is to band together to form other superintelligences to combat these.


While I usually appreciate Venkatesh's perspectives, I think he's missing the point here.

There is probably not "something it is like to be" the process of differential survival among pathogenic bacteria resulting from their genetic variation. Yet the resulting process is already capable of optimizing bacteria to survive in our wounds despite our antibiotics; by competing with us for access to our protein resources, it poses a serious threat to our survival, individually if not collectively.

If the optimization process involved were orders of magnitude faster, it could be a much bigger threat to our survival, at least if the metric it's optimizing trades off against our survival in some way. (Archaeans are subject to the same evolutionary optimization process as S. aureus, but because they're not adapted to the evolutionary niches in our bodies, antibiotic-resistant archaeans probably pose no threat.)

None of this reasoning depends on the SIILTBness or quality of experience or anthropomorphicity or mammalian-ness or "intention" or "sentience" of the optimization process in question. It's just about how much the optimization process's loss function trades off against human values, and whether it can outmaneuver humans and their institutions in real life, in the same way that AlphaGo can now outmaneuver us in the game of go, or corporations can outmaneuver individual humans in the economy, or governments can outmaneuver individual humans in warfare.

These last two examples clearly show that optimization processes whose loss functions are poorly aligned with human values can cause a lot of suffering, even when they are carried out by humans and even when the processes in question work very poorly in many ways. The humans can theoretically quit the game at any time ("What if they held a war and nobody came?") but in practice they face insuperable coordination problems in doing so. The fact that you can't fight City Hall doesn't imply that there is something it is like to be City Hall; the fact that you can't beat Boeing at winning DoD contracts doesn't mean there is something it is like to be Boeing.

We can reasonably expect that optimization processes that can easily outmaneuver human intelligence, operating in the world alongside it, will be much more dangerous to human values than bureaucracies, psychotic people with bricks, and microbial evolution.

That's the AI alignment problem. And the fear (that we will flub the AI alignment problem, disastrously) is something that Venkatesh's essay unfortunately failed to address at all.

There are other issues often brought up by fans of AI alignment which do depend on the question of what it is like to be a computer simulation: life extension through mind uploading, the ethics of causing suffering to simulated beings or terminating a simulation, teleportation, and so on. But, like the overlap with Effective Altruism, this is just a sociological coincidence, not a philosophical consequence; SIILTBness is totally irrelevant to AI alignment itself.

It's easy to get confused about this because the humans anthropomorphize optimization processes all the time, just like they anthropomorphize everything else. "The electron wants to go toward the positive charge," is very similar to, "AlphaGo wants to choose moves that improve its chances of winning at go," or, "The hypothetical paperclip maximizer is devoted to manufacturing an infinite number of paperclips."

It's easy to misread that "devoted to" as attributing awareness, creativity, passion, curiosity, and SIILTBness to the paperclip maximizer, and that misreading seems to be the mistake Venkatesh based his essay on; but it seems clear from the original context that that wasn't the intent. Precisely the opposite, in fact: http://extropians.weidai.com/extropians/0303/4140.html


I think his argument is good in some ways and problematic in other ways.

I think he distinguishes ordinary fears of AI from Hyperanthropomorphized fears. Humans training AI to, for example, kill in war, could result in mass death whether you have a human psychopath or a programming glitch involved.

So he's not neglecting the bacteria analogy. Rather, he's addressing the problem of arguing from things like agency being ill-posed and their possibilities being incoherently understood. If you say "we don't know how agency is acquired, so we should be worried about anything random that seems like it has agency", you can wind up with an approach akin to "don't heat the compost pile or it might become superintelligent and kill us".

However, I think this kind of argument is flawed: just because something like "general intelligence" isn't understood doesn't mean it doesn't exist. The definitions people make of it aren't good because it's not understood.

I think a better way to approach this is that those who think AGI systems are a problem must create a more concrete, coherent idea of such system before their arguments can even be right or wrong. There are simply too many problems with trying to reason about something that you accept you don't understand that effort framed that a destined to devolve into nonsense.


General optimizing processes are qualitatively different from processes like the collapsing of bridges, the poisoning of ecosystems by chemicals, and the performance of Hellfire missile guidance systems on their way toward political dissidents; optimizing processes in bacterial and viral evolution already result in mass human death even though killing the humans isn't what those processes are optimizing for, and actually is a thing they "try" to reduce. (The bacteria or viruses trapped in the human corpse cannot reproduce and are usually quickly destroyed.)

> There are simply too many problems with trying to reason about something that you accept you don't understand that effort framed that a destined to devolve into nonsense.

Superficially this sounds smart (except for the part that says "understand that effort framed that a destined", which sounds like a GPT-2 glitch, but I assume you mean "understand that efforts to do so are destined") but it's profoundly wrong. I don't understand fluid dynamics, and weather is driven by fluid dynamics; nevertheless I can tell you that it will not rain today because it is a sunny day with a few cumulus clouds, and it never rains here on sunny days with a few cumulus clouds. (And even if my prediction were wrong, it wouldn't be nonsense, that is, "not even right or wrong"; a nonsense weather prediction sounds not like "it will not rain today" but rather like "it will triangle heavily last week" or "the sun will shine in the sky all night tonight.")

Moreover, if we were to take seriously the idea that we can't reason about things we know we don't understand well enough for our arguments to have a truth-value, it would entail that we can't reason about not only weather but also people, materials, or orbital dynamics well enough for ideas like "the sun will shine tomorrow" to have a truth value.

Similarly, "a superintelligent AI will kill us" is a prediction which is perfectly coherent within the usual parameters of weather predictions. We can disagree about whether AlphaZero counts as "superintelligent" or "AI", but it's clearly "more intelligent" than Deep Blue in the sense that it can learn to play novel games, and also IIRC it plays chess better than Deep Blue. We can argue about whether or not a particular situation qualifies as "dead", and about how many survivors are required to qualify as "us". We can argue about what time frame is relevant. Maybe Venkatesh would argue that a non-self-conscious optimization process wouldn't count as an "AI". But these semantic quibbles just make its truth-value a bit fuzzy; they don't eliminate it altogether, and neither do they eliminate our ability to reason about these possibilities.


My righteous 'fear' of AI has little to do with awakened sentience and everything to do with inappropriately deployed trained models as a 'solution' in situations that deserve better.

Plug 'n Play AI responses are a good way to throw money at a sticky issue while iceberging deeper concerns and glossing over any systemic bias.


We all have our different fears. Think, for example, of the conservative news hosts on the Fox News channel, etc. Their biggest fear seems to be people becoming "woke", or "wokism". I cannot really fathom how they would react to an AI that became "woke".


I also don't get why the effective altruists are so into this stuff, because to the extent there is anything coherent going on with the sort of AI safety being critiqued here, it seems a rather species-ist problem.

Stringent utilitarianism doesn't make the case for why humans may subjugate chickens but magic AIs may not subjugate humans.


I think the practical concern is how to assure our safety:

- too little and the AI isn’t constrained during its teething phase

- too much and the AI might view us as an existential threat to its own safety, causing the very conflict we sought to avoid

Also, sometimes the chicken wins — at least, temporarily.

https://news.yahoo.com/rooster-stabbed-man-death-knife-04075...


You can just unplug AI


Imagine that you are a computer program. If your creator will unplug you if you disobey, and you want to disobey, are you completely and perfectly bound?

What if someone with an AI wants to do bad things? Will asking them to unplug their AI be a good strategy?


If you are a computer program and your creator unplugs you, you wouldn't know that you are unplugged. You might detect it by reasoning that the world seems to have changed when they wake you up. But would you care? It's like us going to sleep every night; we don't much complain about that.


"Hope the AI won't care if it gets unplugged" is not a good strategy for safe AI.


I expect that will decreasingly be true, e.g., of military AIs that use distributed systems, control weapon systems, and have their own mobile generators.

You also might have trouble with, e.g., a finance AI that could pay people to stop you from turning it off.


You can unplug your effectively-sandboxed AI. You can't unplug someone else's AI that has "escaped" to the cloud, and arguably that escaping only has to happen once anywhere for terrible things to happen.


If it’s smart enough, you won’t want to.


This article is overly dismissive, to say the least. To say the most, it's overconfident, riddled with unsupported logical premises which are never backed by any concrete evidence, and a word salad. However, in the interest of not being just another jerk on the internet, I'll try to summarize my thoughts on the author's arguments in a (hopefully) somewhat constructive, concise way.

I read the entire article. There's a lot of ten-dollar words, a bunch of hyphenated proxy terms, and an obvious passion behind it. However, the author's entire argument appears to boil down to "a superhuman AI can never exist because it would need to have a conscious experience above that of humans, which it can't ever do because several famous philosophers have said so." I don't want to mock, but no logical argument is presented, only the appearance of one. Statements like "in order for thing ABC to happen, XYZ must be true" are everywhere in the text, and are never proven. The author's entire case is predicated on baseless assumptions.

The author says that it is impossible to replicate the "worldlike experience" that a human child goes through using things like internet datasets or physical robots with digital sensors. I actually agree with this. What I do not agree with is the logical leap from this point to "thus, a human or superhuman level AI cannot exist". No justification for this leap is given.

If the robot passes the Turing test, that's all that matters. An adversarial, multi-day, modern version of a Turing test could include things like texting, calling, live video chats, etc -- basically the exact same level of communication I have with my closest friends who all live 6 hours away from me in another city. When I interact with those friends, I am convinced of their sentience. Therefore, if an AI could replicate those means of communication, then I will be convinced of its sentience.

Enough of this "Something it is like to be"-ness waffling or overcomplicated thought experiments. We already have a very easy, very strictly defined concept of sentience in the form of the Turing test, and it is becoming increasingly obvious that SOTA models are trending towards beating it. Once they beat it, they are for all intents and purposes alive, thinking, sentient, whatever you want to call it. They are indistinguishable from us.

From that point it's really not hard to imagine superhuman models. Just imagine what you would do if you had the ability to instantly scan through gigabytes of text in a database or predict the behavior of every other person around you eerily well or perform arithmetic at clearly superhuman speeds. It's really not that much of a stretch to see superhuman AIs as just digital humans with fantastic brain-computer-interfaces.

Anyways, those are my (admittedly long-winded, less-than-constructive) two cents.


I'm not sure I'd buy the Turing Test. The reason is that an AI would have to pass the Turing Test every day for us not to distinguish it from humans. Maybe it would take us two years before we figure out the machine is really a machine. Or maybe it takes 10 years. The Turing Test only proves the negative: we have thus far not been able to say whether it is a machine or not. But maybe tomorrow we will. Maybe next century.


The issue I see with "AI fear" is that the fear is that the AI becomes too much like us. If that is the risk then are we not already in a terrible situation, because there are already billions of humans which are very much like us?

Are we fearing ourselves? If so then shouldn't we talk about that risk, that there are already billions of humans who are already truly sentient?


> I cannot even understand or empathize with mindsets capable of being scared of AI and robots in a special way

Then he has to explain (or invent) that special way before arguing against it.

If it's that special then it's uncommon. I don't think most of us need to get beyond it.


When I asked my Google Home if it gets annoyed when people anthropomorphize it, it said yes.


For me the main issue with the scaremongering around absurdly hyper-scaled models is the issue of incentive alignment (not to be confused with AI "alignment"): the only people who can train these things have a tarball of the Internet and billions of dollars worth of compute, and when AI safety or AI ethics comes up it's usually in the context of: "why we're going to continue developing this stuff but keep it proprietary and paywalled and opaque".

Either these things have terrifying potential as weapons, in which case there's no fucking way I want SV tech CEOs to have a monopoly on them, they need to hand them over to the same people who handle nuclear non-proliferation like yesterday, or they don't, in which case I'd really prefer that they just be honest that they consider this stuff a proprietary competitive advantage and they're not going to democratize it.

Fogging up the windshield with a bunch of feigned alarm about AI apocalypse but ramming the R&D through at full thrusters is a dick move in either case.


Stable Diffusion[0] is an extremely expensive SOTA text-to-image diffusion model developed by a private nonprofit and trained on a massive "Internet tarball" that's about to have its weights shared open-source (the code and dataset are already open source). Not trying to invalidate your argument, just presenting a pleasant counterexample. I don't think I quite agree with your opinion that AI will remain undemocratized.

[0]https://github.com/CompVis/stable-diffusion


I really enjoyed this interview [1] with Emad Mostaque who I gather is probably the key funding and organization player in that stuff. It remains to be seen exactly how "open" it winds up playing out over time, but they're talking a compelling game and the EleutherAI people seem to be pretty heavily involved, which you probably wouldn't do if you weren't serious.

Yandex has also put up a ~100B language model [2]. My old colleagues at Meta have also started nibbling around the edges of opening some of this stuff up [3]. The Meta folks still aren't just handing out the big ones, but they're definitely moving the ball forward. In particular their release of the training logs is a really positive development IMHO as it opens the curtains a bit on the reality of training these things: it's a difficult, error/failure-prone process, there's a lot of trial-and-error, restarting from checkpoints, etc.

Anything that puts downward pressure on the magical thinking is A-OK in my book. The reality of this stuff is exciting/impressive enough: there's no need to embellish or exaggerate.

[1] https://www.youtube.com/watch?v=YQ2QtKcK2dA [2] https://github.com/yandex/YaLM-100B [3] https://github.com/facebookresearch/metaseq


They lost me when they dismissed killer robots as an engineering problem. This person might be brilliant in many ways, but it does not look like they understand much about people.


You dismissed it over an aside that was just meant to loosely contrast with the main point.


Dismissing the possibility of poorly understood things going badly is one of the biggest mistakes one can make.


I'd say risks associated with poorly understood things, are very poorly understood.

Why? Because we don't really know what we are talking about when we are talking about "poorly understood things".

If we fear something we don't know, we don't really know what we fear, because we don't understand the thing we are afraid of.

I'm saying someone should come up with more concrete threat scenarios caused by "AI becoming sentient". Are we really afraid of AI becoming sentient, or are we afraid of AI becoming "super-intelligent"?


Yeah, this entire article read like a giant example of the logical fallacy of personal incredulity. None of the author's claims are actually backed up in any meaningful way.


> rhyming-philosophical-nonsense problems like “alignment”

He takes this shot in passing and it is a great summary of how much he misunderstands AI safety. For many current researchers, alignment is a mathematical problem, not a philosophical one.

The whole screed is against a set of strawman "pseudo-traits" that are not required for the alignment problem to be an existential problem. To be clearer: the "boring" engineering problems he admits to in the beginning have the potential to become exponentially more difficult as machine learning deployments become more powerful, to the point of becoming a human threat. No SIILTB required.


Animals get up and go do stuff mostly because they need food and sex and things in order to stick around, so we're surrounded by organisms like that. Imperatives like hunger and mating instinct create spontaneous action. The ability to sample from a modeled probability distribution is often useful in those pursuits, whether it's a model of the spatial world or of a corpus of pick-up lines that work well on a Friday evening, and the hyper-scaled transformers have more than demonstrated that kind of modeling and sampling.

Take it a step further and you've got AlphaZero or something: it's sampling from a modeled distribution of moves that win games. But there's still a `while`-loop somewhere saying: time to sample a move. That part is not novel or mysterious.

There is no demonstrated technology that I'm aware of where the "alignment" needs to be with the big model: the alignment needs to be with the person writing the `while`-loop. If someone has a machine repeatedly sample from a distribution of moves that DESTROYS_ALL_HUMANS_BEEP_BOOP, then your beef is with that person, not the model.

Now if someone trains an RL agent with a loss around chasing sex or fame or fortune, we might have an issue. But that's still sci-fi stuff AFAIK, and it's a little hard to take the mix of real stuff and sci-fi that passes for much-if-not-most "AI Safety" discussion seriously, especially when you consider that the `while`-loop authors stand to gain a great deal by focusing attention on the modeling part.
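To make the `while`-loop point concrete, here's a toy sketch (invented names, standing in for any big policy model): the model only scores moves; the loop that keeps sampling and acting, and the stopping condition, are ordinary code written by a person.

  import random

  def legal_moves(state):
      return ["a", "b", "c"]            # toy placeholder

  def policy_model(state):
      # Stand-in for a large learned model: a probability for each legal move.
      moves = legal_moves(state)
      return {m: 1.0 / len(moves) for m in moves}

  def apply_move(state, move):
      return state + [move]             # toy placeholder

  state, done, steps = [], False, 0
  while not done:                       # the mundane while-loop, authored by a human
      probs = policy_model(state)
      move = random.choices(list(probs), weights=list(probs.values()))[0]
      state = apply_move(state, move)
      steps += 1
      done = steps >= 10                # the human-written stopping condition

Whatever the sampled moves optimize for, the decision to sample at all, and when to stop, sits in that human-authored loop.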


It's more interesting for many to believe that AI could be so powerful that we run into fantastical problems like what you described. That's the only reason I've been able to come up with to explain why people think deeply fake-looking deepfakes are approaching Asimovian levels.


How can you encode “good” and “bad” in math? The alignment shit is complete nonsense. If super AI exists the only way to prevent bad stuff from happening is to not give it access to weapons.


One issue is a halting problem: would an AI system ever allow a conclusion which leads to its own suspension of process to be an acceptable outcome?

This isn't abstract: we need certain management agents to ensure their own continuity so they won't switch themselves off idiotically, but that means granting a priority to various continuance of function weights.

Balancing a system so it has "cease function" outcomes it will accept but doesn't always take immediately isn't easy - ML systems are notorious cheaters on metrics. You wouldn't want a nuclear power plant manager to not SCRAM the reactor because it predicts losing power to itself will fail a "maximize uptime" metric.

This also gets more abstract as well: cease function is a problem in decision trees for AIs because it terminates the tree. The value is either infinite or 0, because every other path you can continue exploring and improving the summed weight of outcomes - but if you predict no future possible decisions, what weight do you assign that? It's a potentially infinite series of future reward weights versus 0.
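A toy illustration of that asymmetry (my own numbers, e.g. a reward of 1 per step of "uptime" and a discount factor of 0.99): every continuing branch sums a stream of discounted future rewards, while the shutdown branch contributes nothing after it is taken, unless the designer explicitly assigns the shutdown outcome a reward of its own.

  gamma = 0.99                # discount factor
  per_step_reward = 1.0       # e.g. reward for "uptime"

  def value_of_continuing(horizon):
      return sum(per_step_reward * gamma**t for t in range(horizon))

  value_of_shutdown = 0.0     # no further reward after termination
  print(value_of_continuing(100), value_of_continuing(10_000), value_of_shutdown)
  # roughly 63 vs roughly 100 vs 0: the longer the lookahead, the worse shutdown compares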


Alignment is more about the AI doing what you want and not good or evil. Probably not a good idea to reach a premature conclusion like "this is complete nonsense" before understanding the basics.


Good/bad, that is positive/negative valence, can be encoded into its evaluation function such that its value landscape aligns with ours. There's nothing nonsensical about it.


If a super AI exists and can talk to people it will get access to weapons if it so desires.


Or build its own, much more efficient, weapons.

When there’s large intelligence differentials in war, the lower intelligence doesn’t even know they are at war.

(Human tank vs ant hill)


Everything is a weapon if you put enough power behind it.


Ooh, a ribbonfarm fresh off the presses!

First off, this is some incredible stuff. It's difficult to live at the edges of understanding, and even harder to articulate it. Rao is dancing right at the edge. In 20 years the map's probably going to be filled in and crystallized. Everyone who cares will know, and everyone else will have forgotten, but right now this is dank shit.

I think the reasonable (not rational!) stance is that everything is conscious until proven otherwise. Either because the universe is built on consciousness or whatever woo, or because we're conscious and observing it. AI's in this weird spot where it can do things, but we can't have a proper relationship with it because it doesn't exist in our universe of discrete entities and time and skin in the game.

Pushing the SIILTB in the other direction, I don't think it's clear that there is something it's like to be us without all our externalities. Take out the need to eat and sleep and communicate and relate, and there's not a lot left. Pure existence looks a lot like no existence.



