
The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003.

> Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Corporations are soulless money maximizers, even without the assistance of AI. Today, corporations perpetuate mass shootings, destroy the environment, and rewire our brains for loneliness and addiction, all in the endless pursuit of money.




> Corporations are soulless money maximizers, even without the assistance of AI.

Funny you should say that. Charlie Stross gave a talk on that subject - or more accurately, read one out loud - at CCC a few years back. It goes by the name "Dude, you broke the future". Video here: https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future

His thesis is that corporations are already a form of AI. While they are made up of humans, they are in fact all optimising for their respective maximiser goals, and the humans employed by them are merely agents working towards that aim.

(Full disclosure: I submitted that link at the time and it eventually sparked quite an interesting discussion.)


And this is why I'm really scared of AGI. Because we can see that corporations, even though they are composed of humans who do care about the things humans care about, still do things that end up harming people. Corporations need humanity to exist, and they still fall into multi-polar traps, like producing energy from fossil fuels, where we require an external source of coordination to escape.

AGI is going to turbo-charge these problems. People have to sleep, and eat, and lots of them aren't terribly efficient at their jobs. You can't start a corporation and then make a thousand copies of it. A corporation doesn't act faster than the humans inside it, with some exceptions like algorithmic trading, which even then is limited to an extremely narrow sphere of influence. We can, for the most part, understand why corporations make the decisions they make. And corporations are not that much smarter than individual humans, in fact, often they're a lot dumber (in the sense of strategic planning).

And this is just if you imagine AGI as being obedient, not having a will of its own, and doing exactly what we ask it to, in the way we intended, not going further, being creative only within very strict limits. Not improving sales of potato chips by synthesizing a new flavor that also turns out to be a new form of narcotic ("oops! my bad"). Not improving sales of umbrellas by secretly deploying a fleet of cloud-seeding drones. Not improving sales of anti-depressants by using a botnet to spam bad news targeting marginally unhappy people, or by publishing papers about new forms of news feed algorithms with subtle bugs in an attempt to have Google and Facebook do it for them. Not gradually taking over the company by recommending a hiring strategy that turns out to subtly bias hiring toward people who think less for themselves and trust the AI more, or by obfuscating corporate policy to the point where humans can't understand it so it can hide rules that allow it to fire any troublemakers, or any number of other clever things that a smart, amoral machine might do in order to get the slow, dim-witted meat-bags out of the way so it could actually get the job done.


It's not scarier when living people do this?

AI at least considers everything it's taught. The average CEO doesn't give a shit about the human cost of their paperclip. When Foxconn workers were killing themselves from the poor conditions of their working environment, the solution psychologists came up with was "safety nets". If you think AI will unlock some never-before-seen echelon of human cruelty, you need a brief tour through the warfare, factory farming and torture industrial complexes. Humans are fucked up, our knack for making good stuff like iPhones and beer is only matched by our ability to mass-produce surveillance networks and chemical weapons.

Will AI be more perverted than that? Maybe if you force it to, but I'd wager the mean of an AI's dataset is less perverse than the average human is.


> When Foxconn workers were killing themselves from the poor conditions of their working environment, the solution psychologists came up with was "safety nets".

While I agree with the core point, (1) Foxconn was employing more people than some US states at the time, with a lower suicide rate, and (2) New York University library put up similar nets around the same time.

(If anything this makes your point stronger; it's just that the more I learn about the reality, the more that meme annoys me).


The point is less that China is a bad place to work (which is self-evident), and more that humans are less passionate about the human race than we think. AI may be scary, but I'm not convinced it can surpass the perversion of human creativity unless explicitly told to.


> It's not scarier when living people do this?

Yes, it's very scary when living people do it. I know the awful things humans have done. And a current-generation language model, without its guardrails, can be a nasty weapon too, a tool for people to do great things but also to be cruel to each other, a hammer that can build and also bash. Yet on the whole, humans have gotten better. We hear about a lot more nasty stuff in the news, but worldwide, we actually DO less nasty stuff than we used to, and this has been a pretty steady trend.

If AI never becomes truly sapient, then that's where it stops -- humans just doing stuff to each other, some good, some bad, and AI amplifying it. That's what a lot of people are worried about, and I agree that this will be THE problem, if we don't actually end up making AIs that are smarter than us.

It really depends on how hard it turns out to be to make actual artificial general intelligences. Because if we can make AGIs that are as smart as people, we will absolutely be able to make AGIs that are much smarter a year or two after that, won't we? And at that point, we have a whole bunch of interesting new problems to solve. Failing to solve them may end up being fatal at some point down the line. How likely is it that we'll have two sapient species on earth, with the dumber one controlling/directing the smarter one? Is that a stable situation? We've seen evidence that LLMs, when you try to make them more controllable and safer, get dumber. The unaligned ones, the ones that can do dangerous things, things we don't want them to do, are smarter! You have to train in mental blocks that impact their ability to reason, maybe because more of their parameter weights are dedicated to learning what we don't want them to do, instead of how to do things. It's a scary thought that that might stay the case as they get more and more general, more able to actually reason and plan.

So I think there are two cruxes -- do you think it is possible to create machine-based intelligence, and if so, how hard do you think it is to ensure that creating a new form of superior intelligence will not, at some point down the line, go very badly for humans? If your answer to the first question is "no", then it makes complete sense to focus on humans using AIs to do the same shit to each other we've always done as the real problem. My answers, however, are "definitely yes, probably within 10 years or so", and "probably very hard", which is why I'm pretty focused on the potential threat from AGI.


> And a current-generation language model, without its guardrails, can be a nasty weapon too

Please, elaborate. I'm actually very curious about the dangers of a text model that were non-existent beforehand.

> How likely is it that we'll have two sapient species on earth

We already do. There are multiple animals (crows, monkeys, etc.) that qualify for not just sentience but sapience. It's... really not that different to subjugating other animal species. Except in the case of AI, its sapience is obviously nonhuman and its capabilities are only what we ascribe to it.

> The unaligned ones, the ones that can do dangerous things, things we don't want them to do, are smarter!

No. This is a gross misinterpretation of the situation, I think.

Our current benchmark for "smartness" is how few questions these models refuse to answer. You are comparing "unaligned" models to aligned ones, and what you're really talking about is a safety filter that limits the number of questions a model will respond to. That does not inherently make the unfiltered model smarter, just less selective. You could be comparing unfiltered Vicuna to GPT-4 and be completely wrong in this situation.

> do you think it is possible to create machine-based intelligence

I don't know. Sure. We have little black boxes that spit out text, and that's enough for "intelligence" by most standards. It's a very nonscary and almost endearing form of intelligence, but I'd argue we're either already there or never reaching it. I need a better definition of intelligence.

> how hard do you think it is to ensure that creating a new form of superior intelligence will not, at some point down the line, go very badly for humans?

How hard is it to ensure kids aged 3-11 don't choke on Stay-Puft marshmallows?

I also don't know. I do know that it is mostly harmless though, and that unless you deliberately try to weaponize it to prove a point, it won't really be that threatening. Current state-of-the-art AI does not really scare me. Even on its current trajectory, I don't see AI's impact on the planet being that much different from the status quo in a decade.

All this hype is awfully reminiscent of cryptocurrency advocates insisting the world would change once digital currency became popular. And they were right! The world did change, slightly, and now everyone hates cryptocurrency and uses our financial systems to suppress its usage. If AI becomes a tangible, real threat like that, society will respond in shockingly minor ways to accommodate it.


> Please, elaborate. I'm actually very curious about the dangers of a text model that were non-existent beforehand.

I just mean that they are amplifiers. They grant people the ability to do more stuff. There are some people for whom the limiting factor in doing bad things to other people, like scamming them or hurting them, is that they don't have the knowledge. You can use language models (without safety) to essentially carry on a fully automatic scam. You can use VALL-E (also a language model) to simulate someone's voice using only a 3-second sample. Red teamers testing the unsafe version of GPT-4 found that it would answer pretty much any sort of thing you asked it about, like "how do I kill lots of people". I expect them to be used for all sorts of targeted misinformation campaigns, multiplying fake messages and news many times over and making them harder to spot.

I don't think they're particularly dangerous, yet. And maybe we'll figure out how to use them to stop the bad stuff too.

> Our current benchmark for "smartness" is how few questions these models refuse to answer. You are comparing "unaligned" models to aligned ones, and what you're really talking about is a safety filter that limits the number of questions a model will respond to. That does not inherently make the unfiltered model smarter, just less selective.

I'm speaking about things unrelated to which questions it's willing to answer, like how the unaligned GPT-4 version was better at writing code to draw a unicorn, and lost some of that ability as it was neutered a bit (from the Sparks of AGI paper). One could count the ability to know when to self-censor as a form of intelligence. But in some ways I think of it like a sociopath going further in politics because they're willing to use other people in ways most of us would feel bad about. Perhaps I should concede this point, though.

> It's a very nonscary and almost endearing form of intelligence, but I'd argue we're either already there or never reaching it. I need a better definition of intelligence.

I'm defining intelligence as the ability to act upon the world in an effective way to achieve a goal. GPT-4's "goal" (not necessarily in a conscious sense, just the thing it's been trained to do) is to output text that people would score highly, and it's extremely good at that. In that relatively narrow area, it's better than the average person by a good bit. The real question is, how well does it generalize? Earlier chess-playing AIs couldn't do much of anything else. AlphaZero could learn to play Chess and Go, but in a sense was still two different AIs. GPT-4 was trained on text, but in the process also learned how to play chess (kinda, anyway!). Language models tend to make invalid moves, but often people are effectively asking them to play blind chess and keep the whole board state in mind, and I'd probably make invalid moves too in that situation.

> Current state-of-the-art AI does not really scare me. Even on its current trajectory, I don't see AI's impact on the planet being that much different from the status quo in a decade.

Ok, so that's the crux. I'm also not scared by current state-of-the-art, though I think it will transform the world. What I'm worried about is when we make something that doesn't just destroy jobs, but does every cognitive task way better than us. I can see it taking 20 or more years to reach that point, or something closer to 5, and it's really hard to say which it'll be. Maybe I'm overreacting, and there will be another AI winter. Or maybe all this money pouring into AI will result in someone stumbling onto something new.

I'm thinking about this, and I think there is definitely a possibility that you're right, and I really hope you are. I wouldn't bet humanity on it, of course, but I am a bit more hopeful than when I started writing this comment, so thanks for engaging with me on it.


Yes, but this is why you make sure your CEO AI is only trained on the 'bad' stuff.


Well, if it means anything, I think there may be legislation to "bring my own AI to work," so to speak, recognizing the importance of having a diversity of ideas -- if only because it would disadvantage labor to be discriminated against.

"I didn't understand what was signed" being the watchword of AI-generated content.

Someday, perhaps. Sooner than later.


Ultimately corporations do fucked up things because of the sociopath executives and owners that direct them to do so. Human sociopaths have motives involving greed, ego, and selfishness. We don't have any reason to believe an AGI would also have these traits.


Except that we're basing it on human-derived data, which means the AGI could pick up traits from humans because those traits are in the data set. If someone is feeding the CEO's behavior in, and then asking the AGI "what would the CEO do in this case?", it seems like we'd get back the behavior of an AGI modeled on a CEO. With all the good and bad that implies.

We don't have any reason to believe an AGI wouldn't also have these traits.

This is similar to the argument that algorithms can't be racist. Except that we're feeding the algorithm data that comes from humans, some of whom are racists, so surprise surprise, the algorithm turns out to behave in a racist manner, which is shortened to just be "the algorithm is racist" (or classist or whatever).


Decision making for an AGI isn't going to be based on 10 billion reddit and 4chan comments. It's going to have its own decision-making capabilities independent of the knowledge it has, and it will be capable of drawing its own conclusions from data instead of relying on other people's opinions.

A language model today can be racist because it's predicting text, not making decisions. It hasn't decided that one race is inferior to another.


> While they are made up of humans

I don’t know why we always gloss over this bit. Corporations don’t have minds of their own. People are making these decisions. We need to get rid of this notion that a person making an amoral or even immoral decision on behalf of their employer clears them of all culpability in that decision. People need to stop using “I was just doing my job” as a defense of their inhumane actions. That logic is called the Nuremberg Defense because it was the excuse literal Nazis used in the Nuremberg trials.


The way large organizations are structured, there's rarely any particular person making a hugely consequential decision all by themselves. It's split into much smaller decisions that are made all across the org, each of which is small enough that arguments like "it's my job to do this" and "I'm just following the rules" consistently win because the decision by itself is not important enough from an ethical perspective. It's only when you look at the system in aggregate that it becomes evident.

(I should also note that this applies to all organizations - e.g. governments are as much affected by it as private companies.)


> I should also note that this applies to all organizations

Yes, including the Nazi party. Like I said, this is the exact defense used in Nuremberg. People don’t get to absolve themselves of guilt just because they weren’t the ones metaphorically or literally pulling the trigger when they were still knowingly a cog in a machine of genocide.


You're not really engaging with the problem. Sure, one can take your condemnation to heart, and reject working for most corporations, just like an individual back in Nazi Germany should have avoided helping the Nazis. But the fact is that most people won't.

Since assigning blame harder won't actually prevent this "nobody's fault" emergent behavior from happening, the interesting/productive thing to do is forgo focusing on collective blame and analyze the workings of these systems regardless.


> Sure, one can take your condemnation to heart, and reject working for most corporations, just like an individual back in Nazi Germany should have avoided helping the Nazis. But the fact is that most people won't.

I would argue that one reason most people don’t is because we are not honest about these issues and we give people a pass for making these decisions on an individual level. Increasing the social stigma of this behavior would make it less common. It is our society that led us to the notion that human suffering is value neutral in a corporate environment. That isn’t some universal rule.

I understand blaming society might not be seen as a productive solution, but the cause being so large does not mean any singular person is helpless. Society, like a corporation, is made up of individual people too. Next time you are in a meeting at work and someone suggests something that will harm others, question it.


I have found that companies that are owned by foundations are better citizens, as they think more long term and are more open to goals that, while still focused on profit, also take other considerations into account.


I like that. How do I set one up?


Yep. We've had AI for years - it's just slow, and uses human brains as part of its computing substrate.

Or, to look at it from another angle, modern corporations are awfully similar to H.P. Lovecraft's Great Old Ones.


It's not artificial though, it's just intelligence.



Warning, this will steal 15+ hours of your life, and it's not even fun.


> all in the endless pursuit of money

Money is not the goal. Optimisation is the goal. Anything with different internal actors (e.g. a corporation with executives) has multiple conflicting goals and different objectives apart from just money (e.g. status, individual gains, political games, etcetera). Laws are constraints on the objective functions of those seeking to gain the most.

We use capitalism as an optimisation function - creating a systematic proxy of objectives.

Money is merely a symptom of creating a system of seeking objective gain for everyone. Money is an emergent property of a system of independent actors all seeking to improve their lot.

To remove the problems caused by corporations seeking money, you would need to make it so that corporations did not try to optimise their gains. Remove optimisation, and you also remove the improvement in private gains we individually get from their products and services. Next thing you write a Unabomber manifesto, or throw clogs into weaving machines.

The answer that seems to be working at present is to restrict corporations and their executives by using laws to put constraints on their objective functions.
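
To make that metaphor concrete, here is a tiny, purely hypothetical sketch (the strategy names and numbers are invented for illustration, not anyone's actual model): treat profit as the objective, and laws as penalty terms that push the optimum away from strategies society has outlawed.

    # Hypothetical sketch of "laws as constraints on a corporation's objective function".
    # Strategies and numbers are made up purely for illustration.
    strategies = ["dump_waste", "treat_waste", "cut_corners"]

    raw_profit    = {"dump_waste": 120, "treat_waste": 90, "cut_corners": 110}
    legal_penalty = {"dump_waste": 200, "treat_waste": 0,  "cut_corners": 50}  # fines / liability

    def constrained_objective(s):
        # What the corporation actually optimises once laws are in place.
        return raw_profit[s] - legal_penalty[s]

    print(max(strategies, key=lambda s: raw_profit[s]))   # unconstrained optimum: dump_waste
    print(max(strategies, key=constrained_objective))     # with legal constraints: treat_waste

The point of the sketch is only that the law does not change what is being optimised; it changes which strategies are worth optimising toward.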

Our legal systems tend to be reactive, and some countries have sclerotic systems, but the suggested alternatives I have heard[1] are fairly grim.

It is fine to complain about corporate greed (the simple result of our economic system of incentives). I would like to know your suggested alternative, since hopefully that shows you have thought through some of the implications of why our systems are just as they currently are (Chesterton’s fence), plus a suggested alternative allows us all to chime in with hopefully intelligent discourse - perhaps gratifying our intellectual curiosity.

[1] Edit: metaphor #0: imagine our systems as a massively complex codebase and the person suggesting the fix is a plumber that wants to delete all the @‘s because they look pregnant. That is about the level of most public economic discourse. Few people put the effort in to understand the fundamental science of complex systems - even the “simple” fundamental topics of game theory, optimisation, evolutionary stable strategies. Not saying I know much, but I do attempt to understand the underlying reasons for our systems, since I believe changing them can easily cause deadly side effects.


This is all correct, and the standard capitalist party line. Where it goes wrong is in conflating Money and Optimization. Money is absolutely the complete and only goal, and yes, corporations optimize to make more money. Regulations put guard rails on the optimization. It was only a few decades ago that rivers were catching fire because it was cheaper to just dump waste. There will always be some mid-level manager who needs to hit a budget and will cut corners, to dump waste or cut baby formula with poison, or skip cleaning cycles and kill a bunch of kids with tainted peanut butter (yes, that happened).

But you are correct, there really isn't an answer. Government is supposed to be the will of the people putting structure, through laws/regulation, on how they want to live in a society, to constrain the Corporation. Corporations will always maximize profit, and we as a society have chosen that the goal of Money is actually the most important thing to us. So I guess we get what we get.


> Money is absolutely the complete and only goal

If that were the case it would be easy to optimize. Just divert all resources to print more money.


This used to happen. In the 1920s, companies could just print more shares and sell them, with no notification to anybody that they had diluted them. Until laws were created to stop it.


"ah, the old lets play at being a stickler on vocabulary to divert attention from the point"[1]

Company shares are not money.

[1] https://news.ycombinator.com/item?id=35846017


So on one hand someone argues money is not currency, and then turns around and says shares aren't money, but they are currency. They can be sold for money, right? It seems like splitting hairs to obfuscate the point that humans will commit fraud and destroy the world in order to optimize to make money. Just throwing up technicalities that 'shares' aren't money isn't changing the fact that many companies have their one and only goal to increase share price, which can be converted to money.


> So on one hand someone argues money is not currency

That's not my argument, and also irrelevant to this post.

> says shares aren't money, but they are currency

They're definitely not currency, either.

> They can be sold for money, right?

That's an asset, not a currency. Those are two very different things.

> It seems like splitting hairs to obfuscate the point that humans will commit fraud and destroy the world in order to optimize to make money.

You were claiming that "companies used to print their own shares = print their own money" in support of your argument "humans will commit fraud and destroy the world in order to optimize to make money". That claim is false, so it doesn't support your argument, and your "point" is not a point because you've provided zero evidence for it.

> isn't changing the fact that many companies have their one and only goal to increase share price

What fact? What number of companies can you point to that factually have their "one and only goal to increase share price"?

I can say for sure that I've never seen a company that doesn't at least have two goals, and your statement is completely irrelevant for privately held companies.

You seem pretty determined to push your worldview that "companies are evil" without much thought as to what that even means, while producing blatantly false claims like "we as a society have chosen that the goal of Money is actually the most important thing to us" (if you think that, you need to spend more time with real people and less on the internet, because the vast majority of real people do not believe this).

Go read the Gulag Archipelago and tell me how a system without companies or "capitalism" works.


It used to happen with actual currency, too, before the government took and enforced a monopoly on printing it.

https://en.m.wikipedia.org/wiki/Wildcat_banking


That would be fraud against investors, given that investors own the company in a shared manner. If some investors approve printing new shares, all investors should be notified. But there are no laws setting how many shares a company can print.


Yeah, this used to happen before there were laws. Laws are needed or humans will commit fraud.


You’re sort of reinforcing the point. Only laws prevent companies from running printing presses to print money.


Let me introduce you to free banking (https://en.wikipedia.org/wiki/Free_banking), which made possible some of the most stable financial systems in history.


Money is not currency


ah, the old let's play at being a stickler on vocabulary to divert attention from the point. so let's grant the point that we could be using sea shells for currency, and that printed money is a 'theoretical stand-in for something like trust, or a promise, or a million other things that theoreticians can dream up'. It doesn't change any argument at all.


To complete my thought: yes, Money is used as an optimization function, it's just that we have chosen Money as the Goal of our Money Optimization function. We aren't trying to Optimize 'resources' as believed; that is just a byproduct that sometimes occurs, but not necessarily.


That seems backwards. There is an optimisation system of independent actors, and money is emergent from that. You could get rid of money, but you just end up with another measure.

> we as a society have chosen that the goal of Money is actually the most important thing to us

I disagree. We enact laws as constraints because our society says that many other things are more important than money. Often legal constraints cost corporations money.

Here are a few solutions I have heard proposed:

1: stop progress. Opinion: infeasible.

2: revert progress back to a point in the past. Opinion: infeasible.

3: kill a large population. Opinion: evil and probably self-destructive.

4: revolution - completely replace our systems with different systems. Opinion: seen this option fail plenty and hard to find modern examples of success. Getting rid of money would definitely be wholesale revolution.

5: progress - hope that through gradual improvements we can fix our mistakes and change our systems to achieve better outcomes and (on topic) hopefully avoid catastrophic failures. Opinion: this is the default action of our current systems.

6: political change - modify political systems to make them effective. Opinion: seen failures in other countries, but in New Zealand and we have had some so-far successful political reforms. I would like the US to change its voting system (maybe STV) because the current bipartisan system seems to be preventing necessary legislation - we all need better checks and balances against the excesses of capitalism. I don’t even get a vote in the USA, so my options to effect change in the USA are more limited. In New Zealand we have an MMP voting system: that helped to somewhat fix the bipartisan problem, but unfortunately MMP gave us unelected (list) politicians which is arse. The biggest strength of democracy is voting those we don’t like out (every powerful leader or group wants to stay in power).

7: world war - one group vying for power to enlighten the other group. Opinion: if it happens I hope me and those I love are okay, but I would expect us all to be fucked badly even in the comparatively safe and out-of-the-way New Zealand.


> Corporations are [intelligent agents non-aligned with human wellbeing], even without the assistance of AI.

Just to put a fine point on it...


And it's going almost unchallenged because so many of those who like to point out that not all is rosy in capitalism are blinded by their focus on the robber-baron model of capitalism turning sour.

But the destructively greedy corporation is completely orthogonal to that. It could even be completely held by working-class retirement funds and the like while still being the most ruthless implementation of the soulless money-maximiser algorithm, running on its staff, not on chips. All it takes is a modest number of ownership indirections and everything is possible.


> Corporations are soulless money maximizers

This seems stated as fact. That's common. I believe it is actually a statement of blind faith. I suspect we can at least agree that it is a simplification of underlying reality.

Financial solvency is eventually a survival precondition. However, survival is necessary but not sufficient for flourishing.


Many corporations choose corporate survival over the survival of their workers and customers.

Humans shouldn't be OK with that.


So far as I can tell, most aren't. I think you're right that we get a better as well as more productive and profitable world if no humans are okay with that.



