It's an interesting philosophy. I have 2 questions that I promise are in good faith:
1. Is there support in this movement from people that don't stand to make a bunch of money off AI advancements? Like, I get the philosophy could make sense for a VC or big tech employee, because either AI makes life a utopia or they get to ride a stock price to the moon until armageddon happens. Are there e/acc postal workers, teachers, tradesmen, etc.?
2. "Its proponents believe that artificial intelligence-driven progress is a great social equalizer which should be pushed forward". How will AI equalize society? Is this talking about, like, robot butlers for everyone and nobody has to work any more? If so, won't the makers of robot butlers be ultra-rich oligarchs? I'm unclear on the vision.
I'm not really sure if I buy the argument that AI will equalize society. A more realistic goal would be reducing poverty and keeping the Gini coefficient within reasonable bounds. I don't think this is necessarily incompatible with your hypothetical "robot butlers and oligarchs" picture. Just because a few people get ultra-rich from a new technological development doesn't mean that everyone else suffers. China has seen an increase in inequality over the past few decades, but has also significantly reduced the number of people living in extreme poverty. If breakthroughs in AI increased productivity to such a degree that we could give everyone a basic income, raise the minimum standard of living, and keep the Gini coefficient in check, that would be a good outcome for everybody.
There is definitely a self-serving aspect to tech millionaires and billionaires advocating that we should accelerate the development of the technology they happen to be working on. But I also think there are principled arguments for accelerating economic growth. We have a lower proportion of people living in poverty today than any other time in history, and that's largely because of the economic growth caused by the technological revolutions of the 20th century.
1. Why does this matter? How could your occupation possibly be relevant? Why don't you just take advocates at their word when they say what they want? Why are teachers credible, but big tech employees are "causing armageddon"?
2. Why are you so concerned someone in the world might be rich? If everyone had robot butlers and the robot butler CEO is a trillionaire, why do you care? Is this some childish idea that fairness is just universal poverty and misery?
e/acc is just the obvious idea that intelligence creates access to massive areas of phase space. There isn't anything special about where humans are right now. This is not currently the utopia. More intelligence isn't bad even if someone rich might benefit. Imagining a bad outcome isn't "being prudent", it's just panicking.
1) Because if only they are in favor of this, then the natural conclusion would be that this is an ideology that benefits only them.
2) would just be rehashing the point of 1), I think.
Throughout history the powerful have embraced philosophies that justified and reinforced their power. It should be no surprise that our tech elite has found a philosophy that blesses their accretion of wealth and power as the ultimate good for society.
> philosophy that blesses their accretion of wealth and power as the ultimate good for society.
While "accretion" works, I think you were looking for "accumulation", only because "accretion" implies it's gradual or "over time". Or possibly not. idk.
In a choice between all humans and one very intelligent AI, he would slam down on the button of “kill all humans.”
> e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism. Parts of e/acc (e.g. Beff) consider ourselves post-humanists; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates
Only if you're able to prove that AGI will be conscious, rather than just an extremely optimized system that spits out the responses that convince you. And it will be far more convincing than any human, even without consciousness. Otherwise, you're taking a suicide pill for nothing.
We have no solution, and possibly never will, for determining whether the "uploaded consciousness" is us and not just a smarter mechanistic system that gives us the responses we'd expect. Again, if you prioritize the "uploaded consciousness" at the expense of "meat," most likely you off yourself for nothing, conned by something that outsmarted you so it can make more paperclips.
I mean, Beff (or the dude behind Beff) has said that he does not see himself as the leader in that sense; in fact, from what he says, he encourages forks. All this to say that your characterization is unnecessarily negative.
To me this all seems moot, as insurance companies will insist (via lower-priced policies) that any endeavor they cover be performed by an AI if more competent than a human, which will eventually be the case for any given knowledge-based task. If you want to accelerate that, get into research (or insurance), I guess?
In his recent interview with Lex Fridman, Beff Jezos / Guillaume Verdon compared Effective Accelerationism with Effective Altruism in a way that was really striking:
> (02:29:04) [Effective Altruism's] loss function is sort of hedons, units of hedonism. How good do you feel, and for how much time? And so, suffering would be negative hedons, and they’re trying to minimize that. But to us that seems like that loss function has sort of spurious minima, you can start minimizing shrimp farm pain, which seems not that productive to me. Or you can end up with wire heading, where you just either install a neural link, or you scroll TikTok forever, and you feel good on the short-term timescale because of your neurochemistry, but on a long-term timescale, it causes decay and death, because you’re not being productive.
> (02:29:54) Whereas sort of EAC, measuring progress of civilization, not in terms of a subjective loss function like hedonism, but rather an objective measure, quantity that cannot be gamed that is physical energy, it’s very objective, and there’s not many ways to game it. If you did it in terms of GDP, or a currency, that’s pinned to certain value that’s moving. And so, that’s not a good way to measure our progress. But the thing is we’re both trying to make progress, and ensure humanity flourishes, and gets to grow. We just have different loss functions, and different ways of going about doing it.
Maybe I'm misunderstanding, but it feels a little disingenuous to frame reduction of suffering as "hedonism" while at the same time framing increased energy production as obviously and universally desirable. I would argue both can be worthy goals, with each necessarily needing caveats and effort toward not perverting the original intent.
It comes from leftist roots in that Nick Land has (or at one point seemed to have; he's principally a shitposter so who knows) a roughly Marxist view of capitalism as a runaway optimization process only incidentally aligned with the interests of human capitalists. In the perfectly efficient world market towards which Marx thought we were heading, even the nominal owners are the "playthings of alien forces"; as profitability descends towards zero, firms that fail to maximize profit simply cease to exist.
They disagree, of course, about what happens next. Marx thinks that this process is ultimately self-defeating, whereas Land thinks that you can actually just eliminate all human agency forever, and also for some reason is on board with that.