"Accelerationism" is an overdue corrective to years of gloom (thenewatlantis.com)
118 points by jseliger 3 months ago | 195 comments



> [...] but to widespread public concerns about the risks posed by the tech industry at large. Effective accelerationists worry that these concerns have become so entrenched that they threaten to extinguish the light of tech itself.

Those "years of gloom" (which aren't very many years -- has everyone forgotten when the tech industry was widely seen in optimistic terms?) have been brought on by the behavior of the tech industry itself, in large part because of the misapplication of the idea "move fast and break things" (which is, unless I'm misunderstanding, the very essence of e/acc that this article discusses).

Our industry has been breaking a lot of things that people don't want broken, and then it tends to shame people for being upset about that. The problem isn't some inherent fear of tech itself; it's a (supportable) fear of the tech industry and what else it may damage as time goes on.

If the industry wants to assuage these fears, the solution isn't to move even faster and break even more things, it's to start demonstrably acting in a way that doesn't threaten people and the things they hold dear.


I agree mostly, though I think the "break things" bit got twisted and misunderstood.

We were supposed to break: limits, barriers, status quos, ossified ideas... Instead we broke: treasured social norms, privacy, mutual respect, and dignity. There's a difference between benevolent innovation and reckless iconoclasm. I think it started the day Peter Thiel gave money to Mark Zuckerberg.


Picture of two little identical castles, towns, and armies, caption:

Their barbarous "barriers", "status quo", "ossified ideas"

vs.

Our blessed "privacy", "treasured social norms", "dignity"


The alternative to describing the meme here is to call it by name: a Russell conjugation.


Exactly. Words that seem different but mean whatever you want them to mean, including the exact opposite: tools for peace <--> weapons of mass destruction, etc.



Ah the good old "there's literally no difference between good things and bad things" argument. Compelling.


Amorality. The refuge of the bewildered.


FWIW, I understood it much more narrowly: as long as you can add useful features really quickly, it's fine if your website crashes every once in a while.


Yep. It came from Facebook, and it was changed to favor stability while moving fast almost a decade ago.

https://en.wikipedia.org/wiki/Meta_Platforms#History

> "On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from 'Move fast and break things' to 'Move fast with stable infrastructure'.[40][41] The earlier motto had been described as Zuckerberg's 'prime directive to his developers and team' in a 2009 interview in Business Insider, in which he also said, 'Unless you are breaking stuff, you are not moving fast enough.'[42]"


Last night I changed some solid-js UI code to replace mutating the game object held in UI state with swapping in mutated clones of it (cloning is efficient and shares most data; those optimizations were made for AI efficiency long ago).

Of course, with stale game references still around, I soon got reports of broken things: targeting was broken, PvP was broken, fade-out animations were broken.

A few hours later these issues were resolved. The players are used to these things happening sometimes. It's fine since the stakes are low; it's just a game, after all. And since it's free, the active playerbase understands that they're QA.
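
Concretely, the pattern is the difference between these two sketches. This is minimal TypeScript with hypothetical names, not the actual game's code:

    interface Unit { id: number; hp: number }
    interface Game { units: Unit[]; turn: number }

    // Mutating in place: fast, but any UI component still holding the old
    // `game` reference is now looking at stale data.
    function damageInPlace(game: Game, id: number, dmg: number): void {
      const unit = game.units.find(u => u.id === id);
      if (unit) unit.hp -= dmg;
    }

    // Clone with structural sharing: only the changed unit and the `units`
    // array are new objects; every untouched unit is reused by reference,
    // and reactive UI state gets a fresh top-level object to diff against.
    function damageCloned(game: Game, id: number, dmg: number): Game {
      return {
        ...game,
        units: game.units.map(u => (u.id === id ? { ...u, hp: u.hp - dmg } : u)),
      };
    }

The stale-reference bugs come from code that kept a reference to the old game object and so never sees the new clone.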


And, crucially, you'd generally be around to help fix the website.


I always thought "move fast and break things", as used at FB, was meant to give the ambitious, talented, fresh crop of Ivy League grads the confidence to move forward despite the poor decisions that come with inexperience.


You’re closer to the truth, but with a bit of a harsh bias. It was simply permission to make mistakes. Sometimes you get it wrong, and it’s better to get more done and risk mistakes instead of moving cautiously.

Facebook was famously unit-test sparse, for example.


Or rather, fail fast before we blow all our money only to find out our new product doesn't work.


Only the best and brightest inexperienced developers :)


No? You're projecting what you want it to mean. The "break things" part means: don't be afraid to break functionality/features/infrastructure in the process of improving it (new features, new scaling improvements, etc.). That's why it was renamed "Move fast with stable infrastructure".

> The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough."

It's about growth at all costs; once Facebook got big enough, they had to balance growth against other factors (plus, the things people were doing that caused breakages weren't actually helping growth).

https://en.wikipedia.org/wiki/Meta_Platforms#History


Mottos like that live their own life. Take Google's "don't be evil": people remember that, and when they see all the evil shit Google does now, of course they recall the motto and laugh at the irony. Whatever Sergey meant when he coined the phrase is irrelevant, IMO.


> You’re projecting what you want it to mean

Maybe true. But then, if it's just about development, it's a rather mundane old chestnut about reckless engineering versus good software engineering. Granted, that's a different discussion, and we can see the tide turning now in terms of regulation and mandated software quality.

Sure, the Post Office/Fujitsu scandal, Boeing, etc. show how bad software actually ruins lives, but for the most part the externality imposed by the reckless software engineer is measured in "hours of minor inconvenience".

That said... I wonder: if you did a ballpark calculation of how much harm lies behind the Google Graveyard [0], would the cost of what was broken outweigh the benefits of it ever having been made?

[0] https://killedbygoogle.com/


Engineering was literally taught to me at a well-respected engineering university as making an appropriate cost/reward trade-off and being careful in taking that risk. But the economics of the business were important too, since engineering was part of the competition to drive more efficiency into a system. In classical engineering there is more at risk, because you're dealing with people's lives, so you have to be more careful and add extra margins of error even if that's more expensive.

One person's recklessness is another person's calculated risk. The consequences of FB engineering mistakes are minimal, both in impact to customers and to FB's business. As FB scaled, the impact on individual people remained largely minimal (perhaps even beneficial), but the impact on their own business grew larger, and the same goes for their customers if their ads aren't getting eyeballs. So they shifted, as big companies do.

It's kind of the best case of thoughtful risk-taking: we're rolling out a new system, we don't know what could go wrong at scale, so we put in the monitoring we think we need. If there are problems, we'll catch them with our monitoring/alerting and roll back or fix. You see the outages, but not the 99% of changes that go in without anything going wrong, which lets the business resolve issues quickly and cheaply.

As for Boeing and Fujitsu, I'd say those are very different situations; they aren't an engineering problem, nor do they indicate a move-fast-and-break-things mentality. As with many things like that, the engineering mistakes are a small detail within the overall larger picture of corruption. Boeing wanted to escape having its plane classified as a new aircraft and met a perfect storm of skimping on hardware and corrupting the FAA through regulatory capture. I don't fully understand Boeing's role in the recent failures, since a subcontractor is involved, but my hunch is that they're nominally responsible for that subcontractor anyway. Same goes for Fujitsu: bad software combined with an overly aggressive prosecution mandate, and then cover-ups of the mistakes, all based on the assumption that the software was correct rather than that new software which had never run anywhere before might contain bugs. (I'm not really sure whether Fujitsu hid the bugs or politicians did, but certainly the Post Office officials hid the auditors' reports that found bugs in the software and continued with prosecutions anyway.)

Btw, in engineering classes, all the large-scale failures we were taught about involved some level of corruption or a chain of mistakes: a contractor not conforming to the engineering specs to save on costs (a valid optimization, but one that should be done extra carefully), overlooking some kind of physical modeling that wasn't yet considered industry standard, kickbacks, etc.


We probably had similar rigorous educations at that level. In SE we studied things like the '87 Wall St. crash versus Therac-25. The questions I remember were always around what "could or should" have been known, and crucially... when. Sometimes there's just no basis for making a "calculated risk" within a window.

The difference then, morally, is whether the harms are sudden and catastrophic or accumulating, ongoing, repairable and so on. And what action is taken.

There's a lot you say about FB that I cannot agree with. I think Zuckerberg as a person was and remains naive. To be fair, I don't think he ever could have foreseen or calculated the societal impact of social media. But as a company, I think FB understood exactly what was happening and had hired minds politically and sociologically smart enough to see the unfolding "catastrophe" (Roger McNamee's word), and they chose to cover it up and stay the course anyway.

That's the kind of recklessness I am talking about. It's not like Y2K or Mariner 1 or any of those cases where a very costly outcome could have been prevented by a single more thoughtful decision early in development.


I'm talking strictly about the day-to-day engineering of pushing code and accidentally breaking something, which is what "move fast and break things" is about and how it was understood by engineers within Facebook.

You've now raised a totally separate issue about the overall strategy and business development of the company, and there you'd be right: if a PE license were required to run an engineering company, Zuckerberg would have had his revoked, and any PEs complicit in the tuning for addictiveness should similarly be punished. But the lack of regulation of engineering projects that don't deal directly with human safety, and of how businesses are allowed to run, is a political problem.


I see we agree, and as far as day-to-day engineering goes, I'd probably care very little about whether a bug in Facebook stopped someone from seeing a friend's kitten pics.

But on the issue I'm really concerned about, do you think "tuning for addictiveness" on a scale of about 3 billion users goes beyond mere recklessness, and what do we do about this "political problem" that such enormous diffuse harms are somehow not considered matters of "human safety" in engineering circles?

Is it time we formalised some broader harms?


I think there are political movements to try to regulate social media. There’s lots of poorly regulated sub industries within the tech field (advertising is another one).


> Sure, the Post-Office/Fujitsu scandal, Boeing etc, show how bad software actually ruins lives, but for the most-part the externality imposed by the reckless software engineer is measured in "hours of minor inconvenience".

I've been deeply critical of the appalling behaviour of the Post Office and Fujitsu in the Horizon scandal, but there's a world of difference between this and the impact of Facebook in 2009. One had a foreseeable and foreseen impact on people's lives. The other was a social network competing with MySpace and looking for a way to monetise its popularity.


> there's a world of difference between this and the impact of Facebook in 2009.

You're absolutely right there,

Frances Haugen's leaked internal communications showed incontrovertibly that Facebook's own research had long known that teen girls on its platforms had increased suicidal thoughts and developed eating disorders. Facebook and Instagram exploited teens with manipulative algorithms designed to amplify their insecurities, and that was documented. Yet they consistently chose to maximise growth rather than implement safeguards, and to actively bury the truth that their product caused deaths [0]. Similarly, the Post Office had mountains of evidence that its software was ruining lives yet engaged in a protracted, active cover-up [1].

So, very similar.

But what's the "world of difference"?

> looking for a way to monetise its popularity.

That's a defence? You know what, that makes it worse. The Post Office were acting out of fear, whereas Facebook acted out of vanity and greed. The Post Office wanted to hide what had happened, whereas Facebook wanted to cloak ongoing misdeeds in order to continue. Simply despicable.

The way I see it, Facebook comes out looking much, much worse.

[0] https://www.npr.org/2021/10/05/1043377310/facebook-whistlebl...

[1] https://www.bbc.com/news/business-68079300


The NPR link is from 2021, not 2009. It links out to research from 2019, still not 2009. In 2009, Facebook was still branching out among university students.


IDK, I think a big part of the "years of gloom" was an official (but secret) NYTimes policy of only publishing negative stories about tech, as confirmed by Vox journalist Kelsey Piper. [1]

1: https://twitter.com/KelseyTuoc/status/1588231892792328192


I just wonder if that can be extended to every. single. news. organization.

They all do it. Doom gets more clicks than happy talk.


Here’s the real funny bit: the whole “doom gets more clicks” thing is itself a consequence of the ultra-competitive attention marketplace that tech created.

Sure it’s been “if it bleeds it leads” for a long time, but not until digital advertising has it been “if you’re not bleeding profusely 24/7 then you are going bankrupt.”


They wouldn't need a specific policy for tech if that were their general policy.


That depends on whether 'tech' said something like "screw the media, we're going to replace it with media 2.0". That could take you from the normal doom and gloom (which can be dispelled with the right ad buys) to being a hated enemy that must be destroyed at all costs.


IMO the Verge actually strikes a pretty good balance of writing about tech like the old days of Wired, while still being willing to call out bad behavior when they see it.


I mean, with this kind of attitude we're acting like tech is just a net good. Monopolies and surveillance hurt people. Facebook caused a genocide in Myanmar. They wouldn't publish anything if it were all good.


I'm confused by their idea that insider criticism and self-reflection is a sign of collapse rather than maturity in this case. If professional architects find value in working together to develop and adhere to standards before building a city (which I'd consider "technology"), that is not a warning sign that architecture will cease to exist.

Gene drives, nuclear weapons, organisms from space reaching Earth: self-reflection on potential risks is not a sign of failure unless your only goal is to move. We only get to make some of these choices, as an entire global civilization, once.

AI is not exactly in that category but it's sure not a sign of failure that the people making it are actually considering the results before just making decisions with world-scale impacts. It slows things down but that is important when decisions only get to be made once and cannot be reversed. That was less the case with earlier tech, so I don't understand the surprise at the difference in scrutiny over time.


> self-reflection on potential risks is not a sign of failure unless your only goal is *to move*.

(my emph)

The root of this is a deflated concept of "progress". Progress is a vector. It has a magnitude and a direction. And it has a context that entirely defines its value. You would not be happy if the doctor told you you have a progressive illness. But "progress" gets used carelessly as a bare noun.


Ultimately it's the purse-string holders who want to move fast and break things. Investors are the people who would rather shove ten figures into undercutting taxi markets everywhere to build a monopoly. Imagine if instead they'd put that into cancer treatments and diagnostics, or novel forms of energy generation. Move fast and break things is shit compared to building new things at the centre of human need and at the edge of human understanding.


Uber is a particularly silly example due to the sheer volume of money they burned, but it did prove that the taxi medallion monopolies were suppressing market opportunities. I don't believe that the founders or VCs really understood the available market potential, but it was 100% a farce that "ride sharing" was meant for "sharing".

Taxis (incl. Uber) are bigger now than they have ever been. Add in Uber Eats and co., and there's so much new demand for similar services. Even Amazon Flex has shown that car-based services have unmet demand and utility.

The taxi industry should have been broken down. It was good for humans to have better access to the market, even if it wasn't cancer treatment. There's tons of money going into cancer research. New energy opportunities are already a huge source of investment; let's not pretend the existence of Uber foreclosed the opportunity to cure cancer.


A mafia with chips, extracting maximum value from every vital industry, comes to mind...

Tech has shown its cards and people hate it for a reason.


Every time we try to do something, somebody is in the way. We can have reasonable conversations with these people, but when their arguments devolve into screaming, yelling about Adam and Eve not Adam and Steve, or throwing food at paintings, you have to ask yourself how long before you push the (metaphorical) pedal to the metal and keep going.

We can't deal on an adult basis with children, no matter their age. How long do we have to let them stop us?


There is no possible path for advancement that doesn't threaten people and the things they hold dear. It never worked that way in the past, and it won't now.


Yes, but there's a critical difference: now the tech industry breaks many things at an unprecedented pace, and largely doesn't offer a reasonable replacement for what it has broken.

People can only handle a limited amount of loss within a given period of time before they start pushing back hard against further loss and consider those causing them harm to be forces of evil.

There's also another factor that the tech industry is largely blind to: tech people tend to think that "we know best" and that pushing our ideas on the general public against their will is a Good Thing. But it's not a Good Thing, it's a Bad Thing.

Another thing we need to be doing is allying with the general public rather than dictating to them.


Who is pushing anything on the public? The tech industry wouldn't exist in the form that it does now except that it gives people something they want, not the other way around.

Disruption from tech advancement is caused by tech changes displacing existing industries, and it hurts the people currently making money from those industries. But to be against that disruption, you would have to believe that those people have some sort of right to keep making that money and doing the things that earn them those profits even when the public wants the more efficient tech. So really it's the anti-tech people who are pushing things on the public.

E.g. people often complain about Amazon displacing small retailers, but really it's just that given the choice, most people choose Amazon.


> except that it gives people something they want, not the other way around.

That used to be true. Now, though, a very common thing I've noticed with people is that they use tech not because they want to or because it solves a problem for them, but because they are disadvantaged if they don't.

It's an important difference. If people willingly choose to use a thing, then they'll be inclined to think about it positively. If they use a thing because they feel they have no choice, then that thing is more likely to be viewed as adversarial, because it is.

I think that's largely where the tech industry has arrived. Further, the tech industry shows little to no empathy for those whose lives are worse because of what it does.


People may feel that way, and I'm sure in some cases they really mean it. But the reason they always give for why they have to use it is some form of "because every one else does." And it had to get to that point because people wanted it in the first place. Otherwise it just wouldn't have sold in the market when it came out.


The costs of a thing are usually not apparent when it is new. All that's apparent is the benefits. The costs rear their ugly head later.

So yes, often people jump onto a hot new thing because all they can see are the benefits. The "buyer's remorse" doesn't come in until later, when the downsides become apparent. At that point, it's often too late and people are trapped. By design.

The tech industry counts on this effect, and doing that is one of the bad behaviors that encourages people to distrust the industry and become angry at it.

All I'm saying is that people are growing increasingly distrustful of, and angry at, our industry for really solid, rational reasons.

The most charitable interpretation I can think of for why we allow this to be is that the most visible part of our industry has become so insular and divorced from society in general that they can't even understand the anger or why it's rational. The least charitable interpretation I can think of is that they know perfectly well why people are getting mad and just don't care, because not caring increases short to medium term profitability.


You're very close to describing the "enshittification" process.

Things start out good quality, high effort. Useful.

Then once they achieve a certain amount of inertia, they start cutting stuff out: adding new tiers of payment plans, injecting advertising into existing plans, lowering caps, whatever else they can get away with to cut costs but keep your money.

But people have invested in them by that point, invested enough that changing off is painful and potentially expensive. They want the barrier to leaving to be high, even as they give you less reason to stay using their product.


Not sure you even need this product cycle to explain the parent poster's observation.

What I see here is your basic Tragedy of the Commons at multiple levels. Consumers adopt the new thing to disadvantage their peers, who then have to do the same, to the detriment of all. And the vendors are doing the same thing.

This whole thread reminds me how much "tech" as a meme has come to conflate technology and business. People don't even seem to recognize that "move fast and break things" expressed a business philosophy, not some fundamental truth of technology, R&D, or science.


You're ignoring the fact that "it" (the tech in question) can change, and that there's a large motivation to monetize/cut dev spending/etc. once users are entrapped via network effects.

Edit: Sibling "enshittification" comment conveyed it better.


> Who is pushing anything on the public?

We are. Constantly. We are, intellectually, in a tiny minority who find these things delightful and empowering. We assume they must also be good for everyone else. I was building electronics as a five-year-old when the other kids were playing outside, and it thrilled me so much I assumed everyone else felt the same. They didn't and they don't. Maybe us nerds "took over the world", but as an adult I find almost everybody else (those we call normies) feels that digital technology is:

- something that happens to them

- is foisted upon them and they have no choice

- something they "have to trust"

> it gives people something they want

Have you considered that you really have no idea "what people want"? Neither do I, but I do know that and feel comfortable saying it. And I have done research, literally going onto the streets and interviewing lots of people to ask them. Most want what they think their friends want, or the thing they already have with some new features. We tell them and they buy.

> to be against that disruption you would have to believe that those people have some sort of right to make that money and continue doing the things...

In a funny way they kind of do have that right. UDHR includes several aspects that can be taken as a "right to stability".

> it's the anti tech people

I don't encounter any "anti-tech people". Ever. I meet plenty of folks who are anti-surveillance, or anti-authoritarian, or anti-asshole - against people forcing their technology on them - but I've never met anyone who thinks it's simply the fault of technology itself. You may be living in a bit of a bubble?


You are mistaken if you believe your knowledge of tech gives you power over it.


> Who is pushing anything on the public?

Software pushes updates on us that nobody asked for at the exact moment we need them least; everything from the operating system to websites shoves ads in our faces, along with features and "modern UI" that remove important information and options because designers decided a UI can't be confusing if it barely exists. Privacy violations are pushed via countless license agreements and the daily "We value your privacy" popup that explains in what ways that sentence is a lie.

> it gives people something they want, not the other way around

I have never heard someone ask for slow and broken software, ads, tracking, and other shady practices. They just have to live with it, because what are they gonna do, not communicate with friends or file their taxes?


> "When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it."


Tech is like a fission reactor: powerful, elegant, delivering value through leverage, but requires strong controls and protections (moderators, containment) for humans so it doesn’t ruin us all.

People worry about AI paperclip maximizing, but Tech is already that in some ways (find or build moats, blitz scaling, no concerns for the harm incurred). It’s just fuzzy cohorts of tech workers and management doing the paperclip maximizing, for comp and shareholder value respectively. Not much different than AI reward functions.


"Advancement" implies improvement. Just because things are changing does not mean they are improving.


Yeah, it's a misguided and naive way of thinking. Deciding whether a technological development is good (and for whom, and to what extent, and with what trade-offs, and on what time horizons) is a really difficult task. So some folks will replace it with a much easier question: "Is this new?"

https://en.wikipedia.org/wiki/Attribute_substitution


Leaded gasoline is a good example of an advancement where the naysayers were right.


One of the very very very few. Asbestos is another one. Would you be able to provide another example?


How about glyphosate (roundup)?

Also, plastic is looking far worse now than it did twenty years ago, though I think the net effect is quite complicated and therefore ambiguous at the moment.

There is an ongoing discussion about "forever chemicals" and, again, it's not unambiguous, but the balance seems to be tilting toward them being a bad idea.

I’m not personally seeing much of a dividend from nuclear weapons given how difficult nuclear power turns out to be under capitalism in practice. But I suppose it gets a pass because otherwise my father might have died in a land war.

It remains to be seen what the net effect of oil will be, but I'll happily speculate that on a sufficiently long timeline that one turns out bad too.

I'm still pretty mad about the "food pyramid", but I can't offer any particular study that tries to quantify its role in the decline of American health outcomes. Certainly modern food processing techniques look like a slow-moving disaster, but it's really hard to sort out cause and effect.

Social media was neat for a few years but I would consider it a net negative.

I guess you’re right.


CFCs, PFAS, BPA, phthalates, thalidomide, leeches, bloodletting, lobotomies, pretty much the entire history of mental health treatment, hydrogen airships, vermilion pigment, mercury felt stabilizer, radium water... If I were feeling particularly spicy I might even suggest things like weapons research, communism, or suburbia.


How bad actually are hydrogen airships? At this point trying to make tiltrotors any safer is not working out, so airships could be better. Though, if you have a real airport, passenger jets are unbeatably safe.


Fair point! Probably should have left them off. Despite the rather famous failing, they've had a lot of utility, and the technology is still in use today. (I think there's at least one YCombinator startup using hydrogen airships.)


"Data is the new oil"


Kinda the point, no? If history shows progress is disruptive, then accelerationism seems likely to accelerate disruptions. Many people can connect these dots, and not everyone sees this as positive.


If what people hold dear is controlling the way people live hundreds or thousands of miles away, then you're right.

But that is a deliberately obtuse definition designed to justify any behavior.

If you let people continue their traditions, don't deliberately bankrupt them, and allow them to make their own local laws, that's enough for most.


"Or, perhaps, wanting to be regulated is a subconscious way for tech to reassure itself about its central importance in the world, which distracts from an otherwise uneasy lull in the industry."

There is that. There hasn't been a must-have consumer electronics thing since the smartphone. 2019 was supposed to be the year of VR. Fail. 2023 was supposed to be the year of the metaverse. Fail. Internet of Things turned out to be a dud. Self-driving cars are still struggling. All those things actually work, just not well enough for wide deployment.

LLM-based AI has achieved automated blithering. It may be wrong, but it sounds convincing. We are now forced to realize that much human activity is no more than automated blithering. This is a big shakeup for society, especially the chattering classes.


Holy crap, I'm realizing it's been 4 years since Half Life Alyx. I really wish it had been the first of many.


LLMs make it a little harder for people to avoid recognizing BS jobs for what they are.


LLMs and generative AI are on par with the spreadsheet for impact on office work. It's just that most people who will be obsolete are in denial, again.


No, they are not. I was there in the 1980s and 1990s when spreadsheets hit. After the word processor, the spreadsheet became ubiquitous. Along with email, they were at the center of the personal computing revolution.

These days AIs/LLMs are still rarefied air. People do use AIs built into SaaS products (auto-summaries, etc.), but they're still the minority. Some others are becoming facile prompt jockeys. A rarer few experts will run their own models on local laptops and servers.

But it is intrinsically complex technology that most users don't really "grok" in terms of how it actually works. It is fundamentally very, very, very different than the spreadsheet. And its adoption will have natural limits and boundaries.


> But it is intrinsically complex technology that most users don't really "grok" in terms of how it actually works. It is fundamentally very, very, very different than the spreadsheet. And its adoption will have natural limits and boundaries.

I know people who do office jobs unrelated to tech who have slashed their workloads in half using LLMs.

What do you mean complex technology? You just type plain English into a prompt; it can't get less complex. Have you seen how complicated spreadsheets are?


Yes, he did mention "Some others are becoming facile prompt jockeys."


Are they? While I love LLMs, I don't find them extremely useful for much more than faster API documentation.


Or wasting time going through the crap that google throws up in its search results these days. I find it faster to just ask GPT when I forget some command argument. LLMs have basically replaced the web search engine for me in most day to day cases now.


How do you handle the lying/hallucination problem? Do you just run the command and hope?


The stuff I use it for is usually recalling or finding simple information rather than creating textual content: stuff like looking up some Linux command I've used before but can't recall the specific parameters/arguments for, or generating code to get me started on something. So I don't see the hallucination issue much. There have been cases where it pulled some outdated information tied to a specific library I was using, when I asked it to generate example code for that library. But with GPT-4, I don't see that often now.

Now, Google's Bard/Gemini, on the other hand, quite frequently makes stuff up. So for now I'm sticking to GPT-4 via a ChatGPT Plus subscription to augment my daily dev work.


> Stuff like looking up some linux command that I've used before, but can't recall specific parameters/arguments

So, to repeat the question:

Do you just run the command and hope? Or do you double-check using the manpage that it isn't going to do something drastic and unexpected?


I see what you mean. Yes, I do verify it, or I run it in an environment where a mistake isn't going to cripple an application or cause some crisis. But in most cases, once it points out the argument for the command or the command name, that usually jogs my memory enough to know that it's correct. Lately it's been mostly when creating Dockerfiles, and also stored procedure syntax. I'm not really good at keeping notes.


Anyone who still talks about hallucinations today hasn't used a paid service in the last 6 months.


I just had a paid open ai service tell me all about a command line argument that doesn't exist.

It isn't possible to do what I wanted with the proposed command, but the hallucination helped me to Google a method that worked.


What do you mean? Hallucinations are unavoidable; even humans produce them semi-regularly. Our memories are not nearly reliable enough to prevent it.

In my experience the only more or less reliable way to avoid hallucinations is to provide the right amount of quality information in the prompt and make sure the LLM uses that.
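
To make that concrete: "grounding" is mostly just string assembly. Here's a minimal TypeScript sketch, where callModel is a hypothetical stand-in for whatever LLM API you're actually using:

    // Hypothetical stand-in for a real LLM API call.
    declare function callModel(prompt: string): Promise<string>;

    // Pin the model to the supplied context instead of letting it
    // free-associate from its training data.
    function groundedPrompt(context: string, question: string): string {
      return [
        "Answer using ONLY the context below.",
        "If the context does not contain the answer, reply: I don't know.",
        "",
        "Context:",
        context,
        "",
        "Question: " + question,
      ].join("\n");
    }

    async function ask(context: string, question: string): Promise<string> {
      return callModel(groundedPrompt(context, question));
    }

It's not bulletproof, but the failure rate drops a lot when the answer only has to be copied out of the prompt rather than recalled from training.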


I've used them to:

- Make edits to a LaTeX file, which would have taken me at least an hour longer to do.

- Reverse-compile a PDF into LaTeX from a mere copy-paste.

- Translate a travel diary from French into English.

- Ask conceptual questions in difficult areas of mathematics. It's unreliable but it often has a "germ" of an idea. This is backed up by the Fields medalist Terence Tao.

- Helped me tutor someone by giving model solutions to homework and exam problems when I wasn't sure.

- Write a browser extension to do certain content blocking/censoring that none of the programs on my computer could do. I've never written a browser extension before, and this would have taken me a day longer.

- Give me feedback on emails I wrote.

- Helped me deal with a medical emergency.


Those are good ideas, thanks. I don't like its writing; I find it stilted and awkward. But it's good if you want something that's not going to dissatisfy anyone.

I've also used an LLM for some of the elements on that list, though it seems like I should use them more.


"Years of gloom"?

For my entire career in tech, anything that isn't the most extreme of optimistic viewpoints has been dismissed as unnecessary and destructive pessimism.

Tech can't stand "bad thoughts". We had a very necessary correction in this industry but people couldn't handle it and just completely lost their minds on AI.

I say this as someone who works in AI very close to the metal (i.e. not just calling an API all day). While these tools are very impressive, when I talk to people outside of my area it's like they've all taken obscene quantities of some new stimulant and can't even connect with reality anymore.

Even the AI "doomers" are, in my view, extreme optimists, because they don't see, or maybe just don't want to see, how much of the public discourse on this subject is smoke and mirrors to drive yet another tech bubble.

Personally I think this is just mass hysteria created by the increasing awareness of the fundamentally unsustainable nature of industrial society. Reading this article is like listening to some hallucinating maniac on the streets of SF scream at you about things that really aren't there.


The view from outside of tech is very much "We went from having no AI to having ChatGPT overnight! Imagine how soon the future will be here, we could have a breakthrough for AI at breakfast tomorrow and talking companion robots the day after that!"

And the doomers just add on "and the week after that we'll all be dead!"

I'm personally not very impressed by the AI tools I've used. Sure they're a neat toy. They do seem to keep getting better. Maybe they'll be good enough one day for me to actually want to use them and feel it's a benefit.


> The view from outside of tech is very much "We went from having no AI to having ChatGPT overnight!"

In their defense, for most people that is what happened. Sure, they've been using "AI" tools of varying degree for a long time (spellcheck, language translation), but now they have totally free access to something that behaves sorta like the AIs they've seen on the popular Star Trek shows -- where, incidentally, the AIs were also imperfect.


Journalists love their overnight success stories. And it's most apparent when they are talking about some musician who worked their ass off for decades before being discovered. As if they sprung into existence as 30 year-olds, worked hard for a couple years, then became millionaires.


> I'm personally not very impressed by the AI tools I've used. Sure they're a neat toy. They do seem to keep getting better. Maybe they'll be good enough one day for me to actually want to use them and feel it's a benefit.

Unless you explain this statement, most people here are likely to dismiss everything you have to say on the topic of AI.


We all know exactly the kinds of bad outputs we've seen from AI.

I just ran a query asking ChatGPT to recommend databases for a specific use case. Out of the seven databases it recommended, only one was actually appropriate; one suggestion was marginally acceptable; and three of the recommendations weren't even databases.

I then asked it to provide a list of the most important battles in North Africa prior to the entry of the United States into World War 2.

It gave me five answers, three of which occurred after the entry of the United States into World War 2.

AIs provide extremely plausible answers. Sometimes they will actually generate correct, useful output, but you cannot yet rely on them for correctness.


I'd like to see a side by side comparison with a random human on the street. Maybe with a sample size of 100 or so. How well do you think the humans would do vs whatever outdated model you were playing with here?

There is clearly significant value to this tech and I'm still dumbfounded how strongly some people try to deny it.


Anyone reckon there's a chance that GPT hallucinates because it was trained on online material (e.g. Reddit and other forums)? I'd have to say that on topics I know, GPT is about as hit-or-miss as a random internet comment, especially in that they'll both give a confidently stated answer whether the answer is factual or not.

Is it possible GPT just thinks[0] that any answer stated confidently is preferable over not giving an answer?

Promise I'm not just being snarky, legitimate wonder!

[0]: I know it doesn't actually think, you know what I mean


You're judging a fish by its ability to climb a tree. Being able to recall facts is a nice side effect for LLMs, not their bread and butter. If you need facts, plug some RAG into it.
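
For anyone who hasn't met the term: RAG, retrieval-augmented generation, just means fetching relevant documents first and pasting them into the prompt, so the model recalls your sources instead of its training data. A minimal sketch, with hypothetical search and callModel stubs standing in for your document store and LLM API:

    // Hypothetical stubs: a document search (keyword or vector) and an LLM call.
    declare function search(query: string, topK: number): Promise<string[]>;
    declare function callModel(prompt: string): Promise<string>;

    // Retrieval-augmented generation at its simplest:
    // retrieve, stuff into the prompt, generate.
    async function ragAnswer(question: string): Promise<string> {
      const docs = await search(question, 3);          // 1. retrieve
      const prompt =
        "Using only these sources:\n\n" +
        docs.join("\n---\n") +
        "\n\nAnswer the question: " + question;        // 2. augment
      return callModel(prompt);                        // 3. generate
    }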

Also, what model did you use?


Stop using GPT-3.5 and complaining it's no good. We all know that. Unless you're using GPT-4, your anecdotes are out of date and irrelevant.

https://chat.openai.com/share/43ebf64e-34ae-402b-a3ce-0787e2...


This is really why I am tired of this whole format of talking to people.

I wish someone would build a site where we could share ChatGPT-4 outputs like the one above.

I can't remember the last time I actually learned something from a human on a message board like this, compared to ChatGPT-4.

Talking to people like this is just a 100% waste of time.


Well, learning and conversing are different things done for different reasons.

I'll agree that GPT-4 has completely replaced Google, Stack Overflow, etc. for me.

The only time I use Google now is for janky, more human-specific situations.

For example, today I had to transfer DC roles from a Windows 2012 R2 server to a new 2022 server. They only have one DC, and the old DC has a basically unused CA service set up.

ChatGPT would have had me "fix" everything first, whereas I found a forum post describing a situation almost identical to mine that helped me cowboy it rather than being overly meticulous.

There is still value to human experience. For now.


Unless you explain this statement, most people here are likely to dismiss everything you have to say on the topic of AI.


FWIW, I agree with them. It has its use cases, but "hallucinations" or whatever you want to call them are a huge dealbreaker for everything I'd want to use AI for.


Agreed, but in my opinion the problem is more fundamental than just hallucinations; it involves plain inaccuracy and an inability to reason.

Try asking ChatGPT or Gemini about something complex that you know all about. You'll likely notice some inaccuracies, or it treating one related subject as more important than another. That's not even scratching the surface of the weird things they do in the name of "safety", like refusing to do work, paying lip service to heterodox opinions, or injecting hidden race/gender prompts into submodels.

It's good at generalist information retrieval to a certain degree, but it's basically like an overconfident college sophomore majoring in all subjects. Progressing past that point requires a completely different underlying approach to AI, because you can't just model text anymore to reason about new and unknown subjects. It's not something we can tweak and iterate our way into in the near term.

This same story has recurred after every single ML advance, from DL, to CNN + RNN/LSTM, to transformers.


> Agreed, but in my opinion the problem is more fundamental than just hallucinations; it involves plain inaccuracy and an inability to reason.

> Try asking ChatGPT or Gemini about something complex that you know all about. You'll likely notice some inaccuracies, or it treating one related subject as more important than another. That's not even scratching the surface of the weird things they do in the name of "safety", like refusing to do work, paying lip service to heterodox opinions, or injecting hidden race/gender prompts into submodels.

For sure.

On the other hand, I recently started to turn these AI hallucinations into a feature: it's like asking a person who is somewhat smart, but high on some hallucinogenic drug, for their opinion on a topic of your interest. Depending on the topic and your own intellectual openness, the result can be... interesting and inspiring.


I’ve asked Bing about sexual health questions and been told it’s not okay to talk about that.

I asked it a question about Christianity and it stated things with the tone and certainty of a preacher.

Just gross. And worse to put something like that in the hands of billionaires.

I am somewhat of a doomer, not because I think it will be the Terminator; more like "AI theocracy, here we go."


I generally agree with the GP, and am curious what use cases you've found where AI meaningfully improves your daily work.

I've found two so far: the review summaries on Google Play are generally quite accurate and much easier than scrolling through dozens of reviews, and the automatic meeting notes from Google Meet are great and mean that I don't have to take notes in a meeting anymore.

It did okay at finding and tabulating a list of local government websites, but had enough of an error rate (~10%) that I would've had to go through the whole list to verify its factualness, which defeats a lot of the time savings of using ChatGPT.

Beyond that: I tried ChatGPT vs. Google Search when I had what turned out to be appendicitis, asking about symptoms, and eventually the fifth or so Google result convinced me to go in. If I had followed ChatGPT's "diagnosis", I would be dead.

I've tried to have ChatGPT write code for me; it works for toy examples, but anything halfway complicated won't compile half the time, and it's very far from having maintainable structure or optimal performance. Basically, it works well if your idea of coding is copying StackOverflow posts, but that was never how I coded.

I tried getting ChatGPT to write some newspaper articles for me; it created cogent text that didn't say anything. With better prompting, telling it to incorporate some specific factual data, it did this well, but looking up the factual data is most of the task in the first place, and its accuracy wasn't high enough to automate the task with confidence.

Bard was utter crap at math. ChatGPT is better, but Wolfram Alpha or just a Google Search is better still.

In general, I've found LLMs to be very effective at spewing out crap. To be fair, most of the economy and public discourse involves spewing out crap these days, so to that extent it can automate a lot of people's jobs. But I've already found myself just withdrawing from public discourse as a result - I invest my time in my family and local community, and let the ad bots duke it out (while collecting a fat salary from one of the major beneficiaries of the ad fraud economy).


I recognize your username, so I know you've been around for a while (are you a xoogler who for a time banged the drum on the benefits of iframes, or am I confusing you with a similar username?), and so I'm kind of surprised at your lukewarm take on LLMs.

I agree they hallucinate and write bad code and whatever, but the fact that they work at all is just magical to me. GPT-4 is just an incredibly good, infinitely flexible, natural language interface. I feel like it's so good people don't even realize what it's doing. Like, it never makes a grammatical mistake! You can have totally natural conversations with it. It doesn't use hardcoded algorithms or English grammar references, it just speaks at a native level.

I don't think it needs to be concretely useful yet to be incredible. For anyone who's used Eliza, or talked to NPCs, or programmed a spellchecker or grammar checker, I think it should be obviously incredible already.

I'm not sold on it being a queryable knowledge store of all human information yet, but it's certainly laying out the inevitable future of interacting with technology through natural language, as a translation layer.


> GPT-4 is just an incredibly good, infinitely flexible, natural language interface.

An interface that it's incredibly difficult to get consistent output from. As far as I know, we have not found a way to make it do even basic tasks parsed from natural language without an error rate that's prohibitive for most use cases. It's amazing that it can produce pretty believable-looking text, but it's abundantly clear that there's no reasoning behind that text at all.


The other day I planned out a cloud-to-on-prem migration of an entire environment, from cost analysis to step-by-step checklists. In about 2 hours I had a ~50-page runbook that would have taken me at least a week coming from my own brain and fingertips.


How? Are you letting it make decisions for you?


Here is my initial draft chat session. From there I fed it parts of this initial thing. It gets something down on the page immediately, and I revise it myself by feeding portions into new chat sessions, etc.

https://chat.openai.com/share/15b30c88-d21f-4ffe-8c15-5b444d...


Good reminder that no social media platform is a monolith. Trying to speak as the voice of a platform typically gets you egg on your face, especially when being dismissive towards someone else.

You’ll find people who claim to have doubled their productivity from ChatGPT and people who think it’s useless here.


That's their loss, isn't it?


Count me unimpressed too.

Bing Copilot can’t even answer simple questions about businesses in a particular city without lying. When confronted, it will apologize and then repeat the same text verbatim.

Zero reasoning happening.


> the fundamentally unsustainable nature of industrial society

Nothing that goes against entropy is fundamentally sustainable. That doesn't mean we can't keep it going for a time that is essentially infinite on human scales (unsustainable life has been kicking around on Earth for 3.7 billion years now). Defeatism is even more dangerous than hallucinating optimism.


Do you mind explaining what your comment means in layman's terms?


Every process of life - breathing, moving, learning - takes energy. This is as true for the ants building their colonies as it is for us building our cities.

Due to thermodynamics, this is a one-way process. In our case, it’s all fueled by the sun, which will keep burning for another few billion years.

But even that counts as not truly sustainable, since one day it must end. The whole universe is destined to die a cold heat death where nothing at all happens anymore.

But that is a long time away and until then, we can build a beautiful civilization; we can learn and grow; and we can do so with nuclear & solar power.

People saying that we’re all as good as dead are technically correct, but in a very unhelpful way.


> Tech can't stand "bad thoughts".

> it's like they've all taken obscene quantities of some new stimulant and can't even connect with reality anymore

It's hallucinogens. That's the drug of choice in tech. What that leads to, other than the obvious hallucinations, is an obsession with set and setting. Bad vibes will literally give you a bad trip and ruin your high. I've seen this with wealthy shroom heads over and over again. Their entire lives become centered around making sure the set and setting are always perfect, which means that anyone trying to talk some sense into them gets ignored.

Once you start to think through the kinds of behaviors shroom addicts engage in, especially when they have the wealth and resources to facilitate their addiction, you'll see it everywhere. It's not the typical "ruin your life" kind of addiction, but it's having an impact on what kinds of ideas are allowed among the SV venture types.


I couldn't speak for Silicon Valley, but the personality changes you're describing as being associated with shrooms... those are not typical.

I have a few friends in psychedelic-assisted therapy, and the effects I've noticed in them are the same effects I've noticed with regular psychedelic use outside of therapy: you don't identify the bad vibes and shy away from them; you end up making decisions that are uncomfortable in the short term to improve things in the long term. Myself, I started going to college.

Maybe it works differently among wealthy people.

If you're in a cult of positivity, adding psychedelics to the mix is more likely to make you acutely aware of the inauthenticity of the situation.


Psychedelic-assisted therapy is so good and powerful precisely because it exploits the best-case scenario of set and setting, given the way the drug affects your mind. I'm far from against psychedelics and have done the therapy myself.

However, if you're in a positivity cult, don't realize you're in a cult, and don't have someone guiding you to consider that you might be in a cult, the shrooms are just as likely to make the cult seem like the most profound and important experience of your entire life.

If you are a lead at a company, you suddenly have a profound spiritual experience built around your ability to hire and fire people and tell them what to do, and you can use the drug to convince yourself that the ideas you come up with are the most profound thoughts a person has ever had. You won't even realize you're shutting out good ideas, because you have a messianic belief in AI or crypto or whatever the thing is, and you take the shrooms to reinforce that belief, and you create an environment around yourself and put people around you who reinforce that belief.

It's a very different experience than going to therapy to work through your fear or depression.


> It's hallucinogens. That's the drug of choice in tech

To the degree Silicon Valley has a drug right now, it’s ketamine. (Before that, it was weed.)


HGH and low-dose testosterone regimens seem popular as well.


Seriously? That just seems ridiculous. Is this an aging thing, where folks are worried about low testosterone or something?


A decent amount of the AI doomerism I've seen is classic criti-hype (as coined in [1], AFAIK): promotion of the technology thinly disguised as critique. Fear-mongering about AIs taking artists' or writers' jobs is yet another way of boasting about their capabilities.

[1] https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...


"My product is so good the world should be afraid of it" often comes from the mouths of the AI CEOs.


Same with CEOs who say "Our tech is too powerful, we want to be regulated" while knowing that Congress can barely function to pass a budget.


I was really starting to feel like Abe Simpson at one point, and then I read Good to Great. The central thesis of that book is that you might work at a good company that avoids confronting "brutal facts", but you will never find a great company that does.

You have to slay bad things to make a great company, and you have to identify them before they're chewing on something vital. That means going out of your way to ask whether things are actually bad or just annoying.


> when I talk to people outside of my area it's like they've all taken obscene quantities of some new stimulant and can't even connect with reality anymore

Can you share some examples of what you mean by this? I've encountered people who are excited about "AI", for sure, but who are excited because a problem that has plagued them for years suddenly became solvable. Excited because the way they used to learn about things just suddenly changed. Excited because there's a better way to do research than to scour the crappy results provided by the big search engines.

There is absolutely an element of hype that far outstrips the reality, and after the single-function apps like "Summarize this paper for me" are monetized to death, we'll enter the trough of disillusionment, and some reality will set in. But there are absolutely transformative use cases unlocked by the latest generation of tools that are very real, exciting, and that enable a new generation of users to interact with computers in a way that was previously the stuff of science fiction.

I've spent much of my career working on boring enterprise tech, building software that's not flashy, but gets work done. The problem space is extremely large, and the solution space is extremely inadequate. It's this vast space of unsolved business problems that make some of this hype more real, IMO. LLMs in particular will be transformative for many of the problems the big enterprise platforms solve, and have the potential to solve some of the messiest parts of operating in that space.

I don't know that I agree with the "Years of gloom" characterization, and I agree that there are other underlying currents re: sustainability, but I can't help but feel this comment endorses a different problematic extreme. In the middle of the hysteria are real uses cases that will change how we interface with computers, and industries will be transformed/new categories created.

Anecdotally, GPT-4 has completely changed how I approach research and troubleshooting and has been saving me many hours on a regular basis. Weird error message in a Linux log file? I'll provide some context and paste the error, and a few minutes later the issue is solved based on a decently good answer.

Picking up a new language? I'll ask for example code to solve a very specific use case, and then keep returning when I run into errors. The rate of learning enabled this way is pretty remarkable.

If the only thing that we get out of this craze are some extremely high quality LLMs with an accurate understanding of most human knowledge, that alone is an incredible jump forward.
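To make the troubleshooting loop above concrete, here's a minimal sketch of scripting it. Assumptions, not a prescription: it uses the openai Python package with an OPENAI_API_KEY in the environment, and the model name, prompt wording, and default context string are all illustrative.

    # diagnose.py - pipe a weird log error in, get a likely cause back.
    # A sketch only: model choice, prompts, and the context default are
    # illustrative assumptions, not the workflow described above.
    import sys
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def diagnose(error_text: str, context: str = "Linux server") -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # any capable model works here
            messages=[
                {"role": "system",
                 "content": "You are a sysadmin helping debug log errors."},
                {"role": "user",
                 "content": f"Context: {context}\nError:\n{error_text}\n"
                            "What is the likely cause, and how do I fix it?"},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(diagnose(sys.stdin.read()))

Usage would be something like: grep -i error /var/log/syslog | python diagnose.py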


Exactly. The AI car ethics news was always marketing to distract from the fact that it doesn't work.


As someone who works "close to the metal" in AI, do you really believe AI is just another "bubble" the way crypto was? I'm having a tough time finding the two even similar. It's clear how much potential it has to change things far more than even the most optimistic views of what crypto bros were touting as the future a few years ago.


It's worth noting that Nadia Asparouhova is married to Delian Asparouhov ( https://foundersfund.com/team/delian-asparouhov/ ), a partner at Peter Thiel's Founders Fund.

She previously wrote https://nadia.xyz/climate-tribes last year and there were critiques of her essay such as https://twitter.com/cognazor/status/1607509056499101696


Any article about e/acc that does not mention cryptocurrency is missing something fundamental.

It smells funny, because it is. People in e/acc claim to be "building" something fundamental that will change the world, change the future, etc. They are the heroes of the story.

But if you are actually building something, you are silent. Why? Because you're BUSY! Just like cryptocurrency: if crypto is going to revolutionize payments, and make things seamless and interoperable, and, and and... then why do you have to spend hours a day blabbering on about it?

The answer, of course, is because talking is easier than building, and with talking, it's all about you. Talking about cryptocurrency and then getting rich for talking is a lot easier than actually building it to make it useful. Likewise with this e/acc stuff. Most of the LLM startups are basically worthless appendages on the original tech. A pump and dump for VC money that just wants to be part of the next thing. Just like sh&tcoins were a worthless appendage on the original tech, and were similarly pumped.

I'll give them this: LLMs are a lot more useful than crypto.

What's kind of sad about it all is that I actually think a proactive movement to build is a good idea. Let's build! How about light rail, walkable neighborhoods, bike lanes -- y'know, the things that actually improve people's lives? Instead of "quantum qubit agentic neurographic models" or whatever else is being shilled?


It's chasing a high that no longer exists in the current epoch and labeling it a virtue.

The party is over. The cheap money is gone (it was wasted on a lot of bad ideas and bad implementations of good ideas).

The DJ has left. The coke guy is gone. And now you have to pay the carpet guy to get the stains out.


My attitude about safety of AI is this: if AI is an existential risk dangerous enough to justify draconian measures, we're ultimately fucked, since those measures would have to be perfect for all future times and places humans exist. Not a single lapse could be allowed.

And it's just not plausible humanity could be that thorough. So, we might as well assume AI is not going to be that dangerous and move ahead.


Your first paragraph is exactly how I feel about nuclear weapons, to put it into context. I don’t think the logical conclusion from that viewpoint is that nuclear weapons aren’t that dangerous so we should just move ahead.


The difference is that it is somewhat feasible to control access to the materials necessary to produce nuclear weapons.

It is not remotely feasible to control access to computing devices.


I don't think nuclear weapons are the kind of existential risk that AI doomsters imagine for AI.


Other than those that have called for nuking AI datacenters.


I feel obligated to point out that nobody has argued for nuking datacenters; the most that even the most radical AI existential-safety advocates have argued for is "have a ban on advanced training programs, enforced with escalating measures from economic sanctions and embargoes to, yes, war and bombing datacenters". Not that anybody is optimistic about that idea working.


That presumably demonstrates they think nuclear war is less dangerous than AI.


I think it has been empirically demonstrated that lapses in regards to the control and use of nuclear weapons can occur without the destruction of humanity.

(I am not an AI doomer, nor do I feel that nuclear weapons are not dangerous/should be less controlled)


I think that's the same kind of attitude that makes a lot of people not take global warming seriously.

It's a way to process ideas you don't want to be true, sure, but it's not a sensible or cost-effective way to deal with potential threats.

(And yeah, you can argue AI x-risk isn't a potential threat because it's not real or whatever. That's entirely orthogonal to the "if it's true we're fucked so don't bother" line of argument.)


Except AI risk doesn't even have plausible models of the danger. It has speculative models: "AI would be able to hack anything" is about as far as the thinking seems to go. It's not grounded in analysis of processing power or capabilities, or rates of exploit discovery in software - or psychological testing of users to determine susceptibility to manipulation.

Global warming on the other hand is heavily grounded in model building - we're building models all the time, taking measurements, hypothesizing and then testing our models with reference to future data recovery. We have simulations of the effect of increased energy availability for specific climate systems, we have estimations of model accuracy over different length scales.

Where is this sort of analysis for AI safety? (if it exists I'm genuinely interested, but I just don't see it).


Have you ever tried to make a prediction model with more than 1 parameter?


I guess, but it's similar to how diplomacy and international agreements need to be perfect forever to prevent nuclear war; so far it has worked, and it's worth it to keep trying.


Good AIs may outcompete bad AIs, so it's only a matter of making sure the good one arrives first.


That's the solve. We'll never succeed at stopping progress, so the best we can do is make sure the ones who get it first are the "good guys". It's the same as nuclear weapons.


At that point, there's nothing left. Will you fight, or will you perish like a dog?

However, you don't have to work on the nastiest parts of the probability space, as no one has a perfect world model, and improving other scenarios should reduce the overall extinction probability estimate.


I wish they would have explained what the original Accelerationism actually means instead of just giving it a quick nod:

>“Accelerationism is unfortunately now just a buzzword,” sighed political scientist Samo Burja, referring to a related concept popularized around 2017.

Accelerationism is a political reaction that basically works like this: when faced with any kind of dilemma, choose the option that is the most progressively destructive. The idea is that society and institutions are so tainted that the only way to fix them is to keep knocking down all those boundaries and Chesterton's fences in order to effect the kind of institutional collapse things are headed toward anyway.

Why slowly implement policies that spell our doom when we can implement them fast - accelerate!

That's the idea. And they should have mentioned that in the article, because the contrast with the Globo Tech notion of "Effective Accelerationism" is just good irony.


The investor class makes the most money when there is a new growth area. Ergo, getting more new growth areas is in their interest, and it makes sense to keep putting new echoes into the chamber; maybe some will escape to the wider world. Perhaps AI will be as impactful as splitting the atom, who can tell.


This other thread [0] "Easy to criticise, hard to create" and my remark here [1] about the demise of market research have something in common with this topic.

It's about the gravity of the mob. Risk-taking in business (which is now being called "acceleration", AFAICS) is about the courage not to think about "what everybody wants". I observe it's really hard for the SV tech mindset to escape that gravity. Downvote me all you like for saying such uncomfortable things.

[0] https://news.ycombinator.com/item?id=39346374

[1] https://news.ycombinator.com/context?id=39344991


"Finding what the market wants and providing it" only worked when the market had wants, and wants are finite. Once those are all taken care of, everything past that is engineered desire, which is where consumer culture comes into play.

You don't need to explain to someone why they want food. Of course they want food; they're hungry. You do need to explain to them why they want a pizza covered in gold leaf. You don't need to explain why they want a car: they want to get around in the United States, and a car is more or less mandatory. You do need to explain why they want a $100,000 SUV that gets worse fuel economy than a comparable van while holding less cargo, and has such terrible visibility that there's a non-negligible chance they will run over and kill one of their own children with the thing. You don't need to convince them they want a smartphone; you need to convince them they want a new smartphone that's 11% faster than the old one, even though their current one works fine.

Tons and tons of business, not even remotely isolated to the tech sector, has nothing at all to do with meeting consumer demand or fulfilling wants in the market; it has to do with building slightly different versions of products that already exist, and then spending millions if not billions of dollars so you can scream in the market's ear as loud as possible until they think that voice is coming from inside their own heads, and they'll buy it to make it shut up. And then repeat.


If you ask people what they want, they're going to tell you they want a slightly better version of whatever they already have. That's if you're lucky. If you're not, they're going to tell you they want whatever they think everybody else wants or they want what they think they're supposed to want or they want what you want them to want.

If you're going to innovate, you ultimately have to go out on a limb and predict what people will want. That means high risk/high reward products not the kind of consistent reliable profits that Wall Street wants.


e/acc is just the logical +1 follow-up to hodl.

a tech subculture: create your own acronym, post a lot of self-referential coded messages, add exclamation marks, ignore those who decry it as annoying. No whining, no complaining, no critique. It helps to get a tech alpha to engage/retweet you. Tolerate a little discourse, but mostly exclaim. Make sure everyone knows you think X rulez. Pro$it.


The whole AI risk vs. accelerationism debate strikes me as an intra-bubble squabble between two groups of people who want to define an Overton window about the tech industry that focuses on the tech part.

Personally, I don't think the widespread negative turn in public opinion of the tech industry has all that much to do with the tech. It's way more about the industry part. The business models of tech are defined by our current economic moment, one of rising inequality and governments that are increasingly complicit in supporting profit over human well-being. I don't personally think technology can be inherently good or bad, but what we do with it matters, and what we're doing right now is, in aggregate, reinstating a feudal society wherein the whims of powerful rentiers increasingly screw with people's lives. As far as I can tell, EA and e/acc are just a distraction from this reality.


I've been in e/acc since the beginning, so I'll add a bit of context of how I interpret the ideology.

The core belief of e/acc is that we can solve our problems not by austerity, but by growth. Instead of limiting births and limiting technological progress, we can instead find solutions to support more people and push humanity beyond its boundaries. Historically, it's the mindset that worked the best for countries, but it does have a limit.

Recently it got very culty, and I have since distanced myself from it. I don't think now is a good time to judge it, since many people have their own understanding of it. You can just slap (e/acc) on your Twitter username and somehow become a voice for the "movement".

I don't think people should take it so seriously; it's just a way of thinking, and everyone has their own version of it. Just as with effective altruism, people will use the label for their own agenda.


"Bezos admits that his initial announcement was a bit of a 'shitpost.'"

Honestly how I feel about anyone mentioning "accelerationism." It's not an ideology or movement, it's just a way to act sophisticated about trolling online.


> If it becomes paralyzed by a fear of the future, it will never produce meaningful benefits

This kind of straw-manning seems like the main game of e/acc types. Bin all criticism as gloom and fear, rather than actually engaging with the substance of it. Nobody (or at least virtually nobody) who criticizes SV culture is "afraid of the future", they're pissed off at a couple decades of shitty management grown fat, greedy, and stupid off ZIRP dynamics. Rather than face the obvious failures of the industry and its leaders, they'd rather pretend it's the kids who are wrong and blithely continue stepping on rakes.


In political circles, "accelerationists" believe that the demise of the US is inevitable, and that the faster it happens the better the world will be.

I see similar parallels to the tech sector. Is the death of SV inevitable? If so, is the world better off if it dies quickly or slowly?

Kind of a tangent to the core article, but it seems to me that we are in a large-scale transition of society in many ways, and the redemocratization of technology is a critical aspect of a bright future.


That's a different type of accelerationism.


Can we just stop with these absolute positions already?

You can justify horrible atrocities if you strongly believe in extreme outcomes.

Effective altruism, accelerationism, doomers... All of these mindsets are toxic for the same reason that extreme religious stances are toxic. Believing that the world is going to end or that it's going to reach salvation or whatever extreme outcome you feel like, and then using that to justify short-term atrocities is not the way.

Stop putting infinity in your forecasts. It breaks everything. We have been down this path before many times and it never ends well.


> You can justify horrible atrocities if you strongly believe in extreme outcomes.

Yeah.. that’s the goal.


If you're writing a nostalgic tech "thinkpiece" please be specific about what you're nostalgic for.


There's a combination of exploitation and lack of humility that marks recent tech. On the one hand, a lot of newer tech treats you like some kind of tech peasant: no sideloading, endless dialogs that have no option to say "no", arbitration clauses, no ability to fix your own devices, little or no customer support, etc. On the other hand you have proclamations about what the future will look like that turn out to be terribly wrong: blockchain, metaverse, VR, self-driving truck convoys, etc.

After such an atrocious track record of tech leaders being completely wrong about what "the next big thing" is, why should anyone trust these people and why are they so certain they're "accelerating" in the right direction?

Edit: I miss the era when tech companies treated their users as customers, not as marks to be manipulated for the purpose of profit maximization.


50-year cycle? All of the recent breakthroughs seem to be happening on ever-shortening cycles.


The author lacks any context for what accelerationism is.

The original idea comes from Marxists advocating for the adoption of free market capitalism as a means of bringing about the alienation of the working class, which is a necessary precondition for socialist revolution.

The idea of effective accelerationism (which was a joke making fun of people like Musk and SBF) is the rapid, uncontrolled promotion of AI as a means of destroying the entire tech industry and all that surrounds it.


If this is your take of e/acc, you're wildly misunderstanding it.

e/acc is about building fast and making changes that benefit humanity, while pushing useless bureaucracy away. Elon Musk himself is a proponent of e/acc. It has nothing to do with AI specifically, and especially nothing to do with destruction.

The core belief of e/acc is that we can solve our problems not by austerity, but by growth. Instead of limiting births and limiting technological progress, we can instead find solutions to support more people and push humanity beyond its boundaries.


You're just describing normal techno-optimism. E/acc, as in the Beff Jezos shitposting, is openly and intensely anti-human. From the man himself:

> Effective accelerationism (e/acc) in a nutshell:

> Stop fighting the thermodynamic will of the universe

> You cannot stop the acceleration

> You might as well embrace it

> A C C E L E R A T E

https://beff.substack.com/p/notes-on-eacc-principles-and-ten...

where "the thermodynamic will of the universe" means endless darwinian struggle.


I don't see what is anti-human in these quotes. Your interpretation doesn't make it fact.


You think a human is the thermodynamic optimum? Everything not optimized for gets optimized out.

"This is just cope. What you’re describing is, is that reality is hard. Yes. If the thing we want is complicated and hard to get, the answer is not to pick something simple and easy and give ourselves a participation award. The answer is, well, we have to get stronger. We have to get better." -- Connor Leahy


It's a joke that you aren't in on.


Truly a "first as tragedy.." kind of situation here. The kind of hyperstitional violence of taking "accelerationism" and turning it into a tech-bro feel good huddle is almost too on the nose. Nick Land is now just one more CEO promising the future. This is, at once, exactly what the CCRU warned us about and brought about. The lemurians are in full force, the time is nigh!

http://www.ccru.net/index.htm


EA here:

"Discussing the risks and opportunities in front of us intelligently, e/accs believe, is a sign of a flourishing civil society"

Except e/acc has made a massive contribution to lowering the standard of discourse. Beff talked intelligently on Lex and they are much more reasonable on Twitter Spaces, but on the Twitter timeline itself 90% of their posts are some combination of trash/propaganda/insults.

"Rather, their moral vision is one where more people — including and especially those who consider themselves hands-off today — actively engage with emerging technology and identify concrete plans for its development and stewardship, rather than reflexively backing away from what they don’t understand"

Again, e/acc seems to be all about "build, build, build!" which stands in stark contrast to taking a step back and thinking carefully about the impacts of what you're doing before you do it.

"Discussing the risks and opportunities in front of us intelligently, e/accs believe, is a sign of a flourishing civil society."

Again, this isn't accurate. E/acc is very much not about balance, and very much not about discussing risks intelligently: its adherents almost always criticise the people making the claims instead of engaging in discussion on a technical level.

...

Criticism aside, this article paints a picture of e/acc which, while not representative of the movement as it exists, is something it could choose to grow into.


The problem is that from 1980 to 2000, and even 2000 to 2010, there was a reliable, regular "upgrading" of the average person's quality of life due to tech. Since then that upgrading has halted, and whatever benefit marginal improvements in consumer tech have bequeathed most people has been overshadowed by inflationary pressures on housing, cost of living, etc., which make it seem things are only getting worse. This draws anyone to one of two conclusions:

1. Things are just going to keep getting worse or stay the same. Preserve what we can, don't be a bad person, etc

2. It's up to us to make things better, our dreams a reality, etc

2 is a much more attractive general ideology for young people, and anyone not well situated financially.


As a contrast to "accelerate", let's consider "addiction" cycles. Business-driven tech seeks addiction cycles in consumers because they yield orders-of-magnitude larger results.


the only thing they've accelerated is the posting


Are we talking about the same accelerationism that Nick Land professed?


Yes and no: e/acc twitter is fundamentally unserious and so has no problem simultaneously endorsing both full-blown "capital is an awakening alien god that will kill us all (and that's a good thing)" Landianism and conventional "AI will help us cure cancer" techno-optimism. What they really believe, if anything, is anyone's guess.


Am I the only one that associates the term "accelerationism" mostly with right-wing extremism and terrorism? https://en.wikipedia.org/wiki/Accelerationism#Far-right_acce...

If you are a techno-optimist, you should ask your AI to come up with something less associated with race war and neo-Nazi ideology.


I mostly associate the term with the original left-wing version of accelerationism (which is a little further up in the Wikipedia article you linked to). In both versions, the view is essentially that the society we have now is so bad as to be practically unredeemable, so the next best course of action is to accelerate this society's downfall so the new "good" society can be built instead. Obviously, the vision of the "good" thing that comes next is, uh, wildly different.

In either case, this seems very different from what proponents of e/acc are about, but in a cynical sense, not totally unrelated. The Wikipedia article does note: "It has been regarded as an ideological spectrum divided into mutually contradictory left-wing and right-wing variants, both of which support the indefinite intensification of capitalism and its structures as well as the conditions for a technological singularity, a hypothetical point in time where technological growth becomes uncontrollable and irreversible."


e/acc seems very similar in premise to the original left wing accelerationism, but with the very important caveat that its proponents trust implicitly that whatever the current capital market decides is worthy of funding will somehow be the appropriate technology to accelerate.


e/acc is firmly based on the far-right accelerationism of Nick Land (a blatant racist: https://twitter.com/Outsideness/status/1707443031803363432 ). The e/acc founder has cited him, as has Marc Andreessen in his "manifesto."


You are wrongly being downvoted, because enough people did not bother to click your link. Accelerationists are indeed extremists who think that burning down the state opens up possibilities for them as winners. That is the core.

To support it, you have to be either a spoiled and bored (tech) billionaire or a clueless techie.


Accelerationism is just rebranded Futurism, which was an early incarnation of the 1920s Italian Fascist movement. If Marc Andreessen is on board, that's a dead giveaway that it's shit.


Was this article ai generated?


I'm tired, boss


Beff is a late-stage capitalism incarnation of the useful idiot, sucking off the landed technogentry in hopes of securing a life raft in the economic apocalypse e/acc is attempting to usher in. When 89% of US stock equity is owned by the top 10%, you know that e/acc is a sclerotic and thinly veiled “fuck the poors” ideology being perpetrated by precisely that well-insulated 10%, who have little to nothing to fear from a world where labor value goes asymptotically towards zero.


> When 89% of US stock equity is owned by the top 10%, you know that e/acc is a sclerotic and thinly veiled “fuck the poors” ideology

You forgot to control for "older people own more stocks because they spent more time saving for retirement".



wat..

Do they know that outside the IT realm, accelerationism refers to far-right Nazi groups who believe the only way to bring back far-right ideals is to accelerate conflicts with mainstream politics?

... and it's generally working, to the detriment of stability.


I agree with most techno-optimism, but e/acc is a pretty braindead version of it unfortunately.

Nadia's article here seems like a much more reasonable version of this philosophy that embraces political solutions instead of naive pure tech ones.

By contrast, "Beff Jezos" (Guillaume Verdon) says he wants to be a "cultural engineer" [1, Lex podcast], but when recently challenged by Lex Friedman and Connor Leahy [2], he comes across like a dorm room libertarian who has an immature idea of what he's saying, not someone with something new to say, which I think is at least partially him trying to maintain a kayfabe [3] for the e/acc community (like a nichy techno Trump) and partially him actually being a true believer.

My critique [4] of the e/acc, Jezos, and Andreessen thread is essentially that these philosophy-esque ideas are actually (contrary to what they believe) not very ambitious: in practice they advocate "don't touch the system, it works for me", and never mention (or seemingly talk about) how this tech can actually have massive impacts for regular people, or how we can use government, broad prosperity, and access to tech (and tech-enabled hyper-democracies) to further accelerate the best version of our vision for the future.

These guys are like the Underpants Gnomes of futurism:

Step 1: Build superhuman technology without any safeguards.

Step 2: ???

Step 3: Profit for me.

I at least expect "Profit for everyone" at step 3, but taking a try at Step 2 is what really would impress me.

[1] Beff on Lex: https://www.youtube.com/watch?v=8fEEbKJoNbU

[2] Beff v. Connor Leahy https://www.youtube.com/watch?v=0zxi0xSBOaQ

[3] https://en.wikipedia.org/wiki/Kayfabe

[4] https://twitter.com/NickPinkston/status/1714088788237180947


It's a bit funny, a bit annoying, and a bit scary how these e/acc people don't seem to understand the slightest thing about Nick Land's philosophy, which is ultimately where the term accelerationism came from. Probably because it's extremely dense, filled with "Deleuzoguattarian schizoanalysis", and because Land himself is hard to follow or understand. He gets described as a neoreactionary, which is pretty accurate to my reading, but...it definitely has little to do with, quoting from the article: “Do something hard. Do it for everyone who comes next. That’s it. Existence will take care of the rest.”

I am by no means an expert or huge fan of Land, but he's definitely more along the lines of, "the machines are going to eat everything and there's basically nothing you can do about it."

So, in some sense, this is just another round of "economic elites co-opting someone else's culture."

Edit: I did some searching on Twitter and found this great quote, which really does sum it up:

> E/ACC is a fitting end to Accelerationism. After having been passed around by dissident intellectuals and online deviants it can finally settle into retirement as another kitschy pastel MS Powerpoint Californian grindset aesthetic stripped of all its substantial insights.

https://twitter.com/augureust/status/1691893969678913692


Actually, the e/acc community has stated it rejects Nick Land's more recent work.

If you read older posts, you'll find the earlier e/acc members got started with Fanged Noumena.


Interesting. I don’t doubt that the earliest online people were familiar with it, but I am extremely doubtful that the big names dropped in the article have even heard of Land.


>The outcome of the U.S. presidential election not only shocked legislators, who set about searching for answers from tech companies they believed were partly to blame for Donald Trump’s victory; it also marked the turnover of the Obama administration, which had enthusiastically supported tech’s optimism.

The fact that blame is required for the result of an election is kind of mind-blowing. It's a throwaway sentence, yes, but it's also a very nice summation of the attitudes of the West's mandarins.

Trump's first victory happened because Obama threw the working class under the housing-repossession bus. Trump's second victory will be because Biden threw the working class under the inflation bus.

Blaming tech is a very easy way of those people not having to look at the mirror and facing the desolation they have created.


Accelerationism is the best way to go at it, since it ensures either a quick death or a fix, and not the ugly in-between we're all starting to slowly catch a glimpse of.


So in that view, accelerationism is about placing a high-stakes, binary bet that such actions will result in a utopia rather than utter destruction.

But what gives those people the right to gamble with the lives of the rest of us in that way? The entire line of thinking is, in my view, not only horribly egotistic and authoritarian, but antagonistically so.


Oh yes it is, but apparently that's what we, as a species, want for now.


Have you looked at recent opinion polls? e/acc is polling at -51 and various "AI doom" positions are polling strong into the positive percentages.


I don't think that we, as a species, want this at all. I think a subset of powerful people wants it.



