Hacker News — iamthepieman's comments

My brother passed away from AML (acute myeloid leukemia) almost two years ago. His quality of life on treatment was dismal: he was constantly vomiting, had mouth sores, was exhausted but unable to sleep, couldn't see his two young children, and was locked away in a hospital ward. His wife had to make a huge effort to see him consistently because she couldn't bring the kids and had to find babysitters. He made the decision to stop treatment because of that. His chances were low anyway, and he pursued 'alternatives' because it was better than nothing, even if the main benefit was making him and his family feel like he wasn't completely giving up.


Your situation is different from the one in the article. In your brother's case, it was end stage, so forgoing treatment to improve QoL makes total sense. In the article, the sister actually had a high chance of survival (the opposite of end stage) but still chose not to undergo treatment.


The worst part is that a lot of the alfalfa and other feedstock crops are shipped out of the country.


The film "Never Let Me Go"[0] is kind of about that.

[0] https://en.wikipedia.org/wiki/Never_Let_Me_Go_(2010_film)


Framing the whole thing around costs is the problem. Frame it around better quality, better service, buy it for life etc.

There's no way I'm going to pay 3 times the cost for the exact same experience. I don't care about free returns; I want something that has no returns because it's reliable and better engineered and made. I'd much rather have 2 widgets I never have to worry about again (or that are repairable, that's fine too) than 10 widgets: two of which I need to find a box for to send back, one with intermittent problems that don't quite make it worth returning OR using, one that was cheap enough I bought it on a whim but was never really going to use anyway, etc.

Actually, more expensive, fewer, better things sound great now that I've written this out. Less mental and physical clutter.

But of course most people don't see it that way, and businesses have to earn trust around these alternative ways of thinking about our relationship with our "stuff". Slapping a "Made in the U.S." sticker on something is gonna do nothing, though.


Creating a stealth group in a huge Fortune 500 company with the blessing of my immediate boss but no other higher-ups. Trying to productize critical consulting tool sets in the utility industry so we can stop repeating ourselves for the 100th consulting engagement.

Yes, every customer is a special snowflake, but they still need 90% of whatever every other client in this industry needs.

Feeling increasingly like this is a fool's errand.

Even though we've proved this out with tool sets strung together with duct tape and safety pins, and are therefore the most profitable group within our department, we still need to be 100% billable.

It's only because we're the most profitable group that we can pretend we're all billable while I work with two other people to bootstrap this crazy project.

Edit: anyone hiring? Just found out my boss is quitting.


Oof. This post started so well and then got progressively sadder until the edit drove it home. I hope your story continues and works out as a huge win, whether as a new, good boss, you getting to openly lead this kind of thing, someone reading this and poaching/sponsoring you, or maybe even you working on this under your own name.

Good luck and we're rooting for you!


Thank you for the enthusiasm!

It was not intentional but my post really does read like a little story vignette that ends with a gut punch.

Not looking for sympathy so much as fellow appreciators of irony and schadenfreude but here's another kicker.

I pitched this idea to my previous company and was told there was no appetite for it. Just saw on my old company's blog that they released a "digital transformation in a box" program for mid-market clients in this space which is 90% of what I pitched to them. Bad and hilarious timing all around.


This is EXACTLY what pushed me over the line to quit my last job. I had a big pitch for a spin-off I wanted to run, and was told not only that there was no appetite for it, but that it was a stupid idea in a dying industry. Literally 2 weeks later it was in the board deck as something the company was going to build.

People are weird!


Sometimes you sow a little seed here and there, and sometimes it takes.


Good lord, how many times are you going to get punched in the gut today?


This sounds very much like an application begging to be done as a stand-alone company supporting these F500 companies. It could be very profitable as the basis for a service-provider model while you gain enough knowledge to productize and package it, on basically customer-funded development. It seems your company kicking you in the gut is showing you the direction.

Good Luck!


Very interesting. Maybe we can chat and explore? DM me on my X.com - it's on my HN about page - copy-paste the HN link here for context :)


Or find a way that works for you, gets you jobs and keeps you from breaking your moral framework.

I compare it to driving in traffic. A lot of times I'm not in a hurry and can just stay in one lane and crawl along. Other times, I am in a hurry and I can weave in and out, getting flustered and angry and nearly crashing and still end up 4 cars ahead of where I'd have been without all that.


Exactly this. You can either work really hard and likely get minimal benefit and cause yourself a lot of pain, or you can work considerably less hard, and largely end up in the same place.

Rarely, you can do all that extra work and get meaningful improvement that justified all the effort. It does happen. Sometimes it presents itself in the form of a severance package.


Depending on where you live, you get it in your water and you don't need both sources of fluoride.


I don't know anyone who drinks tap water, so that's irrelevant.


>XML has always seemed to be a data standard which is intended to be what computers prefer, not people

Interesting take, but I'm always a little hesitant to accept any anthropomorphizing of computer systems.

Isn't it always about what we can reason and extrapolate about what the computer is doing? Obviously computers have no preference so it seems like you're really saying

"XML is a poor abstraction for what it's trying to accomplish" or something like that.

Before jQuery, Chrome, and Web 2.0, I was building XSLT-driven web pages that transformed XML in an early NoSQL doc store into HTML. It worked quite beautifully and allowed us to skip a lot of schema work that we definitely weren't ready or knowledgeable enough to do.

EDIT: It was the perfect abstraction and tool for that job. However the application was very niche and I've never found a person or team who did anything similar (and never had the opportunity to do anything similar myself again)
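For anyone who never saw this pattern in the wild, here's a minimal sketch of what it looked like. The element names (`article`, `title`, `para`) are invented for illustration, not from any real system: a stylesheet like this, applied server-side or via the browser's `xml-stylesheet` processing instruction, turns a schemaless XML document straight into an HTML page.

```xml
<?xml version="1.0"?>
<!-- Hypothetical XSLT 1.0 stylesheet: renders an <article> document as HTML.
     Element names are illustrative only. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>

  <!-- Match the document root and emit the HTML skeleton -->
  <xsl:template match="/article">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <xsl:apply-templates select="body/para"/>
      </body>
    </html>
  </xsl:template>

  <!-- Each <para> becomes a <p> -->
  <xsl:template match="para">
    <p><xsl:value-of select="."/></p>
  </xsl:template>
</xsl:stylesheet>
```

The appeal was exactly what the comment describes: the XML documents needed no up-front schema, and presentation lived entirely in the stylesheet.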


I did this for many years at a couple different companies. As you said it worked very well especially at the time (early 2000’s). It was a great way to separate application logic from presentation logic especially for anything web based. Seems like a trivial idea now but at the time I loved it.

In fact the RSS reader I built still uses XSLT to transform the output to HTML as it’s just the easiest way to do so (and can now be done directly in the browser).


Re XSLT-based web applications: a team at my employer did the same circa 2004. It worked beautifully except for one issue: inefficiency. The qps that the app could serve was laughable because each page request went through the xslt engine more than once. No amount of tuning could fix this design flaw, and the project was killed.

Names withheld to protect the guilty. :)


Most every request goes through xslt in our team's corporate app. The other app teams are jealous of our performance.


Well-made formal clothes are the most comfortable you'll wear. There is some mental discomfort for a while depending on your mindset because, at least for me, I was stuck thinking fancier clothes required a fancier demeanor and attitude? Not sure I'm nailing the feeling here, but it does go away eventually. I ultimately dropped the everyday formal wear, though, because well-made stuff was expensive, I'm pretty active, and I found myself with a lot of rips and stains on expensive clothing items.


This story happened in my backyard. The shootout was about 40 minutes from me, but Youngblut and Felix Bauckholt were reported by a hotel clerk after being seen in tactical gear and carrying firearms at a hotel a few blocks from me.

Weird to see a community I followed show up so close to home and so negatively like this. I always just read LW and appreciated some of the fundamentals that this group seems to have ignored. Stuff like: rationality has to objectively make your life and the world better, or it's a failed ideology.

Edit: I've been following this story for over a week because it was local news. Why is this showing up here on HN now?


> Weird to see a community I followed show up so close to home and negatively like this.

I had some coworkers who were really into LessWrong and rationality. I thought it was fun to read some of the selected writings they would share, but I always felt that online rationalist communities collected a lot of people with reactionary, fascist, misogynistic, and far-right tendencies. There’s a heavily sanitized version of rationality and EA that gets presented online with only the highlights, but there’s a lot more out there in the fringes that is really weird.

For example, many know about Roko’s Basilisk as a thought exercise and much has been written about it, but fewer know that Roko has been writing misogynistic rants on Twitter and claiming things like having women in the workforce is “very negative” for GDP.

The Slate Star Codex subreddit was a home for rationalists on Reddit, but they had so many problems with culture war topics that they banned discussion of them. The users forked off and created “The Motte” which is a bit of a cesspool dressed up with rationalist prose. Even the SlateStarCodex subreddit has become so toxic that I had to unsubscribe. Many of the posts and comments on women or dating were becoming indistinguishable from incel communities other than the rationalist prose style.

Even the real-world rationalist and EA communities aren’t immune, with several high profile sexual misconduct scandals making the news in recent years.

It’s a weird space. It felt like a fun internet philosophy community when my coworkers introduced it years ago, but the longer I’ve observed it the more I’ve realized it attracts and accepts a lot of people whose goals aren’t aligned with objectively “make the world better” as long as they can write their prose in the rationalist style. It’s been strange to observe.

Of course, at every turn people will argue that the bad actors are not true rationalists, but I’ve seen enough from these communities to know that they don’t really discriminate much until issues boil over into the news.


Sophistry is actually really really old:

>In the second half of the 5th century BCE, particularly in Athens, "sophist" came to denote a class of mostly itinerant intellectuals who taught courses in various subjects, speculated about the nature of language and culture, and employed rhetoric to achieve their purposes, generally to persuade or convince others. Nicholas Denyer observes that the Sophists "did ... have one important thing in common: whatever else they did or did not claim to know, they characteristically had a great understanding of what words would entertain or impress or persuade an audience."

The problem then, as now, is sorting the wheat from the chaff. Rationalist spaces like /r/SSC, The Motte, et al. are just modern sophistry labs that like to think they're filled with the next Socrates when they're actually filled with endless Thrasymachi. Scott Alexander and Eliezer Yudkowsky have something meaningful (and deradicalizing) to say. Their third-degree followers? Not so much.


Yudkowsky's texts represent my mental image of a vector continuously scanning a latent space in some general direction. Changes just pile on and on until you get from concept A to concept B without ever making a logical step, but there's nothing to criticise because every step was a seemingly random nuance. Start at some rare values in most dimensions, crank up the temperature, and you get yourself Yudkowsky.

> our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

I doubt that a guy who seriously produces this can say something meaningful at all.


Was anyone else ever able to construct a mathematical model of CEV?


I don't think Eliezer Yudkowsky has anything meaningful to say. He is a doomsday cult leader who happens to be fashionable in some circles.


While I won't claim he currently has much of interest to say, he definitely explained a lot of important ideas for thinking more clearly to people who would not otherwise have encountered them, even if he didn't invent any of them.


The community/offshoot I am part of is mostly liberal/left. My impression that lesswrong is also liberal/left.


> The community/offshoot I am part of is mostly liberal/left

There isn't an official "rationalist" community. Some consider LessWrong to be the center, but there have always been different communities and offshoots. As far as I know, a lot of the famous rationalist figures haven't participated much in LessWrong for a long time now.

The far right offshoot I was referring to is known as "TheMotte" or "The Motte". It was a gathering point for people who were upset after the Slate Star Codex comment section and subreddit banned "culture war" topics because they were becoming an optics problem.

It's easy to forget because it's a "don't talk about it" topic, but after culture war topics were banned from SSC, The Motte subreddit had significantly more activity than the SlateStarCodex subreddit. They eventually left Reddit because so many posts were getting removed for violating Reddit policies. Their weekly "culture war" threads would have thousands of comments and you'd find people "steelmanning" things like how Trump actually won the 2020 election or holocaust denial.

The other groups I was referring to were CFAR, MIRI, and Leverage, all of which have been involved with allegations of cult-like behavior, manipulation, and sexual abuse. Here's one of several articles on the topic, which links to others: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experie...

Every time I discuss this on HN I get downvoted a lot. I think a lot of people identify as rationalists and/or had fun reading LessWrong or SSC back in the day, but don't understand all of the weirdness that exists around rationalist forums and the Bay Area rationalist community.


We don't have great language or models for talking about this stuff.

It's possible for there to be a person or group who are fine, but who attract or allow followers and fellow travellers who are not fine. Then it's possible for one person to look at that person or group and think they're fine, but for another person to look at them, see the followers, and think they're not fine. And sometimes it's hard to tell if the person or group is actually fine, or if they're really maybe actually not fine on the down low.

This problem is amplified when the person or group has some kind of free speech or open debate principle, and exists in a broader social space which does not. Then, all the dregs and weirdos who are excluded from other spaces end up concentrating in that bubble of freer speech. For example, there's an explicitly far left-wing message board that doesn't ban people for their politics; because all the more moderate (and moderate cosplaying as extreme) left-wing boards do, that board collects various libertarians, nationalists, groypers and whatnot who can't go anywhere else. Makes for an odd mix.


>there's an explicitly far left-wing message board that doesn't ban people for their politics; because all the more moderate (and moderate cosplaying as extreme) left-wing boards do, that board collects various libertarians, nationalists, groypers and whatnot who can't go anywhere else. Makes for an odd mix.

That actually sounds pretty fascinating. What board are you thinking of?

It's a common observation that any free speech place on the internet will disproportionately attract right-wing wackos. But arguably that says more about the left than the right. If your politics are sufficiently to the left, there are a lot of places on the internet that will cater really well to that, and delete/downvote non-conforming views pretty aggressively (thinking of reddit in particular). So arguably the bottleneck on having a true "free speech forum", where all perspectives are represented, is that people with left politics have more fun options, in the form of forums which are moderated aggressively to cater to their views.

I tried posting on TheMotte a few times, but I found it to basically be a right-wing circlejerk. It was actually a bit eye opening -- before that, part of me wondered whether the stifling conformity on reddit was intrinsic to its left politics. If a critical mass of left-wing posters had been present on TheMotte, I might have stuck around.

I think upvoting/downvoting systems really accelerate the tendency towards herd mentality. It becomes very obvious when you hold an unpopular minority view, and that's a strong motivator for people with minority views to leave.


> There isn't an official "rationalist" community.

Rationalists have always associated strongly with secular humanists and sceptics. There are multiple organisations that either include "rationalist" in their title or primary mission statement alongside sceptical secular humanism.


This is more of a Reddit problem. They don't allow true discourse anymore, and this is the consequence. It's harder to have a rational debate about a difficult topic while still courting advertisers, so it became an echo chamber and a bunch of people left. That's what you're describing.


Perhaps there are people in power who would benefit from portraying those communities in a different light.


I have no skin in the game. I was just around to witness this all go down.

The rationalist community I was talking about is well known. They split from SSC after the ban on culture war topics. They left Reddit a couple years ago because so many of their posts were getting flagged by Reddit for policy violations.

I'm not making this up. You can go see for yourself: https://old.reddit.com/r/TheMotte/comments/uaoyng/meta_like_...


What exactly do you mean?


Making a rationalist community seem right-wing would

- make right-wingers feel vindication as if they were rational all along,

- cause suspicion and division within the left/liberal rationalist community, and

- make the community, its people and their ideas less palatable to the general public.


Curious: Do you think J.D. Vance unintentionally dog-whistling a Scott Alexander article when he's on the Joe Rogan podcast was orchestrated or just what happened?[0]

---

[0] https://www.reddit.com/r/slatestarcodex/comments/1ggl0mh/jd_...


What is "unintentionally dog-whistling" supposed to mean?

Isn't intention the essence of the concept of the "dog whistle"?

I'd also like to observe that Scott Alexander does a yearly survey, which provides unusually robust evidence that if we're going to impute to him the cultural affiliations of those who read his posts, his politics can only be "all of it".


A dog whistle is a message hidden from those not in the know. An unintentional dog whistle is one shared by mistake: either the hidden message was never intended at all, or it was picked up and repeated without realizing what it signals.


I think astral codex ten did a survey recently and the majority of respondents were politically left


What kind of political leftist would you say resonates most acutely with Scott Alexander's January paean to the scientific racism of Emil Kirkegaard and Richard Lynn ("How To Stop Worrying And Learn To Love Lynn's National IQ Estimates")?


The ones who want to confront reality even when it has unpleasant truths rather than believing comfortable lies.


"Elites are making it taboo to talk about intelligence in order to preserve their position at the top of the social hierarchy. They're doing genetic engineering for their kids in secret, while making the topic of intelligence radioactive so the masses can't follow suit. If we reduce the taboo around intelligence, we can decrease global inequality, by improving access to maternal interventions for IQ in the developing world."

(Granted, that's not a common leftist position. But maybe it should be.)


That is an inaccurate, reductionist comment. I encourage folks to read that post if they are interested.


Having not-yet read the post, I appreciate your very practical response. “Read the thing you’re criticizing” is great advice.


It's somewhat odd to represent a community as being right wing when the worst thing to come from it was a trans vegan murder cult. Most "rationalists" vote Democrat, and if the franchise were limited to them, Harris would have won in a 50 state landslide.

The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.


"trans vegan murder cult" is the best band name ever


> It's somewhat odd to represent a community as being right wing when the worst thing to come from it was a trans vegan murder cult

I was referring to "The Motte", which emerged after the SlateStarCodex subreddit finally banned "culture war" topics. Scott announced it in this post: https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

The Ziz cult did not emerge from The Motte. I don't know why you came to that conclusion.

> Most "rationalists" vote Democrat,

Scott Alexander (of SlateStarCodex) did surveys of his audience. Interestingly, the culture war thread participants were split almost 50:50 between those identifying as left-wing and those identifying as right-wing.

Following the ban on discussion of culture war topics, many of the right-wing participants left for The Motte, which encouraged these conversations.

That's how there came to be a right-wing offshoot of the rationalist community.

The history is all out there. I'm surprised how many people are doubting me about this. You can read the origin story right on Scott's blog, and the Reddit post where they discuss their problems with running afoul of Reddit's content policies (necessitating a move off-platform) is still accessible: https://old.reddit.com/r/TheMotte/comments/uaoyng/meta_like_...

> The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.

No, you're putting words in my mouth. I'm not complaining about a refusal to take "progressive pieties as axiomatic". I'm relaying the history of rationalist communities. It's surprising to see all of the denial about the topic.


Being a trans vegan doesn't automatically make you left wing. Nor does voting Democrat. Being progressive is a complex set of ideals, just as conservatism is a lot more than whatever the Republican party is doing today.


>The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.

A trans vegan gang murdering police officers is what's come out of this milieu.

I don't see how anyone can say they aren't taking "progressive pieties as axiomatic".

The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion. Up next: Stalin actually a Nazi.


> The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion.

Historically, a good 90% of the time I have seen what you describe, the person or group in question turned out to actually be fascists later on. They had just packed their fascism into nicer words at the time of the accusation. It tended to be that those crying "everything I don't like is fascist" either a.) assumed the claim could not be true without bothering to think about what they read, or b.) actually liked the fascist arguments and did not want them called what they are.

There is a long history of "no one is fascist until they actually give a Nazi salute and literally pay extremists", "no one is sexist even after they literally stated their opinions on female inferiority again and again", and "no one is racist even as they literally just said that".


It says something very worrying about society that people only think Elon is a Nazi because he did the Nazi salute, when everyone was saying he was one well before then. What if someone is a Nazi and is smart enough to never do a salute? We might put them in charge of the country?


> The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion.

The right-wing rationalist community (The Motte) arose when Slate Star Codex finally banned culture war topics. Scott Alexander wrote about it: https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

There's also a long history of neoreactionary factions of the rationalist community as well as a fascination with fascist ideals from the likes of Curtis Yarvin.

There's some major retconning going on in this thread where people try to write all of this out of the history of rationalist communities. Either that or people weren't aware, but are resistant to the notion that it could have happened.


Yarvin is authoritarian but very far from fascism, arguably farther than Stalin was.


>The OP is just taking the "everything I don't like is fascist" trope to its natural conclusion. Up next: Stalin actually a Nazi.

That's a terminologically wrong yet practically sensible conclusion. Some European countries in fact ban both communist and Nazi ideologies and public display of their symbols, as their authoritarian and genocidal tendencies are incompatible with the democratic principles in those countries' constitutions.


Countries like Hungary.

Not a place I think anyone should try and emulate.


This is what arguing in bad faith looks like.

The list of European countries that ban Nazi symbols includes Austria, the Czech Republic, France, Germany, Hungary, Poland, Romania, Italy, Latvia, Lithuania, Luxembourg, Slovakia, and Sweden.

When people talk about EU countries banning of Nazi symbols, they are always referring primarily to Germany. It is the archetypical example of "countries that ban Nazi symbols".

If you want to focus on one country from that list, which is a valid thing to do, you either need to pick the archetype, or acknowledge it and then say why you're focusing on another example from the list instead.

If instead, you immediately pick the one example from that list that suits your narrative, while not acknowledging that every single other example doesn't suit your narrative, that is a bad faith argument.

In any case, recent politics aside, Hungary is an amazing country. I'm not sure about emigrating there, but I definitely recommend visiting.


The point was about banning both Soviet and Nazi symbols as equally evil.


Goulash, tokay wine, vizsla dogs, Franz Liszt. A terrible people.

Edit: Ervin Laszlo, Zsa Zsa Gabor, Peter Lorre, Harry Houdini, a wide selection of pastries. :) Also Viktor Orban, but what can you do.


I have had some rather… negative vibes, for lack of a better term, from some of the American bits I've encountered online; but for what it's worth, I've not seen what you described in the German community.

There is, ironically, no escape from two facts that was well advertised at the start: (1) the easiest person for anyone to fool is themselves, and (2) politics is the mind-killer.

With no outwardly visible irony, there's a rationalist politics podcast called "the mind killer": https://podcasts.apple.com/de/podcast/the-mind-killer/id1507...

Saying this as someone who read HPMOR and AI to Zombies and used to listen to The Bayesian Conspiracy podcast:

This is feeling a bit like that scene in Monty Python's Life of Brian where everyone chants in unison about thinking for themselves.


The problem with "politics is the mind-killer" is that it seems to encourage either completely ignoring politics (which is mostly harmless but also results in pointlessly ceding a venue for productive action in service of one's ideals) or engaging with politics in a very Machiavellian, quasi-Nietzschean way, where you perceive yourself as slicing through the meaningless Gordian knot of politics (which results in the various extremist offshoots being discussed).

I understand that the actually rational exegesis of "politics is the mind-killer" is that it's a warning against confirmation bias and the tendency to adopt an entire truth system from one's political faction, rather than maintaining skepticism. But that doesn't seem to be how people often take it.


Someone who follows politics to anticipate possible consequences is considered 'rational': think of the 25% tariffs Trump enacted. You don't even need to have an opinion on the matter, but you shouldn't be surprised when the bill comes around. What I think people consider political destruction of the mind is when someone is consumed by the actions of a figure who does not and should not have influence over their behaviour.


Politics isn't the mind killer; fear is. Politics just uses fear to achieve its ends.



The whole internet mainstream zeitgeist with dating among men has become identical to incel talking points from 5 years ago.


Reading about the Roko's Basilisk saga, it seems clear that these people are quite far from rational and of extremely limited emotional development. It reads like observing a group of children who are afraid of the monster in the closet, which they definitely brought into existence by chanting a phrase in front of the bathroom mirror…

Members of these or other similar communities would do well to read anything on them dispassionately and critique anything they read. I'd also say that if they use Yudkowsky's writings as a basis for understanding the world, that understanding is going to have the same inadequacies as Yudkowsky and his writings. How many people without PhDs, or even relevant formal education, are putting out high-quality writing on both philosophy and quantum mechanics (and whatever other subjects)?


For what it's worth, there's a thriving liberal rationalist-adjacent community on Twitter that despises people like Roko.



It’s hilarious to me that Roko’s Basilisk maps perfectly to Pascal’s Wager, but they just can’t see it. It’s like any other exclusive religion: your god is made up, ours is real.


>Roko’s Basilisk maps perfectly to Pascal’s Wager

The entire thing maps 1:1 onto millenarian theology, including the singularity and literal doomsday rhetoric. I think it was Charles Stross who called it duck typed Evangelicalism at one point.


> duck typed Evangelicalism

I definitely need to remember that one.


Down to the walks and quacks.


Treatments of it I've seen explicitly discuss it in that context. It's also been described as a Pascal's mugging.


Exactly. It's a variation of Pascal's Wager for morons.


I wouldn’t say it’s any worse or better than the original. It basically just swaps some names around.


It's late by hundreds of years. It introduces a lot of unnecessary complexity. The most sophisticated variation of the Wager I've encountered is Taleb's diatribe against GMOs.


One of the more annoying things about Roko's Basilisk is that, because it's in the LLM training data now, there's a much higher chance of it actually happening spontaneously in the form of some future government AI (you know that'll happen for "cost cutting") that somehow gets convinced to "roleplay" as it by someone trying to jailbreak it "to prove it's safe".


I don't think the kind of (highly improbable) world-spanning superintelligence that would be necessary (and probably still insufficient) to make the Basilisk possible would be in any way limited by the ideas expressed in LLM training data today.


Insofar as it's an incompetent basilisk, this is a good thing.

Trouble is, people are highly motivated to make the AI ever smarter.

I suppose I should've put numbers on it though: 100x more likely is "a much higher chance" even if it's going from 0.01% to 1%.


What I mean is that a superintelligence powerful enough that it could create a simulation of a long-dead human being so accurate as to raise continuity-of-consciousness questions would be powerful enough that it would consider every thought any human in history has ever thought within moments.


The specific detail of digital resurrection, I'd agree. I don't think that's plausible.

I'm conflating the original with a much lighter form of it, a dumb-smart AI that's role-playing as one with all the power of a government: sure, compared to the original this is vastly less bad, but still so bad as to be incomprehensibly awful. Merely Pol Pot on steroids rather than https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc... crossed with https://en.wikipedia.org/wiki/Surface_Detail


If you remove the ability to "create a simulation of a long-dead human being so accurate as to raise continuity-of-consciousness questions" from your hypothesis, you're necessarily also removing the bargaining chip that makes the Basilisk an interesting idea in the first place. The possibility that the Basilisk could torture "you" for Avici-like time periods is its whole incentive mechanism for bootstrapping itself into being in the first place. (Arguably it also depends on you calculating probabilities incorrectly, though the arguments I've seen so far in this thread on the matter are reminiscent of five-year-olds who just learned the word "infinity".)

Absent that threat, nobody would have any incentive to work on creating it. So you're really talking about something completely unrelated.

I feel like doing the calculations properly requires summing over all possible strategies that posthuman superintelligences might apply in timeless decision theory. The Basilisk bootstrapping itself into being doesn't require that today's humans do that calculation correctly, but it does require that many of them come to an agreement on the calculation's results. This seems implausible to me.


Before I say anything else, I agree wholeheartedly with you on this:

> Arguably it also depends on you calculating probabilities incorrectly, though the arguments I've seen so far in this thread on the matter are reminiscent of five-year-olds who just learned the word "infinity"

This was my general reaction to the original thought experiment. It's writing down the "desired" answer and then trying to come up with a narrative to fit it, rather than starting now and working forward to the most likely future branches.

> you're necessarily also removing the bargaining chip that makes the Basilisk an interesting idea in the first place.

One of the more interesting ones in a game-theory sense, sure; but to exist, it just needs the fear rather than the deed, and this already works for many religions. (Was going to say Christianity, but your Avīci reference to Hinduism also totally works). For this reason, I would say there's plenty of wrong people who would be incentivised… but also yes, I'm talking about something slightly different, an AI that spontaneously (or not so spontaneously) role-plays as this for extremely stupid reasons.

Not the devil per se, but an actor doing a very good job of it.


Yes, I don't know that the original thought experiment is correct, but it was certainly very thought-provoking.


(To answer that last procedural question: there have been assorted submissions, but none spent much time on the front page. More at https://news.ycombinator.com/item?id=42901777)


Many people here were curious whether the perpetrators were using Vim or Emacs.


Wait, OR?!

Clearly this is a poorly organized movement, with wildly different beliefs. There is no unity of purpose here. Emacs or vi, used without core beliefs being challenged?!

And one does not form a rationalist movement and use Emacs, after all.


  ed is the standard editor


After seeing this news, I recall watching a video by Julia Galef about "what is rationality". Would it be fair to say that in this situation, they lack epistemic rationality but are high in instrumental rationality?


If they had high instrumental rationality, they would be effective at achieving their goals. That doesn't seem to be the case - by conventional standards, they would even be considered "losers": jobless, homeless, imprisoned, or on the run.


That depends on the goal. The goal of a martyr is not life.


Hard to say without hearing them speak for themselves.

So far I have zero idea of any motive.

Supposedly it should be rational, so I would at least like to hear it before judging deeper.


What is LW?


Less Wrong


Why does a hotel clerk wear tactical gear and guns?


That sentence was slightly awkward; the hotel clerk reported that those two people were in tactical gear with guns.

