
>[D]oesn't mean they don't exist

Never said they don't exist, merely that they are "minimal" - i.e., no other policy I could think of seems like it would obviously lead to lower costs while still achieving the desired outcome.

>I'm also talking about the cost that the system imposes involuntarily on others who are neither actually guilty nor bounty seekers

This is an argument from negative externalities. First, consider the simple argument from scale. If you buy the idea that a Bostrom-like AI is both (1) very likely to be created on our current technological trajectory, and (2) will probably kill us all, then it's not hard to argue that the benefits reaped from avoiding that fate justify a similarly high cost to society, maybe several percentage points of global GDP. After all, you're not just risking deflating present-day GDP, you're risking multiplying all future GDPs by 0. Every country in the First World already imposes tremendous involuntary costs on people for things of much less significance, like 'forcing' you to go through TSA even though you would never in your wildest dreams try to hijack a plane, so the mere existence of involuntary costs doesn't sway me.
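
(To make the scale argument concrete, here's a back-of-envelope expected-value calculation. Every number in it - world GDP, the horizon, the slice of extinction risk removed - is an illustrative assumption, not a measured figure.)

    # Back-of-envelope expected value of trimming extinction-level AI risk.
    # All inputs are illustrative assumptions, not measured quantities.
    world_gdp = 100e12          # ~$100 trillion/year, rough current figure
    horizon_years = 100         # only count the next century of output
    p_doom_averted = 0.01       # assume the policy removes just 1% of extinction risk

    expected_benefit = p_doom_averted * world_gdp * horizon_years
    policy_cost = 0.03 * world_gdp   # "several percentage points of global GDP" per year

    print(f"expected benefit: ${expected_benefit/1e12:.0f}T")   # ~$100T
    print(f"annual policy cost: ${policy_cost/1e12:.0f}T")      # ~$3T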

Alright, but what would the magnitude of those involuntary costs be? If this policy costs everyone a hundred bucks a year in hassle, we've still got a real problem. There is strong reason to believe that, for almost everyone, the cost really would be very, very low in absolute terms. How much of the population is engaged in frontier-pushing AI research right now? 1%? 0.1%? Probably a few orders of magnitude lower than that. OpenAI still employs fewer than, what, a thousand people, etc. etc.

The vast majority of people will never do anything remotely like that in their lives. So one would expect very cheap private insurance policies to appear as an effective way to avoid being pursued and harassed by private bounty hunters. The firms that pop up to provide this service would probably get very, very good at negotiating with bounty hunters very, very quickly, getting anyone who isn't actually in the know dropped from the pursuit ASAP. The cost for almost everyone would be on the order of cents per year of protection; that's how unlikely it is that some random clerical worker in Kansas has any serious involvement in creating the next self-improving AI. In exchange, of course, these insurers have a very strong incentive not to cover people who actually are involved in such activities, and so they could become a critical source of information for helping the bounty hunters target their search.
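
(Putting rough numbers on both the base rate and the premium - again, every figure below is an illustrative guess, not data:)

    # Rough base-rate and premium arithmetic; every figure here is an illustrative guess.
    adults = 5e9                     # order-of-magnitude world adult population
    frontier_researchers = 50_000    # generous guess at frontier-scale AI researchers
    base_rate = frontier_researchers / adults             # ~1e-5, far below 0.1%

    p_pursued_per_year = 1e-5        # assumed chance a random insured person is ever targeted
    cost_per_incident = 10_000       # assumed legal/negotiation cost to make it go away
    fair_premium = p_pursued_per_year * cost_per_incident # ~$0.10/year before overhead

    print(f"base rate of frontier researchers: {base_rate:.4%}")
    print(f"actuarially fair premium: ${fair_premium:.2f}/year")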

>A major source of the international threat is governmental, where private bounties aren’t going to work at all

Strongest argument I've heard so far, thank you for raising it.

First I'll point out that it would already be a dream come true for extending our AI doom timelines if the only way to do AI research without fear of being extradited to a bounty-friendly nation were to work in a government lab. The Department of Defense is very impressive, but it still doesn't move nearly as quickly as private industry and independent universities do on work like this. That could mean generations more of humanity around to live and love and prosper before we get snuffed out. Don't let the perfect be the enemy of the good!

Let's get serious. AI researchers in your home country are the easiest case, because you have unilateral law on your side. AI researchers in other countries are quite a bit more difficult, because now you're in the messy world of international diplomacy. If the other country adopts a bounty law as well, you both win. If neither of you does, you both lose. But what about the case where one of you does and one of you doesn't? I posit that here, in the end, you probably have to make it so the bounty-friendly nation is the one that wins, with force - that is, by allowing bounty hunters to turn in and extract money from even foreign employees if they get within your borders. And, yes, if the other governments decide to respond by closing their private AI businesses and opening up government-only labs ... well, you've slowed the wave quite a bit, but you're probably going to have to be more careful. No one ever said shifting the Nash equilibrium would be easy.
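
(A toy way to see the equilibrium-shifting claim: a 2x2 game where each country either adopts a bounty law or doesn't. The payoffs below are invented purely for illustration; the point is just that if the adopter can still enforce bounties against foreign researchers who enter its borders, adopting becomes the better move regardless of what the other country does.)

    # Toy 2x2 game between two countries: adopt a bounty law or don't.
    # Payoffs are invented purely to illustrate the equilibrium-shifting claim.
    # key = (country A's choice, country B's choice) -> (A's payoff, B's payoff)
    payoffs = {
        ("adopt", "adopt"): ( 3,  3),   # both win: research slows everywhere
        ("adopt", "dont"):  ( 1, -1),   # A still deters researchers who enter its borders
        ("dont",  "adopt"): (-1,  1),
        ("dont",  "dont"):  (-3, -3),   # both lose: race continues unchecked
    }

    def best_response(player, other_choice):
        """Return the choice maximizing this player's payoff given the other's choice."""
        def payoff(c):
            key = (c, other_choice) if player == 0 else (other_choice, c)
            return payoffs[key][player]
        return max(("adopt", "dont"), key=payoff)

    # With these assumed payoffs, "adopt" is a dominant strategy for both sides,
    # so (adopt, adopt) is the unique Nash equilibrium.
    print(best_response(0, "adopt"), best_response(0, "dont"))   # adopt adopt
    print(best_response(1, "adopt"), best_response(1, "dont"))   # adopt adopt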

But you do have other options, even here. One hazy possibility in my mind would be offering US or EU citizenship to any foreign national who is both (1) a credible AI researcher and (2) precommits to stopping their research as soon as they take the offer. Bounty hunters win because (duh) you now have a heavily pre-filtered list of marks you can watch like a hawk for the instant they slip up. And chances are good that someone on that list will slip up, even after getting citizenship, and then you can extract a tidy sum from them for minimal effort. The foreign researchers who take the offer win because living and working in e.g. New York City as an employee of Jane Street is probably much nicer than working on recursively self-improving AI in a secretive, underpaid, underfunded, underground lab in e.g. Chengdu. (It's important to remember that cutting-edge AI researchers are, almost by definition, really really smart and really really good with computers and math. They have a uniquely great set of careers they can switch into easily.)

The world wins because the risk of self-caused extinction goes down another iota. China "loses", but it loses in the smallest way possible - it decided to undertake risky research instead of telling its citizens to choose something more straightforwardly good for humanity, and it suffered a bit of brain drain. That's aggravating, but it's hardly worth launching a China v. NATO war over. And hey, if China wants to stop the brain drain, they already have a good model for a very effective law they could implement to get people to stop doing dangerous research - that same bounty law we've been discussing.

I freely admit this is the weakest part of my theory, because it's the weakest part of anyone's theory. International stuff is always much harder to reason about. Still, whereas most policies I've seen put forward seem to fail instantly and obviously, mine seems only probably destined to fail. That's a big improvement in my eyes.



> First I'll point out that it would already be a dream come true for extending our AI doom timelines if the only way to do AI research without fear of being extradited to a bounty-friendly nation were to work in a government lab.

Yeah, the problem with AI doomers is that they let fantastical, baseless estimates of p(doom) drown out much more imminent risks, such as AI asymmetry facilitating tyranny - a near-term, high-probability threat.



