
That doesn't make sense to me. Would you rather have it in the hands of people who think a lot about safety but might be compelled to give it to bad actors, or would you rather just give it to bad actors right away?

It's not a zero-sum game where you can level the playing field and say everything's good.



I'd rather have it in the hands of everybody so that we can decide for ourselves what this means for safety, everyone can benefit from the new technology without restriction, and so that we are not dependent on someone else's benevolence for our protection or for access to powerful new technology.

Leveling the playing field won't instantly make everyone safe, but leaving it uneven certainly doesn't either.


It's not clear to me how your argument would work for GPT-4 when it's clearly not reasonable for nukes.


We elect the people with the nukes (in theory). Don't remember electing OpenAI.

Ditto for the sewage/water system or other critical infrastructure.

Not saying OpenAI needs to be elected or not, just expanding on what (I think) they meant.


This is the same argument people use against the 2nd amendment, but it fails for similar reasons here.

If we accept that the public having access to GPT-4 carries the same level of risk as the public having access to nukes would, then I'd argue that we should treat GPT-4 the same way as nukes and restrict access to the military only. I don't think that's the case here, though, and since the risks are very different, we should be fine with not treating them the same.


The counter for nukes is nobody should have nukes. Anybody trying to build nuclear weapons should be stopped from doing so, because they're obviously one of the most catastrophically dangerous things ever.

At least with AI you can cut the power, for now anyway.


We can use nukes to generate EMPs to take out the AI


Nonproliferation is practical with nuclear weapons.

With something that can be so trivially copied as a LLM that isn't possible.

So in this scenario, one could argue that ensuring equitable distribution of this potentially dangerous technology at least levels the playing field.


It's not practical. The NPT is worthless, because multiple countries just ignored it and built their nukes anyway.

North Korea is dirt poor and they managed to get nukes. Most countries could do the same.


It does. Mutually Assured Destruction (MAD)

https://en.m.wikipedia.org/wiki/Mutual_assured_destruction


That's not everyone. That's major strategic powers. If everyone (in the literal meaning of the term) had nukes we'd all be dead by now.


The nuke analogy only applies if the nukes in question also work as anti-nuclear shields. It's also a false equivalency on a much broader fundamental level. AI emboldens all kinds of processes and innovations, not just weapons and defence.


AI of course has the potential for good—even in the hands of random people—I'll give you that.

Problem is, if it only takes one person to end the world using AI in a malevolent fashion, then human nature is unfortunately something that can be relied upon.

In order to prevent that scenario, the solution is likely to be more complicated than the problem. That represents a fundamental issue, in my view: it's much easier to destroy the world with AI than to save it.

To use your own example: currently there are far more nukes than there are systems capable of neutralizing nukes, and the reason for that is the complexity inherent to defensive technology; it's vastly harder.

I fear AI may be not much different in that regard.


It's not a false equivalency with respect to the question of overriding concern, which is existential safety. Suppose nukes somehow also provided nuclear power.

Then, you could say the exact same thing you're saying now... but in that case, nukes-slash-nuclear-energy still shouldn't be distributed to everyone.

Even nukes-slash-anti-nuke-shields shouldn't be distributed to everyone, unless you're absolutely sure the shields will scale up at least as fast as the nukes.


I wonder how this would work for nuclear weapons secrets.


I think it's okay to treat different situations differently, but if someone were able to make the case that letting the public have access to GPT-4 was as risky as handing the public all of our nuclear secrets I'd be forced to say we should classify GPT-4 too. Thankfully I don't think that's the case.


But if this tool is as powerful as Microsoft says, then won't an average nuclear physicist in a hostile state now be more easily able to work out your nuclear secrets (if they exist)?

I'm starting to wonder how long these systems will actually stay publicly accessible.

On the other hand, people might be able to use these machines to gain better insights into thwarting attacks... seems like we're on a slippery slope at the moment.


My guess is that eventually our devices will get powerful enough, or the software optimized enough, that we can build and train these systems without crazy expensive hardware, at which point everyone will have access to the technology without needing companies to act as gatekeepers.

In the meantime, I expect our every interaction with this technology will be carefully monitored and controlled. As long as we have to beg for access to it, or are limited to what others train it on, we'll never be a threat to those with the money and access to use these tools to their full potential.

I think universities might help serve to bridge the gap though, as they have in the past when it came to getting powerful new technology into the hands of the not-quite-as privileged. Maybe we'll see some cool things come out of that space.


People who think a lot about safety are the bad actors when 1) there are incentives other than safety at play, and 2) nobody actually knows what safety entails because the tech is so new.



