
Anyone else come to the realization that when spokespeople talk about "AI Safety" they aren't concerned with the skynet-esque enslavement of mankind or paperclip maximizing, but rather with putting controls in place that prevent people from using the technology in a way that is misaligned with the maximum extraction of profit?



I've mentioned in past posts that the term "safety", when applied to the AI discussion, is never quantified or qualified in any way.

It's more of a smear. If you're the one arguing from an AI safety standpoint, it means your opponent is not being safe and is dangerous.

You should be expected to articulate why your position is "safer" and how "safe" is defined.

In my opinion most AI safety arguments are about vaguely defined variables like speed: "it's going too fast" - too fast for whom? Who defines what fast is? Do you slow down 2x or 10x? Do less-concerned competitors and/or rival nations zip by, so that the end result is still the same?

There is some validity to the discussion if AI is being "racist" or "biased" (again, however you define these), but once more, competitors and/or rival nations may be less concerned and the end result may still be the same.

If anything, the "safest" option is to define what your concerns are, then race ahead to become the canonical solution or de facto standard, so that the rest follow it or are bound by it.


We need open source models that are trained on published data, so they can be reproduced from source, and that can run on lightweight machines like mobile devices.
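
For what it's worth, the "runs on lightweight machines" part is already close. Here's a minimal sketch of loading and querying a small open-weights model locally, assuming the Hugging Face transformers library; TinyLlama is just one example of a small checkpoint and could be swapped for any similar open model:

    # Minimal sketch: run a small open-weights model locally.
    # Assumes the Hugging Face transformers library is installed;
    # the model name below is one example of a small open checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example small model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Tokenize a prompt, generate a short completion, and decode it.
    prompt = "What does 'AI safety' mean?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

A 1B-parameter model like this fits on commodity hardware; the remaining gap is the "trained on published data" part, since most open-weights models today don't ship reproducible training data.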



At this point, I suspect that "AI Safety" has taken on the practical meaning of "regulations to make sure the big guys are the ones who benefit"... which is probably true anyway because of the capex needed to run these massive LLMs, but I am sure they would still like a moat or two against, e.g., the Chinese.

Otherwise I don't see what this means now; that the LLMs, e.g., don't use racist terms? OK, great, nice, but how is that anything more than what you need to do on the web now anyway? How's that related to "AI" at all?

What I'd love to see is this gambit backfiring, so that we instead start talking about "tech safety", which creates actual regulations with teeth that cut the techzillas down a bit (or a lot).


It's like this with a lot of industries. E.g., Meta is probably interested in social media "safety" to the extent it grants them legal protections as a moat and allows them to define, through lobbying, the regulatory environment they operate within. Likewise for automakers or any other business with a chance for harm.


Your "realization" runs counter to the fact that everyone on the anti-safety side is monetarily incentivized (VCs, OpenAI employees with $1m+ pay packages, startup founders seeking to get rich), compared to people who never aimed to profit at all, like Helen Toner, Yudkowsky, Bengio, and Hinton.


Please stop framing your own personal cynicism as a grand narrative revelation.



