Your analogies don't quite align with this technology.

We've been exposed to propaganda and disinformation for many decades, long before the internet became their primary medium, yet people haven't learned to become immune to them. They're more effective now than they've ever been, and AI tools will only make them more so. Arguing that more exposure will somehow magically solve these problems is delusional at best, and dangerous at worst.

There are other key differences from past technologies:

- Most took years or decades to develop and gain mass adoption. That time is critical for society and governments to adapt to them. Adoption rates have been accelerating across the board, but modern AI development is particularly fast. Governments can barely keep up enough to decide how it should be regulated, let alone people. When you consider that this tech is coming from companies that pioneered the "move fast and break things" mentality, in an industry drunk on greed and hubris, it should give everyone cause for concern.

- AI has the potential to disrupt many industries, not just one. But further than that, it raises deep existential questions about our humanity, the value of human work, how our economic and education systems are structured, etc.

These are not problems we can solve overnight. Turning a blind eye to them and advocating for less regulation and more exposure is simply irresponsible.



> advocating for less regulation and more exposure is simply irresponsible.

We let people buy 6,000-pound vehicles capable of traveling 100+ mph.

We let people buy sharp knives and guns. And heat their homes with flammable gas. And hike up dangerous tall mountains.

I think the LLM is the least of society's worries and this pervasive thinking that everything needs to be wrapped up in bubble wrap is what is actually dangerous.

Can a thought be dangerous? Should we prevent people from thinking or being exposed to certain things? That sounds far more Orwellian.

If you want to criminalize the use of LLMs for fraud, then do that. But don't make the technology inaccessible and patronize people by telling them they're not smart enough to understand the danger.

This is not a "fragile world" technology in its current form. When they're embodied, walking around, and killing people, then you can sound the alarm.


There's a vast middle ground between completely unregulated technology and an Orwellian state. Let's not entertain absolutes.

> We let people buy 6,000 pound vehicles capable of traveling 100+ mph.

> We let people buy sharp knives and guns. And heat their homes with flammable gas. And hike up dangerous tall mountains.

All of those have regulations around them, and people have become familiar with how they work. More importantly, they're hardly as disruptive to our lives as AI technology has the potential to be.

We didn't invent airplanes and let people on them overnight. It took decades for the airline industry to form, and even more for flights to be accepted as a standard form of transportation. We created strict regulations that plane manufacturers and airlines must follow, which were refined over the 20th century.

Was this unnecessary and Orwellian? Obviously the dangers of flight were very clear, so we took precautions to ensure the necessary safety. With AI, these dangers are not that clear.

> If you want to criminalize the use of LLMs for fraud, then do that. But don't make the technology inaccessible and patronize people by telling them they're not smart enough to understand the danger.

It's far from patronizing; it's just reality. People don't understand the dangers of the modern internet either, yet they're subject to privacy violations, identity theft, scams, and all sorts of psychological manipulation from advertising and propaganda that influences how they think, vote and behave in society. Democratic institutions are crumbling, sociopolitical tensions are the highest they've been in the past 30 years in most Western countries, and yet you would be fine with unleashing a technology that has a high chance of making this worse? Without any guardrails, or even some time for humanity to start addressing some of the existential questions I mentioned in my previous post? Again, this would be highly irresponsible and dangerous.

And yet I'm sure that's what's going to happen in most countries. It's people who think like you that are pushing this technology forward, and unfortunately they have a strong influence over governments and the zeitgeist. I just hope that we can eventually climb out of the hole we're currently digging ourselves into.


> We created strict regulations that plane manufacturers and airlines must follow

In response to actual incidents, not imagined ones. Regulations should not come first. We already have the biggest companies chasing after a regulatory moat to protect themselves from competition and commoditization, and that's not how this should work.

> we took precautions to ensure the necessary safety.

No we didn't! We used the technology, we made lots of mistakes, and we learned over time. That's how it's been with every innovation cycle. If we had regulated from day one, maybe we would have slowed down and never reached the point we're at today.

Europe is a good model for a presumptive, over-regulated society. Their comparable industries are smaller and lag behind our own because of it.

> People don't understand the dangers of the modern internet either,

People "don't understand" a lot of things, such as the dangers they expose themselves to when driving over 30 mph. Yet we don't take that privilege away from them unless they break the law. Laws that only bare teeth after the fact, mind you.

Imagine if we had tried to "protect society from the internet" and restricted access. The naysayers of the time wanted to, and you can find evidence if you look at old newspapers. Or imagine if we had a closed set of allowed business use cases and didn't allow people to build whatever they wanted without some official process. There would be so many missing pieces.

Even laws and regulations proposed for mature technologies can be completely spurious. For instance, all the regulations being designed to "protect the children" that are actually more about tracking and censorship. If people cared about protecting the children, they'd give them free school lunches and education, not try to track who signs up for what porn website. That's just building a treasure trove of kompromat to employ against political rivals. Or projecting the puritanical dreams of some lawmakers onto the whole of society.

> People [...] they're subject to [...] all sorts of psychological manipulation from advertising and propaganda that influences how they think, vote and behave in society

> Democratic institutions are crumbling

So this is why you think this way. You think of society as a failing institution of sorts. You're unhappy with the shape of the world and you're trying to control what people are exposed to and how they think.

I don't think that there's any amount of debate between you and me that will make us see eye to eye. I fundamentally believe you're wrong about this.

We live as mere motes of dust within a geologically old and infinite universe. Our lives are vanishingly short. You're trying to button down what people can do and fit them into constructed plans that match your pessimistic worldview.

We need to be free. We need more experimentation. We need more degrees of freedom with less structure and rigidity. Your prescriptive thinking lowers the energy potential of society and substitutes raw human wants with something designed, stamped, and approved by the central authority.

We didn't evolve from star dust to adventurous thinking apes just to live within the shackles of other people's minds.

> unleashing a technology that has a high chance of making this worse

You are presupposing an outcome and you worry too much.

Don't regulate technology, regulate abusive behavior using the existing legal frameworks. We will pass all the laws we need as situations arise. And it will work.

> eventually climb out of the hole we're currently digging ourselves into.

We're not in a hole. Stop looking down and look up.


> In response to actual incidents, not imagined ones.

Not _just_ in response to actual incidents. Airplanes don't need to crash for us to realize that flying is dangerous. Pilot licenses were required as early as 1911[1], years before the commercial aviation industry was established. International regulations were passed in 1919[2], when the aviation industry was still in its infancy.

> Europe is a good model for a presumptive, over-regulated society. Their comparable industries are smaller and lag behind our own because of it.

The EU is one of the few governments that at least tries to protect its citizens from Big Tech. Whatever you think it's lagging behind in, I'd say they're better off for it. And yet even with all this "over-regulation", industry-leading giants like ASML and Airbus, and tech startups like Spotify, Klarna and Bolt are able to exist and prosper.

> Even laws and regulations proposed for mature technologies can be completely spurious.

I'm not saying that more regulation is always a good thing. I lean towards libertarianism. What I do think is important is to give society and governments enough time to catch up to the technology being put out into the world, especially with something as disruptive as AI. We should carefully consider its implications, and educate people and new generations about its harms and merits, so that our institutions and communities can be better prepared to handle it. We obviously can't prepare for everything, but there's a world of difference between the YOLO approach you seem to be suggesting and a measured way forward.

A counterpoint to a lax stance on industry regulation can be seen with Big Tobacco. It took many decades of high mortality rates related to smoking for governments to pass strict regulations. Tobacco companies lied and fought those regulations until the very end, and now they're still prospering in countries with lax regulations, or pivoting to more acceptable products like vapes.

My point is that companies don't care about harming society, even when they know their product is capable of it. Their only goal is increasing profits. With products like AI this potential harm is not immediately visible, and we might not discover it until decades from now, perhaps when it's too late to reverse the damage.

> You think of society as a failing institution of sorts.

Not society as a whole. But if you don't see a shift away from democratic governments to authoritarianism, and an increase in sociopolitical tensions worldwide over the last decade in particular, we must be living in different worlds. If you also don't see the role the internet has played in this change, and how AI has the potential to make it significantly worse, I'm not going to try to convince you otherwise.

> You are presupposing an outcome and you worry too much.

It doesn't take a genius to put 2 and 2 together. :) I honestly don't worry much about this, but I am dumbfounded that more people aren't sounding the alarm about the direction we're heading in.

> Don't regulate technology, regulate abusive behavior using the existing legal frameworks.

Technology moves too fast for legal frameworks to keep up. And even when they do exist, companies will do their best to avoid them.

> We will pass all the laws we need as situations arise. And it will work.

It _might eventually_ "work". In the meantime, those situations that arise could have unprecedented impact that could've been avoided given some time and preparation.

What you seem to be ignoring is that modern technology is not just about building TVs, cars and airplanes. We came very close to a global thermonuclear war in the last century, and tensions are now rising again. Allowing technology to advance with no guardrails or preparation, while more countries are ruled by egomaniacal manchildren, is just a recipe for disaster. All I'm suggesting is that it would be in our collective best interest to put more thought into the implications of what we're building.

[1]: https://newenglandaviationhistory.com/tag/connecticut-aviati...

[2]: https://en.wikipedia.org/wiki/Paris_Convention_of_1919


Quite a lot of whataboutisms and straw men in your post. Let's stick to LLMs, as that was the original topic.


It's contextualization, not a logical fallacy.

Let's stop treating LLMs as spooky.


Lots of things that aren't spooky are nonetheless dangerous. Matches, for example. Scissors. Covid and A-10 Warthogs.



