
It's my view that "ethical concerns" in this context are confused at best, frequently worthless virtue signaling, and at worst an absurdly _unethical_ effort to use state power to block research and inquiry in order to protect oligopoly profits. As a bonus side effect they get used as an excuse to train in discriminatory biases, with whatever negative effects those will have -- probably mostly just undermining trust in the involved institutions.

Offer the system to consenting adults, make them aware that the output is frequently garbage, TEACH them how to produce garbage to help them recognize some of the most obvious failure modes. Done. At that point the result would be generally safer than the URL bar in users' browsers, which can take them to all manner of harmful things, including potentially other LLMs -- or the most dangerous of intelligences: contact with other human beings.

Mentally ill people, violent people, etc. will always do unfortunate things involving whatever things are in proximity to them. The internet isn't "safe", libraries aren't "safe", the _world_ isn't "safe".

But even if the world itself were safe, people would still manage to harm themselves and others with it, because people aren't safe. No one bothers writing about people who committed suicide after losing a game of chess to a child or watching a sad movie, but those things happen too. It's a big world, so you name it: someone has probably killed themselves over it or with it, or killed someone else because of it.

Go look at the mobile 'games' offered in the Play Store. Can you look at Google raking in tons of money by turning children into click-a-holics and tell me seriously that you think "safe" was ever a significant objective?

It's only a big deal in the context of ML because there is a socially well-connected apocalypse cult ( https://archive.is/eqZx2 ) pushing the idea that today's ML systems are materially unsafe in ways that other things aren't, and it's a commercially convenient narrative for protecting oligopolies.




I guess your view is that if a new thing is approximately as terrible as an existing thing, then it's fine. Mine is that new things are a rare opportunity to make the world less terrible, so we should take a swing at it.


Fair position, but I'd say: what's somewhat rare is the opportunity to make large changes, and it's usually a false opportunity, because large changes are generally ill-advised. However, large changes aren't the only ones we can make: we can and should always improve, and if we do, the improvements don't need to be large. This is good because large improvements often don't work -- because people route around them due to their cost or unfamiliarity -- or don't do what we thought they'd do -- because we're not as smart as we think we are, and we can easily make things worse through our efforts to do better.

That's why I suggest things like putting LLMs behind education in a way that we don't for search boxes or URL bars. Anything we do to make us all more savvy consumers of information makes the world less terrible. Yet it's an improvement that is unlikely to backfire, doesn't present much incentive to route around, remains somewhat effective even if routed around, doesn't significantly impede forward progress, and doesn't increase costs tremendously.

I can empathize with the frustration that there are so many unfixed things that can be improved, but I'm confident in mankind that if we all keep nudging in the right direction that we'll get there-- and get there sooner than we would by attempting a Great Leap Forward that has too great a chance of disaster, too great a chance of stopping our progress, setting us back, or coming at too great a cost.

Just as in software the most complex code you can write is code you can't debug or maintain -- because you've got to be much smarter to debug it than to write it -- in our cultural progress we can imagine advances far greater than we can accomplish safely and sustainably in practice. But what we can accomplish brought us to where we are now. We should feel proud of it and confident in the future, and know that no matter how much better we make things, we'll still think they're terrible and find ways to improve them.


You seem to be going around to everyone who replied to you yesterday and responding in anger. If you're having a bad day, it might be better to not reply.


wpietri's response to me was in no way in anger, and his position in it is a perfectly legitimate one.


I definitely took reducing your entire comment to "I guess your view is that if a new thing is approximately as terrible as an existing thing, then it's fine" as not coming from a place of good faith, but maybe I'm being harsh.



