There's really no need to be so hostile. Do you complain that Disney doesn't make hardcore porn too? Like it or not, Claude and all these other AIs that are censored do have a place in the world. And they do serve more than the "bottom 5%". At the bare minimum, they serve more than the bottom 5% when it comes to capital, which is definitely more important from a company POV. There are plenty of uncensored models out there to play with. They aren't quite there yet, but they're decent enough and slowly getting better. I highly doubt they will ever have the same reach as Claude, ChatGPT, or Bard. The barrier to entry is too high for the normal person and even for some technical people. I'd love to be proven wrong, but my money is on the multi-billion-dollar corporations.
And your comment regarding the moral policing that wasn't there in the early days of the internet is just ridiculous. Putting aside the fact that there's nothing wrong with having a safe space where you don't have to see or read things you don't want to: the internet was a terrible place. It still is, but the bad places have gotten much smaller. It didn't grow to what it is now because of all the trash that was being spewed out; it grew despite it.
The only thing I don't agree with is them trying to use the government to prevent innovation, but other than that I see no issues with what they're doing.
Your frustration is understandable; believe me, I get it. I've tried to wrangle many AIs into answering my relatively tame questions without the positivity bias and the warnings that my actions may cause some sort of imagined harm. But I just make note of that behavior and move on. These policies will NEVER change as long as the companies keep making money. Always vote with your wallet and your time.
Generally I would side with the anti-hostility sentiment. However, in this case and at this point, intolerance towards the vocal minority that pushes the "no offensive content" narrative has to be firm, and the position of reason expressed by the GP should be recognized and appreciated as the mainstream one.
With generative AI and its pace, we are approaching the point of no return where those 5% dregs will be responsible for irreparable damage to society and culture at large.
It's one thing not to engage in conversations you know people may not enjoy ("be offended by"). Having those shallow, uneducated attitudes embedded in the culture at the source-code level is a cultural catastrophe.
Nobody's talking about showing hardcore porn to kids; the battle is over not having opaque codes built into the fabric of thought because of the cries of a handful of overactive Twitter users.
How about this: make a bloody separate safe space for kids. This should go for AI, for web search (maybe), for YouTube, etc. Don't force adults down to the lowest common denominator of safe content, and don't make it likely that children will see something they shouldn't.
I assume you are referring to kids bypassing censorship - short of surveillance, this will be impossible. If the child chooses to access this content, that's not really stoppable. But we can prevent kids from being exposed to unsafe content accidentally by curating content available to them.
The problem is, platform rules prevent unsafe content from being posted and watched by anyone, not just children. And that's concerning, because YouTube is the de facto public utility for video hosting in all but name, just like Twitter and Facebook are the de facto "digital town square" services.
It is concerning, I get it. I mean, I loved jailbreaking ChatGPT for a while there. However, this is one of those very common and very real lesser-of-two-evils situations.
I am not excusing any behavior by anyone here. At the same time, I am glad that I’m not in a position to have to make this type of decision.
It's a chatbot. A very advanced one with incredible capabilities, but a chatbot nonetheless. What irreparable damage to society and culture are you alluding to? Any specific examples would be nice, as I don't see where all this doom and gloom is coming from.
Twitter is "just a microblogging service", and yet pushed social norms around so much on certain topics that people from a decade or more ago would be surprised at the impact. It's now considered safer to say nothing online for the fear of your message being weaponised against you 10 years from now because of changes to moral values, so many people just aren't their authentic selves online any more - unless under a pseudonym.
Another example - when I was using a voice assistant a lot, I eventually gained an instinct to call out to it as if it were second nature - and if the voice assistant is not under my control (which it wasn't), that has serious privacy repercussions, which can change my behaviour further.
Another example: Ukrainian war footage on YouTube is now "inappropriate", even though it used to be widespread at first. As a result, creators stopped discussing it, or moved that content to paid platforms. As a result, the world's largest video library has a gaping hole right where this kind of content should be available for historical use.
Whether fast or slow, technology damaging human society and culture has already been observed for things smaller in impact than LLMs.
> Twitter is "just a microblogging service", and yet pushed social norms around so much on certain topics that people from a decade or more ago would be surprised at the impact.
With every technological advancement or major world event, social norms move around. This is nothing new: before social media it was the cell phone, before that the computer, or the war on drugs, the Cold War, World War II. It's just social media's turn to take the blame for the "degradation of family values" or whatever the hot talking point is.
> It's now considered safer to say nothing online for the fear of your message being weaponised against you 10 years from now because of changes to moral values, so many people just aren't their authentic selves online any more
That's how it's always been. Nothing goes away on the internet. If you posted questionable content online under your real identity and a potential employer finds it and decides not to hire you, well, that's the price you pay for not being anonymous. Kids have been told to be careful about what they put online for at least the past 20 years. If thing X is no longer acceptable in the current climate, you said thing X years ago, and it's now been brought up, you can either clarify your point if your opinions haven't changed, ignore it, or express regret for the comment and move on with your life. This is a personal accountability problem, not a societal problem. Although I personally think that anonymity should be the default on the internet, that bridge has already been burned. Regarding people not being authentic in a large public forum: I mean, what do you expect? The entire world can see what they're posting. It's not unreasonable to show off the best parts of your life, the parts you are most proud of, and to hide the struggles. Look for smaller private communities if you want real human connection.
> Another example - when I was using a voice assistant a lot, I eventually gained an instinct to call out to it as if it were second nature - and if the voice assistant is not under my control (which it wasn't), that has serious privacy repercussions, which can change my behaviour further.
You do know that you don't have to use the voice assistant, right? This sounds like a you problem, to be honest. It's common knowledge that these voice assistants are a privacy nightmare. Even tech-illiterate people I've met know about it, but they make the trade-off for convenience. It's not what I would do, but it's their choice.
> Another example: Ukrainian war footage on YouTube is now "inappropriate", even though it used to be widespread at first. As a result, creators stopped discussing it, or moved that content to paid platforms. As a result, the world's largest video library has a gaping hole right where this kind of content should be available for historical use.
So this is the only somewhat decent example you posted, but YouTube isn't a library; it's a video-sharing platform. I will admit I'm ignorant on the whole Ukraine topic. But war is "inappropriate" to say the least, and I can see why companies wouldn't want their advertisements associated with such horrific events, even if the coverage is in a positive light. And if advertisers don't like it, then YouTube won't push it to people's feeds. I would argue it's good that the content has moved to different platforms; it's now more resilient to being taken down. And just for a quick sanity check, searching for "Ukraine" brings up all sorts of videos: press conferences, combat footage, people's reactions. So what exactly is being hidden?
Well, if these chatbots are even half as successful as their proponents suggest, we'll have people relying on them as the first and only source of information.
Imagine ChatGPT replaces Google Search, and with the "anti-offensive" bullshit baked in, it's literally impossible to find a certain type of information. Now, some of it may be illegal, and that's one thing, but the despicable 5%ers we are alluding to are advocating to bake ETHICS into those search engines and withhold information (irrevocably, because who in the future will ever care to revert this stuff?) based on ETHICS. Ethics, the most ephemeral of all crutches for the intellectually constrained.
You're being a bit hyperbolic, but that sounds like an education problem. It's not a corporation's job to teach you how to evaluate sources. Besides, most people already get their news from second-hand sources or worse. If the hallucination issues are ever solved and the models learn to cite sources properly, they'll have a lot more value, but I doubt it will get any worse than it is right now.
And search engines aren't going anywhere. If the model gives an answer you aren't sure about, you do a search on your favorite search engine, refine your query, or switch search engines. Even tech-illiterate people using Google can just go to page 2, and they'll eventually find what they're looking for.
There's never been a better time in history than right now to easily find like-minded people with fringe interests and opinions. And as long as the internet exists, that's not going to change. This future you're painting where information is being withheld doesn't exist. It doesn't matter how many book burnings there are; once the information is out, it's out for good.
> The internet was a terrible place, it still is but the bad places have gotten much smaller
I never said it was "good" or "bad"; I said it would have been more vapid and siloed, which, despite your conflation, has certainly happened. That's fine, but the fact that you can't disable a censor on an API you pay for as an adult is a poor choice by these companies, regardless of how much "bad" stuff we get exposed to in the places we visit online.
But you are an adult, and you don't have to support or pay for any of these products if you don't want to. It's simply a design choice, the same as if someone chose Python over C++. Maybe by your standards it's a poor choice, but if it's making them money, then there's no problem. There is demand for uncensored models; it's just not profitable.
That's a completely nonsensical argument. OpenAI and Anthropic are THE duopolists in this market, period. Nothing else comes close on most metrics.
Saying "it's simply a design choice" is like saying that wearing wooden clogs around the city is a "design choice" because the only other two companies that know how to work with more advanced materials refuse to sell you shoes because of a moral reasons.
What a terrible analogy. First of all, the two best shoe companies are selling the SAME shoes to everybody; they aren't denying you service. You just don't like the color they're sold in. You willingly choose to wear wooden clogs and then complain that they won't sell you the shoes they sell to everybody else. The entitlement, not just in this comment but in the entire thread, is ridiculous.
Secondly, they are private companies selling non-essential goods. Using their models isn't a right. If they want to lobotomize them and make them as PG-13 as possible, that is their right. They aren't doing anything against the law in regards to discrimination. If you want to use their toys, you have to play by their rules. Yes, they have the best toys, but you don't get to cry about it and force them to change their rules. It's such a simple concept, and it's hardly new.