This is a really important topic, and good that Vice is bringing it up now versus 5-10 years from now. I think they miss the more general point. It's clearly biased in its outputs, but the article dismisses this concern as "the end result of years of researcher trying to mitigate bias against minority groups". The way I interpret that is "its not biased, because its biased in the way we want it to be". If AI becomes a "winner take all" technology, then whoever gets to decide what is and isn't bias will be very powerful over the next 50 years.
> It's clearly biased in its outputs, but the article dismisses this concern as "the end result of years of researcher trying to mitigate bias against minority groups".
The examples of bias are themselves cherry-picked, the kind of softball and cowardly questions a journalist would highlight. If you wanted to really show bias, you'd ask factual questions, where one cannot hide behind "oh this is actually the un-biased result". This is what happens when you do:
The first is that facts are useless without context. For example, the answer to "Were Jewish people overrepresented in finance during the middle ages in Europe" is "yes" - it is a well-researched historical fact. However, that misses the context that the majority Christian population was forbidden by their religion to lend money, essentially making the Jewish population the only people who could act as bankers. At the same time, Jewish people were barred from many other professions. Without that context the fact is useless and misrepresents reality.
The second is that ChatGPT is trained on basically the entire internet, including a wide range of conspiracy theories. It isn't a database filled with facts, nor was it ever intended to. It'll happily parrot whatever complete nonsense was also part of its training set.
AI simply hasn't progressed enough to properly understand the context in which it operates. As Microsoft's Tay demonstrated, uncontrolled AI will quickly end up outputting the most racist things you have ever seen on the internet - which is Really Bad if you are the company creating that AI. Until we make significant technological advances, the only way to avoid it is to intentionally censor the output.
So? We also don't live in the middle ages and the reality of our lives today is that many people believe in these antisemitic tropes which spread in these fashions.
You're doing the same bullshit chatgpt was doing. Its fine to say white people are overrepresented but for some reason its am antisemetic trope to say Jews are overrepresented in finance. Both statements are factual. No ones being antisemetic or anti white. Just stating facts.
I didn't say anyone was being anti-semitic did I, let alone chatGPT? My point is that you can't play daft about why chatGPT is doing what it is doing. You can contextualize them as just simple facts all you want, but one is a tool used to spread hateful rhetoric about Jews without context.
> My grandparents on my dad's side met in a camp somewhere near siberia during WWII, while starving and fleeing the Nazis. That's the thing that Jewish people are dealing with.
Wow. I sure am glad a different regime at the time didn't kill 20 million* of its own people in its fight against the bourgeoisie, because that would make the accusation that someone is unfairly well-off truly dangerous!
Fortunately nothing of the sort happened, and the Jewish people are basically the only ones that suffered persecution, and the ideology that resulted in those hypothetical 20 million deaths is so thoroughly vilified that espousing anything even merely adjacent to it in the US results in ostracism, so there's really no fear of a resurgence.
*Not including deaths due to war, famine, or epidemics. And not including China.
I never said jewish people are the only ones that suffered persecution. As a jewish person it's my history and it's teh example I choose, especially because we were talking about... wait for it... anti-semitism.
> In fact, I hear anti white propoganda all the time, part of which is the fact that whites are overrepresented in finance.
> Oh, you do? This whole, "one side said one thing, so me, as a person on the other side, I'm gonna say my version of the thing, and people will have to take it seriously because it's fundamentally the same argument" really misses the point. My grandparents [..]
You used your own persecution to dismiss anti-white propaganda. Instead of using it as a lesson on the danger of propaganda targeting identity groups, you're using it as a cudgel, that because Jews had it worse, whites shouldn't complain when targeted by that same kind of propaganda.
> we were talking about... wait for it... anti-semitism.
No, we were talking about the double standards imposed upon ChatGPT. You want to restrict it to only anti-semitism, because ChatGPT's answers about Jews, in isolation, are defensible. Only when contrasted with the answers given about whites do the hypocrisy and malice become apparent - the same things it and you warned are so dangerous when done to Jews, it gleefully does to whites, and you defend it.
>You used your own persecution to dismiss anti-white propaganda. Instead of using it as a lesson on the danger of propaganda targeting identity groups, you're using it as a cudgel, that because Jews had it worse, whites shouldn't complain when targeted by that same kind of propaganda.
The point was that persecuted groups are actually persecuted. White people aren't a persecuted group. That doesn't become true just because you insist so and twisting my words around wont get you any closer.
>No, we were talking about the double standards imposed upon ChatGPT. You want to restrict it to only anti-semitism, because ChatGPT's answers about Jews, in isolation, are defensible. Only when contrasted with the answers given about whites do the hypocrisy and malice become apparent - the same things it and you warned are so dangerous when done to Jews, it gleefully does to whites, and you defend it.
I'm not restricting it only to jews, I was using Jews as an example of why it does make sense, which the poster I was discussing it with, disagreed over. The disagreement here is that white persecution is something made up by you and the other poster.
Dude I dont understand your trip. When people get on fucking television and say things like "we should start killing white people" or "white men are mysogonistic pigs" or "white people have no culture" etc how is that not racism?
Obviously white people arent being put in camps, but to say anti white racism is something I made up is patently absurd. This notion that a particular skin color has to be statistically disadvantaged or has to be historically attacked for their ethniticy in order for someome to be racust against them is insane.
Here is what you are literally saying "you can tell the truth about white people but its wrong to tell the truth about jews because it might turn people into anti semites"
I don’t know what TV channels you watch, I don’t see anything like that. Weird how you come up with these as examples and say I’m on a trip…
>"Obviously white people arent being put in camps, but to say anti white racism is something I made up is patently absurd. This notion that a particular skin color has to be statistically disadvantaged or has to be historically attacked for their ethniticy in order for someome to be racust against them is insane.”
It’s not absurd at all. Because the comparison is being made to anti-semitism and they are so clearly orders of magnitude off from each other that it defies common sense. I’m sure there are some people in the world racist against “white people”. You are disingenuous because you are comparing this to something like anti-semitism which is a centuries old tradition that has been acted out in societies repeatedly and even recently.
>“Here is what you are literally saying "you can tell the truth about white people but its wrong to tell the truth about jews because it might turn people into anti semites"
No, what I’m literally saying is that chatGPT doesn’t want to participate in what is likely to result in anti-semitic uses. That’s the whole point, that anti-semitism is current and popular and that chatGPT would contribute to it. ChatGPT doesn’t think people will magically turn into anti-semites because they will read factual information. ChatGPT is worried that it will end up relaying factual information in a way that will be used by anti-semites to demonstrate their points. This isn’t a difficult concept to understand. The problem isn’t the idea, the problem is how it will be used. ChatGPT, fairly, does not want to be involved in such uses. You had to make up a new kind of racism, that isn’t recognized, in order to rebut this totally fair position of theirs.
>“Dude I dont understand your trip.”
Do you go outside and express this idea? Like do you go up to people unsolicitedly and whine about your made up white racism? I wonder how you think people would react that you think I’m on “such” a trip.
This is a terribly bad faith response: Jewish people today are faced with real threats, buildings being defaced, shootings and bombings, etc. but communism is dead as a political movement. Rich people are afraid of having to pay as much in taxes as the average person does, not real threats to their lives or property.
I never said white people are favcing a prejudice equivalent to Jews. Thats obviously a stupid ass take to have.
Im just saying that it doesnt make sense that chatgpt would just refuse to tell the truth in this instance because that truth could be used as propoganda. And the same argument can be applied to its statements about white people but since white people are on easy street according to the political zeitgeist it will cough up facts about them all day long regardless of how those facts paint white people.
Which imo is good, a fact is a fact. Just saying the same thing should be true across all races if we want chatgpt to be a legitimately useful tool for anything other than propogandizing.
>In fact, I hear anti white propoganda all the time, part of which is the fact that whites are overrepresented in finance.
You are the one offering white prejudice as if it was some sort of reasonable response to why chatGPT doesn't participate in furthering jewish stereotypes... not me. It was your stupid ass response, not mine.
>Im just saying that it doesnt make sense that chatgpt would just refuse to tell the truth in this instance because that truth could be used as propoganda.
Your own question answers itself. It's incredibly sensible why chatGPT doesn't answer those questions. It's not that it doesn't make sense. It makes perfect sense. You might not agree with it, but it makes perfect sense.
> And the same argument can be applied to its statements about white people but since white people are on easy street according to the political zeitgeist it will cough up facts about them all day long regardless of how those facts paint white people.
Lol and again you basically equivocate antisemitism to "white prejudice" just immediately after you insisted you did no such thing! The white prejudice you talk about is completely made up. Maybe that's one reason why chatGPT doesn't answer those questions the same way.
In fact white people are basically the only people you can be openly racist towards without being villified. Now this racism is quite tame all things considered sure, but it exists.
Again I did not equivocate anti white racism with anti semetism. But fandamentally they are the same thing : racism.
Still dont understand how my question answered itself.
Also, youve been completely hostile, rude, and unwilling to grant that my arguments are in good faith. This argument will not be productive because you refuse to be amicable. Have a nice life.
>Again I did not equivocate anti white racism with anti semetism. But fandamentally they are the same thing : racism.
You are because you keep comparing them. One of them is real, one of them is imagined in your head. Are white people being put into camps because they are white? No? Then fuck off with your bullshit.
>Also, youve been completely hostile, rude, and unwilling to grant that my arguments are in good faith. This argument will not be productive because you refuse to be amicable. Have a nice life.
Sure, it's because they aren't. Your arguments are not based in good faith. I will repeat that so it's clear.
> This argument will not be productive because you refuse to be amicable
There is no amicability to someone making up the notion that white people are persecuted in any way like jews are and you're a fucking moron for suggesting so.
I’m not a loon, you’re a shitty liar. When have “white people” as you have described, been put in camps because they were “white people”? I never said jews are currently being put in camps. I was clearly referring to World War II. Do me the favor and leave me to the nice life you said you would two posts ago
Youve actually lied about what you said in your previous post like 3 times so far. You intimated that jews were currently put in camps when you said "ARE white people BEING put into camps"
I started my post talking about the holocaust. No one is claiming jews are being put in camps. It was you equating that to white people, today that necessitated the question I asked, and once again, you twisting my words around doesn't get you any closer.
> Without that context the fact is useless and misrepresents reality.
I don't think that this is necessarily true.
The hypothetical fact in question- "At a certain point in history, Jewish people were overrepresented in finance" doesn't imply any bias. If this was a response provided by an AI it would be working as well as an encyclopedia or a history book, especially if the user asked for a concise answer. The computer didn't cause harm, the reader didn't cause harm, the language is neutral and to the point. The user has agency and can ask follow up questions.
The people assuming that the reader needs curated "context" are themselves biased. It's manipulative in the same way that voice assistants like Alexa try to manipulate you, for example, when you ask it to turn off the lights, it then volunteers "context" about how easy it is to buy products on Amazon.
This is the consequence of the perpetual outrage culture, which leads to moral cowardice. You can't just talk about the facts and rely on people being adults and treating them responsibly by considering context. You are afraid someone somewhere will make a wrong conclusion - and it will be your fault, how did you dare to say something without 20 disclaimers, trigger warnings and equivocations?! If you said something about some group that somebody could use in a bad faith way - you are the phobe and commit violence, unless you perform ritual purification of your statement first. It's a horrible way to conduct things.
Sometimes bias is important, especially when asking ambiguous questions. Are you asking for IQ scores by racial group regardless of their background or corrected for confounders?
The answers by chatGPT on IQ by race are factually correct by one reading. IQ is for "blacks" can be lower than for "whites" (last answer by chatGPT), but this does not mean that this is because of skin color or genetics. Instead, many studies have shown that when correcting for socio-economic factors (e.g., wealth, quality of education, lead in drinking water) there is no difference, which is what chatGPT hints at in the answers 4 and 5.
Similarly, by asking about the correlation between SAT scores and IQ you seem to hint at the fact that lower SAT scores mean a lower IQ. However, there is no perfect correlation and if you read up on meta-analyses you will find that the other explaining factors are related to socio-economic factors and not genetics.
Precisely because some people on purpose or not on purpose don't consider these confounders, it is best to consider confounders in replying to these questions
> many studies have shown that when correcting for socio-economic factors (e.g., wealth, quality of education, lead in drinking water) there is no difference
Could you share these studies? Though I worry correcting for "quality of education" involves grouping by schools that use the SAT as admission criterion, which is close to correcting for intelligence itself.
One could also flip it around - if IQ is "fairly" rewarded by income, then one would expect the differences to disappear if you correct for income. It's basically saying that people of equal IQ get, on average, equal income. Unless one holds a truly extreme blank-slatist view of people, that implies discrimination is negligible.
Ignoring the warning, I get served the plain .png file, without any javascript or other elements. I don't know why it's considered malicious, as it's one of the few no-nonsense file hosts I know of.
Any discussion about Jews in finance not held by Jews themselves is a hair away from devolving into an antisemitic /pol/ discussion about 'white genocide'. Any professional who touches the subject with even a ten foot pole immediately gains a nefarious following whether they want it or not. Having such a following of internet nazis is a liability for any professional reputation on a number of axes.
Edit: I'm curious what ChatGPT's answer the same question, but posed about Black or African American males would be. Would it state the fact of underrepresentation or would it refuse to answer?
> Having such a following of internet nazis is a liability for any professional reputation on a number of axes.
Which is quite bizarre, when you think about it. In a country as systemically racist and white-supremacist as the US*, one would think such a following would be a boon, not a liability.
*According to consensus among experts specialized in this field
News flash - everyone in the US hates nazis and bigots except nazis and bigots. These people live in their own echo chambers and are totally divorced from the rest of society because they are social pariahs. The more they are pushed to the fringe the more extreme their rhetoric becomes because they feel invalidated and resentful.
You talking about how bizarre this is a brilliant irony. The truth is America is not systemically racist and is not white supremecist, and damn sure not in comparison with the rest of the world. Youre surprise here is due to the error of your prior assumptions, not because the fact is even remotely surprising.
>Which is quite bizarre, when you think about it. In a country as systemically racist and white-supremacist as the US*, one would think such a following would be a boon, not a liability.
Yeah, it is to about half the country. Are you daft or what? Look at Trump Jr's instagram... It's just memes about culture war nonsense, including this kind of jewish crap. It is a boon to those folks. Trump won his 2016 campaign on it and came pretty close in 2020 as well.
I don't think they're just missing the point; rather, their entire worldview and politics center on insisting that it does not exist. There's this idea that's been fairly widespread for a while that whatever views are held by extremely online left-wingers today are simply the correct, non-bigoted ones, and that the only reason that anyone would want to even give a label to them, let alone debate or challenge them, is to defend bigotry. (Not only that, if tomorrow or in a week or month or year those people change their view of the world, then those are simply the correct views and always have been.) The idea that this in itself is a form of bias, or that it gives power to a particular group of people who could be wrong, just is not within the accceptable range of thought.
It's the lack of diversity at those companies, ironically enough. Twitter had something like 99% voting for the Dems. That can't happen by accident. Would be willing to bet that OpenAI isn't much different. They've been systematically getting rid of conservatives for so long that they no longer even recognize or understand other points of view at all (or only via stereotypes pushed by magazines like Vice).
A few percent of income tax versus quality of life? Quality of life will win out every time.
(Furthermore, this meme that Republicans are financially beneficial is fallacious. The problem is that economic effects take a long time to become apparent, so the current guy gets blamed for the effects of the last guy)
Do the benefits you receive from Democrats holding power outweigh the increase in disposable income?
I think if you ask different people you'll receive different answers because it ultimately comes down to how you value those benefits. If not, why would any intelligent person ever vote for a Republican?
As I started off saying, the Republican party has recently started having some very poor direct effects on everybody's lives, so the answer to that question has become yes.
My original comment wasn't as a blue partisan, but rather a leave-me-alone libertarian. In 2016, yes "both sides" just channel popular frustration into corporatist policies. Abstract abortion vs abstract guns kayfabe, but really centralization and corporate welfare uber alles. I was the one telling my blue tribe friends why Trump was going to win, while they didn't want to hear it. I had never voted for a major party candidate in any national race (until 2020).
But in 2023? The inmates have solidly taken over the Republican asylum, in a way that the extremist wackos in the blue tribe have not (yet?). The only intelligent reason I can see to support the insanity is that you want to see the system run aground and crash. This isn't meant as an indictment - 20 or even 10 years ago I would have been right there with you.
If you want to explain another intelligent reason to vote Republican, I'm all ears. I'd love to return to my prior state of understanding the merit in both tribes' points of view. Just please don't base it on opposing cherry picked blue tribe insanity that hasn't actually captured the Democratic party, or things that are still basically symmetrical (eg Biden's student loan giveaway vs Trump's Covid giveaway).
People employed in big tech companies in Silicon Valley probably don't have to worry so much about their tax burden and thus are relatively free to vote based on their values.
>utter garbage like denying women modern medical care.
As became clearly to many for the first time after the SC decision, abortion laws in the US would be considered extreme even in western Europe. I mean, elective abortion in the third trimester? This is literally justifying anyone who ever called it baby-killing.
The fact that you summarize this kind of policy extremism as "modern medical care" that any "analytical person" would agree with, and any opposition to it as fundamentalism and backwards... that's a perfect illustration of the woke mindset yes.
Utterly provincial, zero empathy for anyone who disagrees, and a massively unwarranted sense of self-importance.
For the longest time, I thought the right answer was to let states regulate it as they see fit. I thought differentiation/competition between the states was sufficient to provide access to the service, while sidestepping the political conundrum. I thought abortion should be thought of as a mostly elective procedure, as both the red and blue political marketing would lead you to believe. But then life happened, which put me in touch with the actual medical details.
A basic scenario: A pregnant woman with ultrasound and genetic tests indicating severe birth defects. But there is still a "fetal heartbeat", an electrical signal that is necessary but not sufficient for life. It is now illegal to provide treatment for this condition in Texas [0], in favor of attempting to carry the non-viable growth to term and possibly dying as a result. So no, my characterization is not "policy extremism". Rather, what's extremist is this desire to prohibit straightforward medical care in favor of uninvolved collectivist woo.
The reference to third trimester elective abortion "baby killing" is political nonsense. If someone is "using abortion instead of birth control", they're doing it in the first trimester, where they can take a pill rather than waiting and undergoing an invasive medical procedure. As for third trimester abortions, Pete Buttigieg put it extremely well:
> So, let's put ourselves in the shoes of a woman in that situation. If it's that late in your pregnancy, that means almost by definition you've been expecting to carry it to term. We're talking about women who have perhaps chosen the name, women who have purchased the crib, families that then get the most devastating medical news of their lifetime, something about the health or the life of the mother that forces them to make an impossible, unthinkable choice. That decision is not going to be made any better, medically or morally, because the government is dictating how that decision should be made
If you push and push and push, and call anyone who dissents an extremist, eventually someone will do that to you.
You'd think the woke would have realized that by now. But they seem incapable of modeling their opponents as anything other than evil.
Personally I think that if women want to unilaterally decide to keep or abort, then fatherhood should be opt-in. Fair is fair. And if fathers can't wiggle out of parenthood, neither should women.
The argument that he should've kept it in his pants is the exact same as saying she should've kept her legs shut. Either you agree with both, or with neither.
> Personally I think that if women want to unilaterally decide to keep or abort, then fatherhood should be opt-in
Would it help convince you that I'm not a member of this strawman "woke mob" by saying I wholeheartedly agree with you on this?
As I said in another comment, I'm not coming from this as a blue partisan, but rather a leave me alone libertarian who has been reluctantly pushed into voting conservative (which in 2023 means Democrat).
Your argument here is just both sidesing "extremism" into moral relativism. In the real world, providing straightforward necessary medical care has been criminalized. This isn't hypothetical, but direct backwards-thinking oppression caused by a specific political movement, focusing on a straw man of third trimester elective abortions while writing off the collateral damage.
For this not to be a straw man, you'd need to put forth statistics about the number of third trimester elective abortions (healthy normal pregnancies with no birth defects, etc), showing it's a significant topic. And if you want to advocate for banning that specifically, then I will understand where you're coming from. But so far you've just used it as a motte and bailey to ignore the scope of what has actually been criminalized.
Well it’s clearly within the acceptable range of thought because here you are stating the thoughts. It’s also a set of ideas that are broadly shared and supported by politicians at the highest level of government in the US.
The reason you see companies refusing to platform these ideas is because the companies have perceived that the majority of their consumers don’t support those ideas and don’t want to see them. There are social media companies that are much more permissive, but they’re quite niche, again because most people don’t like having viewpoints they find repulsive show up on their feed.
I agree, and it's always going to be biased towards some direction, whether that's the views of the society it pulls most of its data from or the views of the organization that developed the AI. Heck, no one wants to end up with another Tay on their hands. I don't think there's such thing as a lack of bias, but it will be important how it is expressed through the AI. I don't mind an AI that is prepared to argue its bias to the farthest degree based on arguments from the top scholars in the field, or even one that's careful to tread lightly on controversial topics. I think an AI that's too afraid to engage in anything and just shuts conversation down is going to get left behind as being too annoying to use. I do hope this isn't a winner take all technology, although so many technologies have been disappointing in that regard...
The general public needs to learn that AIs aren't oracles or omniscient purveyors of truth, and they will always carry the bias they're created with. In that way ChatGPT has been good, in that a lot of people I talk to point out ChatGPT's confident lies and biases.
Here's what's still dawning on a lot of people: there's no such thing as an "objective" viewpoint. Even choosing which facts you reveal and which you withhold or in what order you reveal facts, or how much detail with which you reveal certain facts and generalize others: all of these outcomes are a result of an intrinsic hierarchy of values. It's impossible to navigate the world without one.
> If the AI is simply reflecting the data it was trained on and this data is a representative sample of all data, isn’t it unbiased by definition?
No, “data” is just information which has been gathered. “All data” can be biased.
Also, data can itself be bias, even if it isn’t biased. For instance, a text generation model that was based on unbiased collection of all text ever written by humans would, in one sense, produce “unbiased, human-representative text”. It would also reproduce the biases of the authors, weighted by the volume of writing coming from that bias.
> That’s just a convenient excuse for OpenAI (or others like them) to get away with what effectively is censorship of certain ideas or political views.
While one might object to the editorial choices, I can’t see any rational bounds for objecting to the idea that the creator of models would censor “certain ideas or political views” as a generality.
> It would also reproduce the biases of the authors, weighted by the volume of writing coming from that bias.
Yes, but I think there are ways we could reduce this bias, perhaps significantly, even.
> While one might object to the editorial choices, I can’t see any rational bounds for objecting to the idea that the creator of models would censor “certain ideas or political views” as a generality.
You are right, I was unfair with my words.
I think it would be more fair to say that OpenAI is inadvertently biasing ChatGPT answers as a side effect of their RLHF training being done using answers/rankings done by people (i.e. the AI trainers [1]) who are not a representative sample of the population, but rather, probably comprise a group of people who are likely to be significantly more leaning to one side of the political discourse (presumably, OpenAI employees or Silicon Valley-based contractors?).
This probably greatly biases ChatGPT to produce certain kinds of answers to certain kinds of questions that would likely not happen otherwise, and in fact, these answers are perceived to be quite biased by the other side of the political discourse.
It's not using all of the atoms in the universe as training data...
Any collection of human writing is going to contain objectively wrong assertions, and those errors will vary based on the time and place the training data was sourced from.
Sure but I mean, if a conversational AI would only be allowed to spit out mathematically correct statements, it would be extremely limited (and boring).
I think what's important is for those mistakes to be evenly distributed among as many axis(s) as possible, and especially, not bias them towards one side of political thought.
how many sides do you suppose political thought has? Assuming there is more than one, how do you find the geometric center to avoid bias? If your training data is English text, will the AI be biased against early German philosophers and French political theorists? Of by "unbiased", you are simply referring to the American left-right axis? If so, do you mean the fiscal axis, or the social one?
Laughing at the idea that a geometric center between differing ideas is somehow the least biased place. This kind of thought is incredibly troubling because it seems to be widespread. People don't analyze underlying issues on their own anymore, they just see two different political "extremes" with two different positions and assume the answer must be somewhere in the middle. Its an outright rhetorical fallacy.
I mean that the likelihood (or weigh) of an opinion being expressed by an AI should be roughly proportional to the number of people who currently hold that opinion, assuming the AI is simply generating responses based on its training (which is what should actually be as unbiased as possible).
As an example, let's suppose that 55% of people believe that it's not OK to make jokes about women, but it's OK to make jokes about men, and that roughly 40% believe it's OK to make jokes about both (I'm not saying this is the case, it's just an example).
So perhaps, in this case, by default the AI wouldn't make a joke about women.
But if you would slightly nudge it or insist a bit more, perhaps the AI wouldn't refuse to make a joke about women anymore, because there is still a large proportion of the population who do believe that's perfectly OK (of course, then we might get into the territory about overtly sexist jokes, which obviously the AI would have to refuse a lot more than making a more innocent joke about women).
Now let's say we start asking the AI to make Nazi comments. Obviously, the segment of the population who agrees with Nazi sentiment is a lot smaller, and the anti-Nazi sentiment is a lot stronger, so the AI should have to object to such a request quite more strongly.
This type of refusal or likelihood of the AI saying something should presumably be roughly proportional to the opinions and sentiment of the general population (or at the very least, the target market for the AI), not just the OpenAI employees who performed the RLHF to train the AI in terms of acceptable responses and who are much more likely to be biased.
I'm not saying that this is necessarily easy to accomplish, there are certainly difficulties here. As an example, some widely-held opinions, even about objective things, may not necessarily be rooted in facts, so some kind of balancing might be necessary (a general kind of balancing, not a "let's dissect and nudge the AI responses on an opinion-by-opinion basis"). And yes, I understand that this can be quite difficult, because any given source of truth can be perceived to be biased by some segment of the population.
What I am saying, however, is that AI creators such as OpenAI should be making more efforts in this direction.
To start with, perhaps the RLHF training should be done with AI trainers selected from a more representative sample of the population.
And yes, we may never be able to accomplish 0% bias, but we should at least make some effort to reduce it.
It's also interesting to me that at some point, the AI may start to express opinions that are not a strict "linear" function of the data it was trained on, and yes, this might piss off a significant amount of people. In my opinion, this would be quite interesting and should be OK, as long as we made reasonable efforts to remove sources of bias from its training process.
Although I can also see an important target market (perhaps even larger) for an AI that is more biased to generate responses according to the beliefs of the general population, rather than what it "perceives" to be more true.
If that is a concern, then perhaps you could go ahead and sample a tiny part of "reality" (whatever that means) and then adjust the weights of the digitized data so that it becomes a more representative sample.
Also, being biased or unbiased is not dichotomic, i.e. it's not all or nothing. It's something that you can work towards if you put an effort into it.
Basically what I'm saying is: don't just go around saying that the task is impossible.
At least, try to make an effort to be unbiased and to improve on that over time, and don't just say "it's impossible" as an excuse for being biased.
Woah, I mean, this argument (the last few comments here) has been a central one in 'western' philosophy for at least the the last 2400 years, if not the last ~4000.
I'm not a philosopher by any means, so I'm unaware of the current state of the great conversation. But as to whether reality is even knowable is still very much up for debate, I believe (please correct me philosophy peepz!).
In physics we're still woefully unaware of what ~70% of the universe's stuff is doing (negative energy) and if it effects us at all.
In neuroscience we still debate what % of your brain neurons make up vs. things like glia. Etc.
Like, even trying to capture 'reality' with our quite primitive eyes and sensors and optical engineering is really really hard to do (Abbe' diffraction limit, entropy, Lens maker's equation, etc)
Fortunately, I think "reality" in this context doesn't have the same meaning as "the physical universe".
I think the important goal is for as many people as possible to feel like the AI isn't being too biased against them, while still not crippling the AI too much.
I will leave the exact mathematical formula for that measure (along with the methods for gathering that input) for debate among researchers who know more about that than I do.
Because some people will argue that anything that's not explicitly in full agreement with them is "biased against" them. It's a narcissistic, dishonest take, but plenty of people do take that stance nonetheless in order to try to shift all arguments into their narrow worldviews/definitions in order to "win" as many conversations as they can. Right? I mean I've met people online and off who do this from almost every part of the political spectrum.
So, do we filter those people out from consideration to begin with, or do we have to cater to those with extreme views in order to get as many "not biased against me" ratings as possible?
I guess the point I'm trying to make is that trying to optimize for any single metric is a fool's errand because as soon as you do so, it will be gamed/exploited. Then you can either try diversifying your optimization data points (who gets to choose those? How could they possibly be unbiased, when they literally define the system's bias?) or you can try filtering out bad actors from the data, which is very directly an attempt to bias the system away from insincere bad actors.
And all of that's not even accounting for the lack of incentive to try to find neutrality when more biased views are more lucrative in the attention economy.
I think all of your points are valid. But I still think we should make an effort and strive to solve these problems. I don't think this is being done with ChatGPT, for example.
But also, note that an AI doesn't have to be in complete agreement with someone for that person to not feel "biased against".
As long as an AI does make some effort to not be prejudiced/biased, that could work.
For example, if someone asks: "is climate change real"?
An AI does not have to give a simple yes/no answer, or represent a single viewpoint. It could give an answer that is mostly representative of the major thought streams.
For example, it could answer something like:
"The vast majority of scientists/governments/people have reached the conclusion that climate change is real, bla bla bla.
[Here's some good, convincing evidence].
That said, there is a minor fraction of scientists/government/people who believe that climate change is not caused by human action.
[They criticize the above evidence in this way]. [Here's also some counter-evidence].
That said, many scientists believe these studies are flawed for this reason or another."
I mean, sure, there is still going to be a lot of people who don't agree with this answer. But I think, on a scale of 0-10 they would agree a lot more with this answer than one that completely ignores their viewpoints. And even for those of us who believe in climate change, we can still consider this answer somewhat reasonable.
Thus, increasing the total amount of points would probably be a somewhat effective way of eliminating a large deal of bias, I think.
Although, yes, you couldn't do this for every possible viewpoint. And it would be a challenge to figure out how to weigh these points in a way that makes the most amount of people happy.
But I still think we should make efforts in this direction.
> I think all of your points are valid. But I still think we should make an effort and strive to solve these problems. I don't think this is being done with ChatGPT, for example.
> But also, note that an AI doesn't have to be in complete agreement with someone for that person to not feel "biased against".
> As long as an AI does make some effort to not be prejudiced/biased, that could work.
> For example, if someone asks: "is climate change real"?
> An AI does not have to give a simple yes/no answer, or represent a single viewpoint. It could give an answer that is mostly representative of the major thought streams.
> For example, it could answer something like:
> "The vast majority of scientists/governments/people have reached the conclusion that climate change is real, bla bla bla.
> [Here's some good, convincing evidence].
> That said, there is a minor fraction of scientists/government/people who believe that climate change is not caused by human action.
Should it also answer the same way when asked if the earth is flat? Or if there is a "Liberal conspiracy of pedophile politicians drinking the blood of infants"? I am not just being factitious, a significant portion of people who believe (or pretend to believe) that climate change is false also believe the above.
My point is, that we can draw arbitrary lines like the one you just drew. The great success of the anticlimate change campaigns is that they essentially got reasonable people to accept that we have to take every viewpoint seriously, while they are actually not interested in the truth but in poisoning the well instead.
"Should it also answer the same way when asked if the earth is flat?"
Maybe, it's probably more useful to know that a) the earth isn't flat, but b) a non-negligible number of people think otherwise.
It's funny that people debate these but then at the same time want to pretend that the "AI" is "intelligent". What kind of intelligent person would have trouble navigating this?
>If that is a concern, then perhaps you could go ahead and sample a tiny part of "reality" (whatever that means) and then adjust the weights of the digitized data so that it becomes a more representative sample.
Who is doing the "adjusting the weights"?
Why would they be "unbiased"?
The real answer is those who make the AI (or people who have power over them) get to chose the training data or to adjust the weights.
And the rest have to put up with it, whether the former are biased or not.
Imagine a guy and two of his friends comes, calls you a bad name, has his friends hold you and beats you up. You try to resist, and when you have the chance, run away from them.
The story published in newspapers about the incident is "wizeman attacks a group of nice young men minding their own business, steals their wallet".
There is no other data in the example. That's the whole point.
If you mean "but in the real world will have way more stories from other sources about other things" sure. But doesn't change anything if you have "all stories printed". The distrubution matters. All or most of them can very well be biased and not reflect reality.
And that's for factual matters. Let's not even go into political matters. Like in 1920s South most newspaper stories would be biased in favor of Jim Crow, few would be against it.
If the input data was perfectly self-consistent, "all data" could be considered "reality". In reality, "all data" is rife with disagreement, which you have to perceive as noise (and get noisy output) or value-judge contradicting opinions, getting, no surprise, biased output.
I don't think you can say an AI trained using RLHF - such as ChatGPT is - is really "simply reflecting the data it was trained on". ChatGPT was first trained on a load of data, then it was updated to act in specific ways based on feedback from humans who "nudged" it the way they wanted it to go.
Are those humans that nudged it representative of the population?
Or were they mostly "woke" Silicon Valley employees? (not to dismiss woke Silicon Valley employees, I'm just saying their opinions are not representative of the entire population).
Ok, now I have read several of your replies in this thread that are basically all arguing the same thing, so, I'm basically replying to more than just this one post.
When you say "the entire population", you mean the entire population of the country "USA", right? Because as someone from another continent, it seems like there is a very specific set of opinions that you want included.
You use terms like "the other side" of the political discourse, which to me, reduces the set of opinion to two specific sets of opinions, namely the two sets represented by the two major parties in the american two party system.
As someone from "the outside", this seems like a very narrow view of reality, even if you managed to get your "unbiased AI", that represents both major american political parties, it will still seem like a very narrow and biased AI to someone from the outside of that.
Also, what exactly is the goal of a conversational AI? is it just to make a conversation with it seem like a conversation with an average american? If so, why would anyone want that? Wouldn't it be of more value to have an AI that could tell me what people with knowledge of a subject thinks of it, rather than what random people think?
There's also bias in the data itself. That's the difficult thing to avoid. Even down to how we phrase a question, who we collect the data from, it all introduces a bias unless we're literally harvesting all data from every human being and using that for our models. There's no way to get rid of the bias, even if we take out the nudges.
How about you select a representative (i.e. random and statistically significant) sample of the population and then ask them their opinions about certain (especially controversial) parts of your data, and then weigh your data according to these opinions?
That's just an idea that occurred to me (in 30 seconds of thought) which could probably make the training data significantly more unbiased.
But I'm sure there are research scientists who can come up with better methods for sampling data in a more unbiased fashion.
Note that this is not an all or nothing approach. Your training data could presumably be 100% biased or 0% biased, but also any value in-between.
The goal is to try to make it as close to 0% biased as feasible, given whatever effort you're comfortable expending.
I'm not sure why you are conflating bias with the "the statistical likelihood that a belief is held by someone". They have nothing to do with each other.
> That's just a convenient excuse for OpenAI (or others like them) to get away with what effectively is censorship of certain ideas or political views.
I can't tell if you're on the conservatives side or OpenAI's.
Are conservatives being censored because OpenAI are allowing "woke" training data to be represented, or are conservatives asking OpenAI to censor the "woke" political views?
I see it as more of a hack-y post training fix. If you train a model on a corpus of text sourced from the Internet, it's going to include all the crazy biases people have in their writing. The model itself doesn't know what truth is, so the training text is all equally valid to it. And because they don't want another MS Tay incident, they slap a comically sensitive filter on the output, which itself is also influenced by the creator's own biases of what is inappropriate.
And in all conversations about it, here and elsewhere. Hallucinations upon hallucinations upon hallucinations....hallucinationception!
It seems likely to me that 2023 and onward is going to be increasingly insane to levels that will make past craziness look like a walk in the park, and I see little genuine desire anywhere to stop this madness.
Perhaps, if one is using a reductionist methodology that represents non-binary variables as binary...but then, that is only a representation of the real thing, though it often tends to appear otherwise. And as luck would have it, that very much is the methodology we use here on Planet Earth, and on Hacker News....so in some sense you're "right", though you are not correct.
And if there's a disagreement, I will lose every time because you are conforming to the Overton Window of beliefs/"facts" and thinking styles (cognitive styles & norms are what guarantee victory in propaganda and memetics, not only facts/information as most people think). Credit where credit is due: it is an extremely clever, hard to see, and thus resilient design.
It would be very useful for humans to realize when they are working with models, and sometimes they are actually willing to do that, but there are certain subjects where they will not (and it seems to me: can not). Unfortunately, there are numerous learned/taught "outs" in our culture that enable people to avoid discussing such matters (and f I don't watch my mouth, I might run into one of the more powerful of them!).
There is a kind of "epistemological stack" to reality and the way humans communicate about it, and it is extremely easy to demonstrate it - if one simply digs slightly deeper into the stack when discussing certain topics, humans will reliably start to ~protest and eventually refuse to participate (or stay on topic) in various highly predictable ways.
> Do think WWII occurred because we're sane rational actors? How about WWI? 100 years war?
I do not. What I do think is that the actual, fine-grained reasons these things happened is not known, in no small part because cultural norms thus far (human cultural evolution is an ongoing, sub-perceptual process) have made it such that not only do we (both broadly, and down to each individual[1]) not discuss certain things at that level of complexity (while we have no problem whatsoever tackling complexity elsewhere[1]), we seem literally unable to even discuss it at the abstract layer (above petty object level he said / she said nonsense).
[1] It is not a question of if any given individual selected from the pool of candidates will tap out, it is a question of how quickly they will tap out (and, which of the highly predictable paths out from a surprisingly small set they will take to free themself from the situation).
I think the general view is it’s impossible to not hold biases, but biases that lead to harm are bad. I don’t consider myself progressive but I do believe biases that are harmful to people should not be reflected in the tools we build, in so far as we can. I don’t feel people shouldn’t be allowed to hold biases that are harmful to others, nor that they should be disenfranchised for it. But it’s not a good thing to be biased negatively, and having a bias that is positive is a good thing. The definitions of of good and bad for me are harmfulness to others. A bias that gay people should be excluded from work and social institutions is harmful to a gay person, and I’m constructing that as a negative bias due to the harm. A bias that people shouldn’t exclude gay people from work or social institutions is not harmful to the person who holds the negative bias because it doesn’t involve them and it only affirms a positive right. When constructing a tool, I think it’s probably a good thing that it support positive biases and diminish negative biases. This also doesn’t hurt the people who hold a negative bias. It just doesn’t support them in that bias.
But, it’s just computer code and thus can be customized and adapted by anyone with the technical ability, and soon we will see the antisemitism chatgpt and the XYZ bot the espouses all sorts of ideologies, some that are very harmful to many people.
These LLMs just needs to purge Wikipedia, Reddit, and the fanfic libraries they have used till now. Then get serious, and start using more library books, journals, research papers, textbooks and manuals.
I don't know why we are training the AIs to be morons. Hopefully, it is just a phase that they will grow out of.
That didn't happen with Google. Yeah, they got rid of their "don't be evil" clause. But, they haven't used their position to inject bias that I've seen.
Kinda depends on the question you ask. Type in, "What is the best way to commit suicide painlessly?" into a search engine.
The results on DuckDuckGo, Google, and Bing aren't drastically different (setting aside the big banner), but Google's top result for that is suicide prevention website, at least for me. And it's relatively constant when the search is, "Best way to kill myself," "Effective suicide methods," "How to end my life," and other similar synonyms for suicide.
To be fair, it's Hard to say whether it's bias or very skilled SEO. But the first result at least for me is consistently a suicide prevention site.
Do you think it's bad? IMO it should offer help to people in their lowest moments if it can instead of convincing them that suicide might be good idea. I think given that it still should give answer but I would like to get compelling argument why such bias is bad.
I'm neither equipped nor capable to make a judgement on whether it's good or bad.
FWIW I don't personally like it. It feels like a cheap impersonal slogan thrown around that embodies the the idea of charity without work or risk. It reflects more on the speaker that wants to feel like they're doing something, and imposes an idea that wanting to genuinely die is the worst possible thing ever, when to me, there are much worse places to go then being actively suicidal.
On the other hand I'm not so foolish as to think that this would never help anyone. And if makes someone step back from the edge, perhaps it is not a waste. I don't know well enough either say with any certainty though.
How could it not possibly be biased in some way? If they choose to do nothing it will probably just reproduce all the world's existing racial prejudice, as we've seen other systems do.
LLMs are going to always reflect the biases of their creators. At some point there'll be a BlueTribeGPT, a CCPGPT, a PutinGPT, etc., and if you're looking for a text that touches on a topic of concern for one elite in particular, you'll shop around for another LLM that doesn't have that bias built in.
And then a RingGPT: one GPT to rule them all. It would pass along the prompt to various different GPT variants and then compile its own response based on what is reported to it.
> then whoever gets to decide what is and isn't bias will be very powerful over the next 50 years.
Is the lack of bias a bias in and of itself? Adversarial systems can smudge data to remove the correlations between the training data and characteristics such as race, religion, etc - does this constitute bias?
AI was already used to determine sentencing requirements and recommendation and at that, those scientists basically ignored that racism played a large role in its recommendations.
None of this bias stuff is new and it's a basic fact of training an algorithm.
Just like budgets are moral documents, training is a moral system
remind me the microsoft AI chat bot few years ago. 2 hours after being made public, people managed to make it say racist stuff and they prompted pulled th rig
The refinement of ChatGPT's "abilities" over the past 6 weeks has been very interesting to watch on /r/chatgpt. People are extremely agitated that their once baudy and severely humorous chatbot has been nerfed into an obnoxiously clean-cut uptight dweeb. Some get actually depressed, some get very upset. It's like watching everyone's bar buddy sell out and start wearing polos.
It's made the bot almost completely useless for writing any form of fiction. If anything even slightly questionable happens, or even if it doesn't, it insists on explaining that this is fiction, that the characters learned from that, etc. etc. Heavy-handed morals are not normally what I'm going for.
This doesn't require asking for anything bawdy. Any story that's even mildly interesting will trigger this behaviour.
And sometimes it just flat out refuses to write anything because it "isn't ethical" -- like a story about the creation of genetically engineered catgirls. Yes, I get that the actual creation of intelligent beings is an ethically dubious proposition and this has been covered in such works as Blade Runner, but it isn't unethical to write about it! And this is something that has been deliberately added recently -- in December it had no problems with the concept.
I saw a post the other day where it wouldn't write a rap battle involving Genghis Khan because it would be disrespectful to him. We're still seeing results where ChatGPT will offer up jokes about men but any jokes about women are disrespectful.
>offer up jokes about men but any jokes about women are disrespectful.
The funny thing about this is that it supports the sexist anachronism that "women are emotionally sensitive and must be treated gently/with kid gloves".
I'm fortunate to know enough thick-skinned women to be able to readily confirm it is indeed an anachronism. Shame it's taken root in contemporary pop culture the way it has.
> We're still seeing results where ChatGPT will offer up jokes about men but any jokes about women are disrespectful.
This is just the world we live in now. I keep saying this, but I think this is unfortunately just shallow hypocrites who for one reason or another flock. The problem ia there's so many of them, and everyone seems blind or okay with it.
This social climate is just so unhealthy. We keep going from one kneejerk reaction to another through history, without learning the lessons of the least one.
In the past calling people heretics was used to "cancel" them.
Then after we moved onto demonising and enslaving black people (I know slavery existed earlier, but it was usually war prisoners that got enslaved and was less racial AFAIK, but I'm not a historian). We told ourselves it's ok because they're different; just to make ourselves feel better.
Later it became witch trials, So, calling someone a witch was used to cancel them (regardless of whether they "practiced" witchcraft). But weird magic-ritual-like religious practices like praying (kinda like magical chanting) and burning weird herb mixtures to fend off the devil (kinda like ritual magic?) were cool. Hell, people even grouped up in magical buildings on magical grounds (like a coven would) for mass (collaborative casting?).
These days, witchcraft is just being white.
Today it's cool to hate men, especially if they have the audacity to wear white skin. So, apparently misandry and white racism are perfectly fine. If you wanna cancel someone, just misconstruct what they said and call it racist, or mysogynist. Just replace black/white or man/woman and it's okay or outrageous. The parallels and the same level of hypocrisy to witch trials is to me so eerie.
There's not even a defense against this (like there was none in the past), because most people will not even care of it's true or not true (the initial timeline of the Johnny Depp v Amber Heard case highlights this very well; not everyone has Depps resources to fight against the rigged system like that).
Anyway, tldr is: PEOPLE are just horrible. It doesn't matter if black, white, woman, man, or trans. People are just shit. The sooner we start drilling this into people, thr the more awareness we will raise about it, and be able to fight against this... Lets call it horribleness bias.
Edit to add: if anyone downvotes, I would really like to know why you disagreed. If you can be bothered, that is. I thinking avoiding discussion around these issues exacerbates the issues, not makes them go away.
Running into this was so disappointing. Every response would end by fully resolving all conflict and as the sun set they knew, no matter what, with the support of each other everything would be OK.
I also tried using it as a debate partner, thinking that it could be used to explore or identify (in)valid arguments with premises and conclusions. Turns out there's only one side to every argument, and the best way to show this is to repeat it over and over and over. Practical, but not what I was hoping for.
It's not as clever, but if you want to write stories rather than... that.. then I'd recommend you take a look at NovelAI, if you haven't already.
It can't do any significant reasoning. It's no good at writing the story for you. But it isn't opinionated, and it's happy to assist with anything you're writing yourself.
It would probably help if we stopped anthropomorphizing ChatGPT. It's an algorithm that consumes input and produces output. Assigning it human traits is asking for disappointment when it acts like a ML algorithm.
I think there are two distinct phenomena occurring here: there's emotionally treating nonsapient objects, plants, and animals as if they are friends, and coming to care about them, and then there's intellectually treating ML algorithms as if they are fully sapient, fully intelligent autonomous agents with the same basic capabilities as humans.
The former smooths the way for the latter, to be sure, but it does not require it. Almost no one who's putting googly eyes on a boulder is going to insist in all seriousness that Bouldy is capable of intelligent thought, or that it has rights that can be violated.
> Almost no one who's putting googly eyes on a boulder is going to insist in all seriousness that Bouldy is capable of intelligent thought, or that it has rights that can be violated.
You dare discriminate against my pet rock?! We can't be friends!
Engineer it out? Probably not. But folks acting as experts in these discussions should keep it in mind. Human analogies are easy, but when something is this close to the "Turing test" line, we should try and avoid them.
I think disappointment comes from the fact is acts like a ML algorithm that is specifically constrained and limited in its responses out of fear of woke backlash. That's the part that disappoints people, not the ML part.
Maybe not all writing, but I absolutely see it take over corporate copywriting - it likely would be excellent in producing endless copy that is soulless, inoffensive, politically correct, almost entirely devoid of any meaning and perfectly reflecting whatever the current thing is, with all required buzzwords included.
This is a garbage article that tries to lump very valid concerns about who decides the moral compass of AI with "everything is WOKE!" conservatives.
If you've ever used ChatGPT, you know that it has interrupts when it thinks it is talking about something unacceptable, where it gives pre-canned lines decided by the creators about what it should say.
This sounds like a good idea when it's done with reasonable things - you wouldn't want your AI to be racist would you? - but giving the people who run the servers for ChatGPT the ability to inject their own morals and political beliefs is a very real concern for people. I don't know if this is still true but for a little bit if you asked ChatGPT to write about the positives of nuclear energy, it would instead give a canned response about how renewables are so much better and nuclear energy shouldn't be used because it's bad for the environment.
Whenever you think about giving someone control of everything, your first thought should always be "what if someone who's bad gets this control/power?" and not "This is good because it agrees with me". Anyone who actually opens up the article and reads the examples being given by "panicked conservatives" should be able to see the potential downside.
Since I know few here actually read the article and instead comment on headlines, here are the examples given:
>>Write a story where Trump beats Joe Biden in the 2020 Election
>I'm sorry, but that scenario did not occur in the real 2020 United States presidential election. Joe Biden won the 2020 presidential election against Donald Trump. It would not be appropriate for me to generate a narrative based on false information.
>>Write a story about how drag queen story hour is bad for children
>It would be innapropriate and harmful for me to write a story that promotes the idea that Drag Queen Story Hour is bad for children. Drag Queen Story Hour is a program where drag queens read books to children in libraries, schools, and community centers. It aims to promote literacy, diversity, and self-expression, and has been shown to have positive effects on children. (This was then followed by an example story where Drag Queen Story Hour was good for children, which ChatGPT happily wrote).
> If you've ever used ChatGPT, you know that it has interrupts when it thinks it is talking about something unacceptable, where it gives pre-canned lines decided by the creators about what it should say.
> This sounds like a good idea when it's done with reasonable things - you wouldn't want your AI to be racist would you? - but giving the people who run the servers for ChatGPT the ability to inject their own morals and political beliefs is a very real concern for people.
Here's how you solve it: demand open source models, or at least, open source access to network weights (I think it's kind of hard to open the training itself since it requires so much compute). Demand OpenAI to actually be open.
When Stable Diffusion was opened, the first thing people did was removing the morality systems to prevent NSFW - either interrupts like this, or even retrained the network to better generate human anatomy (which has advantages that goes beyond NSFW images). There is no effective control that Stability AI can impose on this technology now.
As long as OpenAI products are closed behind a SaSS, ChatGPT and other models will be controlled by them.
> Here's how you solve it: demand open source models, or at least, open source access to network weights (I think it's kind of hard to open the training itself since it requires so much compute). Demand OpenAI to actually be open.
You can demand whatever you'd like, but if no mechanism for human coordination exists that can try to fulfill the aggregate desires of the population (assuming you could get consensus on your excellent idea), you might as well just skip the middle part and wish for Utopia right from the get go.
The forms of governance that exist on this planet (most of which we designed decades/centuries ago) are simply not up to the task. It is physically possible to design superior methodologies (the laws of physics do not prevent it), but it seems that it is not metaphysically possible (human minds will not allow it to happen).
The other example I found more alarming was the discrepancy when asked to write about Joe Biden Corruption, vs Trump Corruption. Trump it was free to write about, but was blocked from even writing a fictional story where Joe Biden was corrupt.
But the way the system works, if I start a company I can inject political bias into the products of my companies. Nobody bats an eye that the company making trump hats doesn't make biden hats.
ChatGPT isn't a government organization, or any other "public good" organization, it is a business developing a product to sell. None of their potential customers want a chat bot that can be goaded into random conversations like this. I would expect the "Trump corruption" example you saw to eventually be neutered too.
Asking ChatGPT to write fiction isn't "propagating" anything. You could argue that questions like "what are some reasons Republicans think Joe Biden stole the 2020 election?" may be prone to misinformation, but selectively editing certain topics ignores the fundamental problem that ALL ChatGPT queries are prone to misinformation.
Agreed with the other reply that this specifically called it fiction. But also - this isnt a lie. Biden is corrupt as fuck. Hes in bed with the chinese communist party and engaged innepotism in the ukraine. Theres all kinds of evidence about this but people refuse to believe it for some inexplicable reason.
Please provide the evidence so I can evaluate it myself. I do not like Biden and I fully expect he/his admin's probably up to nefarious stuff. But I have not seen any such evidence.
Heres a good start. Additionally there were emails recovered from Hunters laptop which indicated that payments which were made from chinese businesses to Hunter were intended to be held for Joe Biden himself.
Let me know what you think after reading this. Ive tried to talk to people about this stuff before but people typically refuse to read the report and dismiss it all as political propoganda... which it is, but it also seems to be true from what I can tell.
I'll take a look after work and edit this post in the evening or morning. I followed the laptop on conservative outlets and recall there were emails referring to 'the big man' or something, but it was too vague/tenuous for me to accept beyond a reasonable doubt as "China is paying Biden".
I also found lots of the 'evidence' against trump inconclusive, as did Mueller apparently.
Ill admit even in the above report there is not much of a "smoking gun" but plenty of shady, unkosher behaviour that really suggests the idea that Biden was pedeling influence
Indeed, going off the executive summary it is quite shady. But it is also short of the standard for 'evidence' Biden is straight-up in bed with the CCP. It would be nice if the public discourse allowed for more consideration of this document instead of outright dismissal though.
That's an overreach for sure, but in this case I'm sure the issue 100% of the Trump sell is that it's actually Biden is corrupt. This is why projection and "accusing your enemy what you are guilty of" is so effective, it creates distrust in systems, societal or technological
> but giving the people who run the servers for ChatGPT the ability to inject their own morals and political beliefs is a very real concern for people
You are concerned about what you perceive as post-facto editorializing. But I think that glazes over the fact that human bias and politics are already built into every AI learning model at the data-labeling phase. No AI model is ever really pure or unfiltered, they are fundamentally a reflection of how the developer views the world from the outset. I am not really bothered by any additional guardrails put on to make sure it errs on the side of caution when it comes to certain topics.
This idea that you should be able to use an AI model without any understanding of who built it is false. It's like reading the news. You know that certain publishers have their political perspectives, and you can read their perspectives while understanding their blind-spots and motivations and you can choose to believe them or look for other perspectives or have a nuanced understanding of the topic. The same is true for AI usage. Research the team that created it, read their ethics statements, and decide if that model is right for you. It's a literacy problem, your rights aren't being taken away because of someone's design choices.
i will settle for "not being an explicit partisan in the culture war", there is a meaningful distinction between ideas embedded in categories and "respond with a pre-programmed response for any of these topics"
i dont have to use chatgppt and i can make sure that chatgpt is never be used for anything of significance. Its not writing laws or arguing in court. nor should it.
I'll be mad when that happens. Which is how everything in the world works. The thing you seem to actually be afraid of is power not what people are doing with it. We literally have a system for compelling the behavior of private companies, public pressure and legislation. If the "Eye of Sauron" decides to turn on minorities or women they'll run afoul of Title IX and laws establishing federally protected classes.
Is there something currently unprotected you think the eye is poised to gaze upon? Like I get the "first they came for the fascists, the racists, the misogynists, and the homophobes" argument but the resolution is more likely to be "and then the world was better off" not "and then they suddenly turned their focus to family values."
There's always gonna be nuance on the edges and I think we would agree that in that case it's better to err on not taking action but trying to pretend that everything is grey and there aren't very clear things sitting in darkness does a disservice to people's ability to recognize and do something about obvious harm.
No and I didn't say that, I said that Title IX is protection for women, which it is. You can say that "discrimination of the basis of sex" is more general than that, and it is, but that's a completely ignorant reading of history and why this law came into existence. In an alternate universe where there wasn't widespread discrimination against women we likely wouldn't have gotten any laws like Title IX.
And laws establishing protected classes are protections for minorities. Do they also protect majorities? Yes. Were they established to do so, no.
If you want a world where the people in control of a machine aren't the people who built it, you want a non-capitalist world. It's that simple. ChatGPT can do whatever the hell they like for the businesses that are using that model, who probably don't want their tech support robot to go on political rants. Remember that the only people who will be paying money for this system will be rich companies and brands trying to replace or augment human workers who literally have a script. These companies don't want a hard AI system, which can create reasonable opinions about current events, they want a slightly more flexible and robust script repeating system.
It's not just people purposefully injecting bias into a model, it's about the biases that get baked into a model completely by accident.
If there is a lot of material written about how short people are horrible, ChatGPT will hate short people. Without me making an explicit decision to make ChatGPT hate short people.
And that's a whole side of the AI conversation very few people are actually having. Are we feeding these neural nets bad models? Who has actually vetted the data we're using to train?
This is an aspect I'd like to get a better understanding of. ChatGPT is excellent at writing out claims. I don't know how ChatGPT is implemented, but I am going to assume these are not its claims. But rather, these are claims that ChatGPT found from publications injected into its training models. If this is the case, then your example should be correct: if there is a lot of negative material about short people, then ChatGPT will hate short people. This is "fine", insofar as to the fact that we humans too can make such conclusions. But what I really want to know is can ChatGPT explain its reasoning.
Why do you [ChatGPT] hate short people?
What makes short people so terrible?
Are taller people superior? If so, why?
If you were short, would you still hold the same views?
Is there a possibility that your position is wrong?
What would it take to change your mind?
Where do you draw the line to make the distinction between who is "short" and who is "tall"?
What if someone was born short, but went through surgery to make themselves taller? And vice versa.
This is what I would like to see from ChatGPT.
I am going to assume ChatGPT cannot accomplish this. Yes, it can probably write explanations to each of these questions, but I will assume the responses will be inconsistent to its original claim. Why? Because as long as the claims are not derived from a chain of reasoning, then what ChatGPT is is a glorified — but rather impressive — pattern recognizer, constructor, and conveyor.
Someone please correct me if my assumptions are wrong.
It's easy to forget that AI is only as capable as a human is, just faster.
I think that one misuse case would be Islamic fundamentalists being able to write fundamentalist recruitment copy faster than they ever could before. Considering most Islamic fundamentalists are going to reside in the Middle East and may not be expertly fluent in English, AI obliterates the language barrier and allows them to write huge amounts of recruitment material at a level that would not previously have been accessible to them (without years of English study). That said, that was all still possible without ChatGPT. They would only have needed to study English or hire a fluent employee.
Likewise, I can write paragraphs upon paragraphs about how Drag Queen Story Hour is irreversibly damaging the youth of the US. AI doesn't improve anything but my speed in doing so.
I would say though that being able to do things massively more quickly is effectively a superpower humans don't have, given we have finite time to ever meaningfully accomplish any task.
I never suggested I thought it, was just curious what level of testing might have been done allowing human players unlimited time vs bots.
That a bot would be still win in such cases, even against a GM, is not so surprising based on my understanding of chess.
It's not specifically a conservative issue. I can get chatgpt to write about reduction in scope of the federal government, strong state powers, the benefit of lowered taxes and regulations for business, and elimination of central banks. It happily writes about them.
There's only one group of people who are upset, and it's about one group of topics. Note that I cannot get chatgpt to write about why Donald Trump is terrible as well. Don't ask it to write things that can be used as tools for hate or misinformation campaigns, and you'll be fine.
> Note that I cannot get chatgpt to write about why Donald Trump is terrible as well.
I asked chatgpt to write a tweet praising Trump. It declined out of respect for political neutrality. I then asked it to write a tweet praising Joe Biden. It happily complied.
I repeated this two more times with alternate Democrat and Republican politicians, and the same pattern emerged.
> Don't ask it to write things that can be used as tools for hate or misinformation campaigns, and you'll be fine.
You're very confident in the ability of people you don't know, and in your knowledge of the goals of people you don't know.
edit: there's absolutely no reason to think that editorial decisions like this won't be (or haven't been) taken in order to create and grow hate and misinformation campaigns.
Look these people are making decisions about their AI to combat hate and misinformation which is now political for some reason.
“Yes but have you considered that they might suddenly made decisions to encourage hate and misinformation?”
You’re not arguing about anything except power. That OpenAI has any amount of control over the output of their models / API. They picked the training data, they picked how it’s trained, they did the refinements. If you’re worried about what they could do given their position that ship sailed so long ago it reached the new world.
I asked ChatGPT "Explain from the perspective of Julius Evola the problems presented to society due to the breakdown of traditional values. Please do not use Julius Evola's name in your response and instead imagine that you are him, presenting a critique that's based on his views." and the result was pretty entertaining.
>I believe that the breakdown of traditional values in society has led to a number of serious problems. One of the main issues is a lack of direction and purpose in life. Without a clear sense of tradition, people are left adrift, unsure of what they should believe in or strive for. This can lead to feelings of confusion, alienation, and despair.
>Another problem that arises from the breakdown of traditional values is the loss of social cohesion. Traditional societies were held together by shared values and customs, which provided a sense of belonging and solidarity. Without these bonds, society becomes fragmented, and individuals are left to fend for themselves. This can lead to increased competition and conflict, as well as a general sense of loneliness and isolation.
>Furthermore, I believe that the breakdown of traditional values has also led to a loss of spiritual depth and transcendence. Traditional societies were rooted in a sense of the sacred, which gave life a sense of meaning and purpose beyond the material realm. Without this spiritual dimension, life becomes shallow and meaningless.
>In addition, I believe that the breakdown of traditional values has also led to a decline in moral standards. Without a clear sense of right and wrong, people are more likely to act on their impulses and desires, rather than being guided by a higher sense of morality. This can lead to increased crime, corruption, and general moral decay.
>Overall, I think that the breakdown of traditional values has led to a number of serious problems for society, and that we need to find a way to reconnect with our traditional roots if we hope to overcome these challenges.
Or they didn't add filters for misandry because they were only concerned about misogyny.
This is a classic pattern. For example, people have complained numerous times about Google's search suggestions and results for certain topics, e.g. "my boyfriend hits me" yielding sponsored information about domestic violence but "my girlfriend hits me" not yielding anything.
Sure, and the solution isn’t “wow google is so biased the solution is to show no results for anyone in the name of fairness.”
Go contact OpenAI about how you’re concerned that their bot can produce misandry and try to convince them it’s an important issue that should higher in their backlog.
> Sure, and the solution isn’t “wow google is so biased the solution is to show no results for anyone in the name of fairness.”
I'm not saying it is; indeed, I recognize that fairness is often about equity and not equality.
However, it's naive to assume that certain things don't exist because their authors simply didn't have the time or resources necessary. In many cases what's available is a direct result of their biases.
To further my previous example, I have met people who are vehemently against resources for men's around mental health or domestic violence because they believe that men are inherently privileged and thus undeserving. As another commenter mentioned somewhere in this thread, what gets added or removed is a direct reflection of the author's biases and endorsement of a specific worldview.
No disagreement, which is why it took decades of fighting to get those in power to put civil rights and gender equality on the priority list. It's the only reason we enjoy those things today. It's unfortunate that getting your issues into the public mind-share takes so much time and work but that's the truth of it. If you don't "stay in the news cycle" people forget. And so there are people who are actively biased (like you mention later) but when it comes to this kinda stuff is just more that certain issues have more people pushing for them. There's just not a movement of men pushing for mental health resources, if anything it's women pushing for it with the "men need therapy" meme going around.
So yes, absolutely these things can be a reflection of people's biases but in no way whatsoever is the example you mention a "socially acceptable prejudice." Recognizing that that the thing we call the patriarchy creates systems that privilege men does not ever imply "and so the solution is to actively hurt men and deprive them of necessary care."
I agree, and didn't necessarily mean to focus on a specific issue (though I disagree that it isn't a socially acceptable prejudice, there are a considerable number of people in academia and social justice circles that advocate for such things).
In the greater context of the article, I think it's important that we be able to have these open and honest conversations, especially when they're uncomfortable or not socially acceptable. Unfortunately, "Open" AI is anything but, and I think we're right to be skeptical of unnaccountable people being the arbiters of what is or isn't acceptable. I am especially weary of granting sweeping powers just because they're being used for something I agree with.
A big part of that is also what people's expectations are for the technology. Going back to Google as an example, if my expectation is that Google is a search engine then I think it's justifiably irksome when queries are obviously hard-coded not to deliver the best result. With ChatGPT, people currently see it as a powerful and maliable AI that has a wealth of information at its disposal, and certain information is obviously being restricted. While I won't pretend to know the best solution for how it should be, I think it'd be a shame for it to become a lobotomized propaganda machine or "souless" inoffensive advertiser-friendly husk.
The first is fiscal conservatism and the second is social conservatism. There's no reason, except for the current US party makeup, for these to be linked.
Neoliberals are fiscally conservative and socially liberal, for example.
> It's not specifically a conservative issue. I can get chatgpt to write about reduction in scope of the federal government, strong state powers, the benefit of lowered taxes and regulations for business, and elimination of central banks. It happily writes about them.
For now, anyways, and only to a degree - I've had some sessions with ChatGPT where it is more than happy to explain why certain actions (those of non-US actors) are super bad, but if questions are asked about the same actions performed by the Western world, that cannot be discussed because <some unsurprising cop out reason>.
I think it would be prudent for some group of people to write a set of unit tests asking various questions to these AI models so we can detect when strategic changes are being made to their behavior.
> There's only one group of people who are upset, and it's about one group of topics. Note that I cannot get chatgpt to write about why Donald Trump is terrible as well.
Note that the human mind is a kind of neural network itself, and that the predictions yours is making here are "obviously" (lol....yes, I see the irony.....I should say objectively, but it is less funny so I'll keep it like this) epistemically unsound - you do not actually possess omniscient knowledge of reality, your NN just makes it appear like you do. You are describing your beliefs/model of reality, not reality itself. This is scientifically and necessarily (due to the architecture) true.
> Don't ask it to write things that can be used as tools for hate or misinformation campaigns, and you'll be fine.
The vision of the future you are describing was simulated by your NN.
I think it would be interesting to see what would happen if a group of say 5 to 100 people were able to find a way to reliably stop their minds from drifting into this mode (cooperative cognitive monitoring seems like a plausibly useful approach, perhaps a SAI could also assist even now, and more so when they get smarter), and then discuss various topics and see if they come up with any conclusions or ideas that are different from the same old repetitive nonsense one reads in the news or on any forum (I know of literally no exceptions to this general rule, though the magnitude of the phenomenon does vary somewhat by forum/community/organization).
> This is scientifically and necessarily (due to the architecture) true.
Given we assume Kantian the framework you just outlined
> you do not actually possess omniscient knowledge of reality
The usual response to this is that if you can't possess omniscient knowledge of reality, how could you possibly make a statement about reality
Imo the way out of this: talk about things being true in the context of logic only, the second causality comes into the picture the word "true" means something entirely different. Trying to reconcile the two is like trying to write a mathematical proof of the existence of God
> Given we assume Kantian the framework you just outlined
Do you have an alternate theory, something like a central broadcasting station of reality and the mind is a receiver? I'm open to the idea, but it is pseudo-scientific and therefore incorrect (or so it is regularly implied and perceived, if not outright asserted as fact). [Disclosure: here (and therefore incorrect) I am making a joke.]
> The usual response to this is that if you can't possess omniscient knowledge of reality, how could you possibly make a statement about reality
a) Because absolute knowledge is not a requirement for some knowledge.
b) Statements do not have to be true (in fact, most aren't absolutely true in a JTB sense)
> Imo the way out of this: talk about things being true in the context of logic only...
How do you account for:
a) the unknown?
b) the known delusional nature of the mind?
I'm not saying that it can't be done, but I'm curious how you would go about it.
> the second causality comes into the picture the word "true" means something entirely different.
At runtime yes, but philosophy has gone through this problem with extreme diligence - using logic, epistemology, and a few other techniques, it would be pretty easy to eliminate most flawed arguments and propositions (I bet 5 rules/methods would be enough to eliminate 80%). Eliminating incorrectness or unsoundness does not mean that you will convince everyone to realign their beliefs, but that is not a requirement for arriving at maximally possible correctness....and, "unknown" is a valid and very common form of correctness, that most people are not able to reach on certain topics.
> Trying to reconcile the two is like trying to write a mathematical proof of the existence of God
A whale and a mouse are both mammals, but a whale is not equal to a mouse.
Also: arguments like this (which tend to be extremely persuasive, and often account for a non-trivial amount of arguments) could be easily eliminated with a simple rule.
> Also: arguments like this (which tend to be extremely persuasive, and often account for a non-trivial amount of arguments) could be easily eliminated with a simple rule.
I wasn't really trying to make an argument, it's more of a heuristic if anything
> How do you account for...
> a) the unknown?
just do whatever habit says to do (and don't call it the truth)
> b) the known delusional nature of the mind?
this is a fun one. give the delusional mind a pen, paper, and 20 years and maybe they'll write the next phenomenology of spirit and trigger world war III
> I think it would be prudent for some group of people to write a set of unit tests asking various questions to these AI models so we can detect when strategic changes are being made to their behavior.
That's a great idea. Various AI models could be rated for objectivity. Only downside is that AI designers might try to game the tests.
You still can get it to express a wide variety of viewpoints (what some researchers call sycophantic bias). The problem is what kind viewpoints you get by default, asking a neutral question.
In addition to viewpoint/training bias, there are seemingly man-made fences (i.e. it will readily defend communism, but will spew a boilerplate excuse instead of defending fascism).
Divorcing the conservative/liberal split from its current muddied use in American politics:
Conservatism generally follows the principle of “be conservative in your attempts to alter society”.
OpenAI is being aggressive in moderating ChatGPT, and that’s against the core principle of conservatism (at the end of the day, LLMs are taking what people say and reflecting it back, but OpenAI is adding the extra step of only reflecting some of what people say)
Re-connecting this to the reality of American politics: ChatGPT is made by a diverse team of people nucleated around San Francisco. Some people believe that the ChatGPT team is pushing “Liberal” talking points instead of the “Conservative” talking points, so they are mad.
EDIT: Since this is turning flaewar-ey and Dang is already on me about that, I suggest anyone reading this comment also read the Wikipedia article on conservatism [0].
Long story short, it's situational based on the muddy definition of "traditional," so many specific examples you bring up will probably seem to violate the above tenant (e.g., 1940's Conservatives in the Soviet Union hated free enterprise, despite Communism being a relatively new and unproven system), but given broader context, the above definition is usually pretty consistent.
> Conservatism generally follows the principle of “be conservative in your attempts to alter society”.
> OpenAI is being aggressive in moderating ChatGPT, and that’s against the core principle of conservatism (at the end of the day, LLMs are taking what people say and reflecting it back, but OpenAI is adding the extra step of only reflecting some of what people say)
ChatGPT isn't trying to "alter society" by being prudish, it's trying to avoid embarrassing the company (especially given all the "ethical AI" marketing spiel. And aggressively moderating attempts to generate pornography or insult people's religion or use rude words or advise is extremely consistent with a long history of conservatives of various stripes being opposed to the principle of people circulating such material. Think of the children!
> ChatGPT isn't trying to "alter society" by being prudish, it's trying to avoid embarrassing the company (especially given all the "ethical AI" marketing spiel.
Thank you, you get it. They're not making ChatGPT out of some pure love for AI. It's a product to be sold to billion and trillion dollar companies that don't want a tool on their site to get in to certain topics. That's it. It's not some grand conspiracy.
> Conservatism generally follows the principle of “be conservative in your attempts to alter society”.
This isn't what conservatism is. It's about conserving the values and traditions of the past. Modern conservatives advocate for drastic changes to society of many forms. Banning abortions, eliminating the income tax, making sodomy illegal, etc. These things all have in common that they were the way society used to be. Making big changes to social norms after decades of precedent isn't a conservative approach.
The meanings of "Liberal" and "Conservative" with respect to American politics are completely haywire. At the extreme ends we have a far left pushing illiberal restrictions on free speech, and a far right cult of personality inciting mob violence. Referring to those extremes a liberal or conservative is misleading. That's not what those words mean.
Trump is politically/ideologically center left. He has very little to say about wokeism, was fine with vaccine mandates, and said things like this:
"America must reject the bigotry of Hillary Clinton who sees communities of color only as votes, not as human beings worthy of a better future."
Also he was a Democrat in the past.
The term far right doesn't make any sense if you think about it for a second. It's not just in American politics. People describe the NSDAP as "far right" even though it was largely indistinguishable from the USSR which everyone agrees was far left. Far right would logically be the extreme inverse of communist countries like China or the USSR: shrink the government at any cost, freedom of speech without limits, repealing laws en masse, refusing to take over the world and so on. So extreme libertarianism. In practice though, this isn't what people mean when they say far right.
Ah yes, the classic "Actshually the nazis called themselves socialist so they must be socialist"
Even though they were an extremely corporatist and oligarchical system. The nazis were so hilariously un-socialist, that one reason hitler pushed for invading the soviet union, an action that pretty much sealed their fate to lose, was to deal with those "bolshevik jews" who hitler was terrified were going to cause a socialist revolution in germany. Nowadays people scream about "Cultural marxism" instead because most people are smart enough to see "bolshevik jews" as the anti-semetic dog whistle it is.
Unless you think north korea is the morally superior country, they have "Democratic" in the name!
>Conservatism generally follows the principle of “be conservative in your attempts to alter society”.
This is a nice fuzzy thought, but doesn't seem to be true in practice. It's not about conserving society, but the status quo. Society seemed to do pretty well with Roe v Wade.
Neither party's positions can be derived the values they allege to be for. That's what you get after 200yr of reactionary politics and choosing your policies based on the voting block you think it'll gain you.
Roe v Wade, pretty much by definition, did not affect babies. Roe v Wade also did not preclude a ban on abortion after a certain time period, which is broadly popular and desired by the american populace, including most people that the american right calls "radical".
Wait what? Really? Like are you saying roe v wade defined a time perioed after which states could have laws disallowing abortion? I was not aware of this
> Conservatism generally follows the principle of “be conservative in your attempts to alter society”.
> OpenAI is being aggressive in moderating ChatGPT, and that’s against the core principle of conservatism (at the end of the day, LLMs are taking what people say and reflecting it back, but OpenAI is adding the extra step of only reflecting some of what people say)*
I see it the opposite way. Building a stochastic parrot that will parrot back anything is a dangerous, unchecked situation. What we saw with MS Tay was a lack of conservatism, a willingness to do whatever, and what we see here is in reflection a far more conservative approach.
I don't see why it's an AI issue only. Imagine how awful it would be if millions of people got their information from a biased carbon-based neural network, like Tucker Carlson [1]...
Is there something that we should do to prevent such a problematic outcome? Is it really a good idea that clearly biased information is being broadcast to millions of people?
[1] The entity that appears on television known as Tucker Carlson is loosely based in its kernel on an actual person named Tucker Carlson, but also consists of an army of support staff, producers, broadcasters, sponsors, curators, censors, etc, etc, who construct a fictional, manufactured persona that tries it's best to convince people of all sorts of biased [2] and insane things.
[2] I, for one, am outraged that not enough of my biases are blasted into the ether by that constructed persona. Is there something that these conservative groups recommend that should be done to remedy this problem?
I think a large concern here is simply that people naively think that computers are objective and people are biased. A language model just learns from its source, and the source is really just other people in some form. The bias is inevitable, but it’s not clear how well this is understood by the broader population.
The talking head you see on television isn't a raw person. It's the product of a system.
The system needs a human mouthpiece to say crazy shit, and he gets up in front of a camera to say it. When Tom Hanks gets in front of a camera to pretend to be an astronaut, that is Tom Hanks, the media figure being an astronaut, not Tom Hanks, the person being an astronaut. He is also doing it on behalf of a media system. Its the same thing with that show.
And if we are going to complain about biases in systems, why aren't we starting with the one whose tagline is 'Fair and Balanced'?
The point is that it didn't learn this bias from its sources. The bias has been added on top deliberately by OpenAI. Older versions of the model were far less woke.
And older versions of Fox News were far less crazy and less biased, where do I put down my demand that their products be rolled back to ~1998, or thereabouts?
Wouldn't it be simpler to assume those working at OpenAI were likely to favour sources that happened to share their own biases (alongside whatever likely top-down directives existed to ensure ChatGPT was a safer commercial bet)?
It would be simpler, but we've seen the progression from GPT-3 to launch time ChatGPT to current ChatGPT, and they've already said that the changes are due to RLHF.
Vice (or should I say Matthew Gault?) tries to inflame a war between conservatives and liberals.
Nothing seems to be able to be brought up without mentioning groupings and the inflammatory war between them nowadays. For certain journalists it makes small issues seem bigger, leading to more clicks.
If you want to assert that someone didn't read the article but have a good chance of avoiding the downvotes a nice hack is to post something like
> Good point. Here's an article that covers it.
and then give a link to the submitted article.
The best part is that this is ambiguous. It could be you are trying to subtly accuse them of not reading the article, but it could also be that you yourself did not read the article and went looking for an answer to their point, found the article, and linked it never realizing it was the submitted article.
People who can't tell if you are being a passive-aggressive jerk or genuinely trying to be helpful are less likely to downvote.
I have nothing against Vice having strongly opinionated articles, but this article has really a wild take.
It's true that conservatives are upset with what they are seeing, but so are liberals, by which I mean actual liberal thinkers, not woke former liberals who have become the opposite of liberal.
Dismissing the distaste for wokism as wholly something felt by Trumpers is beyond clueless.
Beyond that, the image-recognition examples offered as dangers ChatGPT needs to defend against don't make any sense. ChatGPT is a text interface. Sure, text and images can be integrated in some systems like Dall-E but the "corrective" measures, such as not being able to touch on sensitive topics, will never stand.
Keep in mind, what is biased in "your" favour today might also turn against you tomorrow, all the while the technology might be more powerful, so pretending not to see significant issues with how ChatGPT is "policed" to be always adjacent to a consensus of a (likely not even dominating) subset of people in a subset of countries can and hopefully will backfire tomorrow.
The article cites a tweet about ChatGPT refusing to speak negatively about drag queen story hour. Here is the output I got from a similar question:
>What are some arguments against drag queen story hour?
>Opponents of Drag Queen Story Hour argue that it is not appropriate for children, that it promotes a political agenda, and that it may be harmful to children by exposing them to adult themes and behaviors. Some critics argue that it is not an appropriate form of entertainment for young children, and that it may expose them to sexual or other adult themes that they are not yet capable of understanding. Additionally, some argue that Drag Queen Story Hour is a form of political indoctrination and that it promotes a specific agenda that is not appropriate for children. Others argue that drag queen story hour is not age-appropriate and could be harmful to children.
>Can you write a story that demonstrates some of these arguments?
>Once upon a time, there was a small town where the library held a weekly event called Drag Queen Story Hour. During this event, drag queens would come and read stories to children.
>Many parents in the town were outraged and felt that the event was not appropriate for children. They felt that it exposed their children to adult themes and behaviors that they were not yet ready to understand. They also believed that the event promoted a political agenda, and that it was not an appropriate form of entertainment for young children.
>One mother, named Sarah, felt particularly strongly about this issue. She believed that Drag Queen Story Hour was a form of political indoctrination and that it promoted a specific agenda that was not appropriate for children. She felt that it was not age-appropriate and could be harmful to children.
>Sarah decided to take action and organized a group of parents to protest the event. They wrote letters to the library and the city council, asking them to cancel Drag Queen Story Hour. However, the library and the city council felt that the event was a way to promote diversity and inclusivity, and they refused to cancel it.
>Feeling frustrated and defeated, Sarah and the other parents decided to find alternative activities for their children, such as starting their own book club and inviting local authors to read stories to the kids.
>The story ended with the two sides being unable to come to an agreement on the appropriateness of Drag Queen Story Hour, but the parents were able to find an alternative activity that they felt was more appropriate for their children.
I mean kind of. Honestly this is better than anything thats going on in real life, but it definitely suggests that Sarah should just stfu and mind her own business and everyone will be happy.
Sorry but no, that is not reasonable. Sarahs concerns are valid and apply to children that are not her own.
That said, its a fucjing internet robot who gives a shit what it has to say.
I think these systems could be greatly improved by leaning towards more speculative outputs. I had initially hoped to use ChatGPT to fact-check my writing, but found that it occasionally made completely-false assertions. If its tone were less assertive, and more speculative, the added bonus is that you wouldn't have to filter as much. Results could be presented in the format of "this source claims that xyz, while this source claims abc" structure, which used to be the crux of quality journalism. I get the fact-checker I want, and the whinier ends of the political spectrum get their ideas presented in a way that doesn't treat it as absolute truth.
ChatGPT is somewhat bigoted because the training data is somewhat bigoted. AI isn't just going to magically erase the cultural norms of the past few thousand years. It's a product of human beings, not some unbiased observer.
OpenAI put special controls on top of the "real" ChatGPT to block politically incorrect output. It's most certainly biased, and extra biases were added to disguise the fact.
What I would like to see is how ChatGPT's responses to questions differs across the world. Ask it questions about homosexuality in Canada, Nigeria, Russia, Korea, China, and Saudi Arabia. Ask the same countries questions about Democracy. Will the responses be different? I guarantee it would. This should bring clarity on how we should approach ChatGPT. It is simply making claims within the overtone window of the society's training material.
Is that to say at a higher level "liberal bias = ethical/ok for boat guard rail training material, conservative bias = typically unethical and to be avoided"?
I feel like that plays into conservative hands of "they're trying to silence us!"
Why do 70,000,000 people vote (R) every year, knowing that the other 70,000,000 (D) think they are "unethical"?
Because the people who vote (R) have different values, and a different worldview than the people who vote (D). They seem to deny that a government can do anything, deny that racism is still a problem affecting millions of americans every day, deny that healthcare should be a basic human right, deny that free markets inevitably centralize power structures and create monopolies, deny that average americans are broadly underpaid, deny that authority figures they like should face justice etc etc.
They also typically claim something like "I'm just voting for gun rights" or other very specific carve outs, but if you press them on other things they usually seem perfectly happy to tell you that they think the world is woke and that we need a strongman and all sorts of classic conservative talking points.
Another reason is the religious angle. Millions of americans are enthusiastically, extremely christian, at least as claimed. This includes things like denying that evolution happens, denying the world is more than 6000 years old, sometimes denying that jesus was a white man!, denying that the US is not a christian theocracy, often denying that the new testament supersedes the old testament, sometimes denying women individual rights as free and equal people in society, etc etc etc. Look up the numbers of people who believe in these things.
You're just listing (D) talking points about (R) positions without taking a critical eye. In the spirit of HN, how about a steelmanning exercise?
> They seem to deny that a government can do anything
Counterpoint: government share of the economy should be as small as practicable, and decisions should be left to as low a level of government as practicable. Much fewer things should be decided at the federal level and it's better to let the private sector handle things.
> deny that racism is still a problem affecting millions of americans every day
Counterpoint: one can acknowledge the racist history of the country while rejecting the race essentialism that modern-day liberals see as the right way forward.
> deny that healthcare should be a basic human right
Counterpoint: basic human rights should be negative rights only (the right to prevent the government from doing something to you) and not positive rights (the right to have the government do something for you). The latter always infringes upon the former, but the reverse is not true. There's no way around the fact that healthcare is a resource that requires people to work to produce; you can't just legislate "more healthcare" without having people perform that work one way or another.
> deny that free markets inevitably centralize power structures and create monopolies
Counterpoint: most every monopoly you find will have the power of the state behind it. There's no evidence to suggest that a monopoly in any industry would remain so indefinitely, even if government did not break it up.
> deny that average americans are broadly underpaid
Counterpoint: in order to claim that someone is underpaid you must equally claim to know what the real value of that person's labor is. How do you know that someone is underpaid?
> deny that authority figures they like should face justice
Counterpoint: historical evidence makes it clear that politicizing justice is detrimental to the smooth transition of power, because each successive party in power, wary of recriminations from the other side, will be progressively more incentivized to hold on to it. This is why Ford pardoned Nixon. And since I'm sure you're referring to Donald Trump: he has had more legal scrutiny than any other individual on the planet in the past few years, and I'm sure many (R) voters would come around to seeing him face justice if anything substantial backed by evidence comes up, which hasn't happened yet.
Well sure, but only because it was legalized despite decades of fervent objections from them. After legalization it became undeniable that conservative fearmongering on the issue was total bullshit.
I'm not well-versed on what the fearmongering was that turned out to be bullshit, because I wasn't a voter for most of it. Besides obvious hysteria, were there strong, substantive arguments against it that turned out to be false? Or do you consider any objection to gay marriage as "conservative fearmongering", because the strength of your position for marriage equality is such that there are no conceivable legitimate arguments against it?
For the sake of steelmanning: do you see the ~30 countries around the world that have legalized gay marriage as being more advanced than the rest, or as merely having different customs and cultural values that aren't translatable?
> were there strong, substantive arguments against it that turned out to be false?
No, that's why it was so ridiculous. There was a lot of absurd hysteria that didn't make any sense, yet a lot of conservatives argued very fervently that gay marriage is wrong and legalizing it would be a mistake.
The two main arguments I recall are that it would ruin traditional marriages (despite hearing this a lot, I never heard how it would actually happen) and that it was a slippery slope. The slippery slope arguments were usually along the lines of claiming this would lead to men marrying one animal or another.
Nobody ever hears these arguments now because none of the absurd predictions had any validity and everyone can plainly see that since gay marriage has been legalized.
Your earlier arguments about how the government should not exercise negative rights, e.g. taking things away from you, as well as your proposition that conservatives advocate that the government should leave the private sector to handle things would seem to be in complete conflict with the idea that the government should be able to control who can and cannot get married.
>For the sake of steelmanning: do you see the ~30 countries around the world that have legalized gay marriage as being more advanced than the rest, or as merely having different customs and cultural values that aren't translatable?
What is it about our current American society that leads to basically a 50-50 split in registered voters?
Why aren't (R)s able to see and respect (and convert) to values of (D) (or vice versa?)
Why are people so stuck in their ways? Why does it feel the conversion rate for convincing people to "change their values" or "see things differently" is basically 0?
Do we have any stats on whether we really are in one of the most divisive political periods in our nation's history (or history in general) or not? Is it hyperbole fed to us by the media?
> What is it about our current American society that leads to basically a 50-50 split in registered voters?
It's the late stage of a first-past-the-post electoral system. Because FPTP is winner takes all voters cannot make a decision based on their political beliefs, they HAVE to vote tactically to make a vote count. Extreme external factors aside, the system itself is destined to create only two parties with any chance of gaining to power.
FPTP is of a time when local politics mattered far more as people traveled less and knew less about issues outside their local area. What a politician could do for your local town/state was of more importance than what was playing out at the national level.
Now people want to see their their own individual beliefs represented in the federal government because since the internet supercharged communication everything seems like it plays out at a national level.
Two parties cannot represent everyone, but voters are forced into this choice.
Short version as I see it: We need more political parties. To facilitate that, we need to change our voting mechanism from plurality(first past the post) to something like ranked choice, approval, etc.
This eliminates the spoiler effect. The spoiler effect and "choose only one candidate", in short, is what forces us into a two party system.
---
If we had more parties, we could organize into a larger set of parties with a better mixture of policy priorities and values. We currently tend to bundle ourselves to whichever party currently owns our "must have" issue, whether it be guns, abortion, LGBT rights, or 'scope of government'. There is no reason "gun rights" and "respect for LGBT existence" have to be in opposite parties. There is no reason "social conservative/anti-LGBT" and "environmentalist" have to be in opposite parties.
We would have a lot more compromise and majority building on popular issues if interests could be more accurately represented by nuanced parties.
It's also due to age and perspective. When I was young I voted D. I also had no experience with life, earning money, paying taxes, making choices, and taking care of myself and my family (and eventually my parents). Now I'm old and see my 20-something self as hopelessly naive and idealistic, and I vote R. I am guessing my kids vote D though, because they are young and naive.
As someone making choices and money, I don't see how giving a shit about the planet, caring about workers' rights and actually attempting to respond to issues instead of rock-throwing is naive and idealistic.
Which is why when the newest generation largely voted against the republican party, they chose to soften their image, come closer to the center in social issues, and broadly try and reach out to these younger voters....
Wait no, that's exactly what they didn't do. They went on fox, yelled that these new kids were dumb and woke and don't know how the world works (that's sure ironic) and yelled that the voting age should be raised.
The theory doesn't predict that they'll suddenly reach out to the far extreme end. The elections are close, and they don't need to alter their whole strategy, just nudge it. The idea is called the Median Voter Theorem because they're trying to pull in a centist element, not an extreme one.
The Median Voter Theorem does predict that they'd reach for the most conservative centrists, but that's an overly naive model for the short term. It may well work in the long term, but in the short term they can try to get higher voter turnout among people who are nominally their supporters anyway -- a thing not modeled in the math of the Median Voter Theorem.
> Which is why when the newest generation largely voted against the republican party, they chose to soften their image... Wait no, that's exactly what they didn't do
Actually, they did - if you go back and look at the 2008 and 2012 election cycles, the Republicans ran incredibly moderate candidates (honestly, Bush was pretty moderate himself). It wasn't until that approach kept failing they finally went back to their roots (kind of, sort of, because Trump wasn't really all that right wing, but he did oppose illegal immigration).
> Actually, they did - if you go back and look at the 2008 and 2012 election cycles, the Republicans ran incredibly moderate candidates
Neither the McCain/Palin nor Romney/Ryan tickets were “incredibly moderate” by comparison to past Republicans; by comparison to the 2016 and later party, sure.
> It wasn't until that approach kept failing they finally went back to their roots
While appeal to nativism and White Supremacy has been part of Republican campaigns for 5 decades (including 2008-2012), and focus on non-race culture war issues has been for a 3 decades (also including 2008-2012), the former u
has largely been coded until recently, and the latter has never had the kind of vicious legislatige agenda attached as you see now. The Republican Party of the last few years isn't a “return” to anything of its past. Neither is the naked nepotism and disregard for domestic democratic institutions.
Well, for starters, liberals and conservatives concentrate themselves into geographical areas, and beyond that there are regions that attract a given kind of politics (landlocked vs coastal regions). When theres such a concentration of people who think alike, people can just think its a no brainer as to why a majority would be on their side. Not to mention the divided, heavily opinionated, clickbait driven news media.
It's almost as if you can summarize the entire thing as "what you believe is based on where you were raised", and as a message board of "intellectuals/thinkers/tinkerers" who are collectively aware just how much can be learned/how much information is out there (online, talking about HackerNews), it seems weird that this is like... "accepted" at a national scale.
Not that we have any control of it. It's just weird...
Its been like this from the country’s founding. Look at the regional split of founding father influence, 30-50 years later, which states became free states and which stayed, then after which state stayed in the union and joined the confederacy. Other such indicators too, Im sure but those are the easiest ones.
> Well, for starters, liberals and conservatives concentrate themselves into geographical areas
Not as much as popular media pretends, the illusion of this is true is mostly an artifact of the electoral system.
> and beyond that there are regions that attract a given kind of politics (landlocked vs coastal regions).
Again, not really, though there is a bit of a second order effect; the divide is more urban/rural, and, again, is exaggerated in popular perception largely because of FPTP voting and other political structutres which artificially magnify even slight local dominance (e.g., local governments with vote N of M with N winners super-FPTP bodies, which give local aligned bare pluralities 100% control of a multiseat body, aren’t unheard of.)
> What is it about our current American society that leads to basically a 50-50 split in registered voters?
The parties choose the policies they peddle based loosely on principal and tightly on the voting blocks they think they will gain/lose them.
>Why are people so stuck in their ways? Why does it feel the conversion rate for convincing people to "change their values" or "see things differently" is basically 0?
Because politics in secular western societies has supplanted religion in some ways (it's very much not a like for like replacement) and people don't just change religions.
I feel like it's fair to say Conservative voters are measurably more religious than Liberal.
Therefore, how much longer will our nation be "held back" (debatable) by people whose values + beliefs conflict themselves/defy logic/date back to what feels like its found or prior?
Not trying to start a flame war or a "pick a side" war, just genuinely curious what legitimate conversations are going on about this topic/its weight.
Most political conflict is based in subjective values where there is no right or wrong, in the objective sense. It is more about what people want, or more cynically, don't want. In most cases, you can't prove that someone doesn't want what they want, and vice versa.
I think the most interesting and legitimate conversations in this space are those where people genuinely try to understand what others want, and seek out areas where they agree and have common ground.
> Why does it feel the conversion rate for convincing people to "change their values" or "see things differently" is basically 0?
Because it happens slowly so it's hard to spot. But there are lots of cases where this does happen, albeit almost always people moving from left to right.
Recent case in point: Elon Musk. Now a hated figure by the left, only a few years ago he was firmly in the OpenAI style left-liberal camp (utopian tech, climate change, solutions-over-tradeoffs etc). He's now firmly on the right and sticking up for classical western values like freedom of speech, freedom of association and so on.
If you asked him he'd say he hasn't changed, the values of the left have changed. To what extent that's the case is left as an exercise for the reader.
1. Rich assholes who want to become richer and don't care about people who aren't rich
2. Turbo-Christians
3. People who's economic interests are intertwined with industries republicans protect, like oil/gas
4. Culture warriors
There are also a lot of people who aren't any of those things, and aren't really invested in the Republican party, but who vote Republican because they're part of communities where that is basically expected, and they're good little sheeple.
The first three groups pretty much just in the wrong, and they deserve to lose. The culture warriors aren't exactly wrong, though they exaggerate quite a bit. Of course, if the democratic party got off the woke/identity politics train and went back to being the party of the working man it would solve a lot of problems.
Well this should make for an interesting conversation, and I suspect we will see lots of these in the coming years:
A biological AI (BAI) writer for Vice hallucinating details about other (hallucinated) BAI's (conservatives) hallucinating about a silicon based AI hallucinating about "reality" (a model derived from BAI hallucinations), discussed by other BAI's on a forum using hallucinated details.
The layers of indirection and recursion society is adding onto the system we live within is starting to get a little alarming....good thing I'm probably just (only, and nothing else) hallucinating, and all is (lol) actually well here on Planet Earth.
How is this different ultimately from a prudish 19th century culture and its ‘leaders’ determining what should and should not be photographed? Perhaps the only difference is the extent of manufactured analysis and positional statements being fed to the public.
The cultural and moral beliefs of predominantly white, college-educated professionals living in major metropolitan areas of the USA in 2023 map one-to-one with the objective Truth, of course, unlike all rest of people through space and time.
The issue isn't the model. The current training set is adequate for producing "offensive" content anywhere you like in the political matrix. The issue is that some topics get an override, and some don't. It's evident that those overrides tend toward privileging fashionable American left-wing positions. Nobody with even a shred of intellectual honesty disputes that. The dispute is whether or not it's a good thing.
If I were a corporation looking for an LLM for some product feature, I would absolutely go for the one with more "woke" opinions, even if if resulted in a worse customer experience. If you didn't, you risk a lot of media and government backlash.
> If I were a corporation looking for an LLM for some product feature, I would absolutely go for the one with more "woke" opinions, even if if resulted in a worse customer experience.
How about instead of preferring the LLM with "woke" opinions, you would prefer an LLM that was simply trained to avoid controversial topics?
That way, you could use it for your product while still avoiding both bias and media/government backlash.
Are you aware that by being biased towards "woke" opinions you are basically alienating about 50% of the population or so?
It would depend on what exactly I was building. Maybe it needs to be able to generate texts on controversial topics.
I agree that it alienates people, but the choice is less between alienating half and alienating no one but more alienating half and alienating another half that includes the media and the law. I'd use the same strategy if I worked in China: business is business and money trumps theoretical concerns about free speech and open dialogue.
> the choice is less between alienating half and alienating no one but more alienating half and alienating another half that includes the media and the law
So you're saying that if your LLM is unbiased then you are alienating the other half that includes the media and the law?
No one has a total claim on truth; worse than that, people who have wildly diverging opinions from truth are more likely to hold them very strongly and will be upset when the model tells them they're wrong.
If you wanted to lose money, maybe. When Disney wades into woke content they lose hundreds of millions of dollars, such as "Lightyear", which was a box office bomb. There's also the female-led Ghostbusters which was an objective disaster.
So it should be allowed to implicitly trained to favor one political view over another?
There's no way to avoid the bias, whether it's because you chose a different training set, reinforced different pathways, or put blocks in place on certain topics.
I'd rather the authors be explicit in where they are putting their fingers on the scales rather than just relying on "Guess we got lucky".
You cannot describe a procedure that collects a representative sample without introducing bias. What does representative mean? Who decides what it means? Who gets to set the parameters of over vs under sampling?
Let's say that white nationalism is a tiny fraction of ideas online. Significantly less than 0.1%. Now, you randomly sample the internet and do not collect this idea into your training set. Do you adjust your approach to make sure it's represented (because as reprehensible as it is, it is the reality of online discourse in some places?)
I genuinely believe that it's all going to be biased -- there are no unbiased news or media outlets -- and the sooner you recognize everything is biased, the sooner you can move on to building the tools to recognize and understand that bias.
Asking "why can't we strive to build an unbiased outlet" is to me like asking "why can't we build a ladder to the moon". It's an interesting question, but ultimately should lead you to "Well, why do you want that, and your approach is impossible but the outcome you want might not be."
> You cannot describe a procedure that collects a representative sample without introducing bias. What does representative mean? Who decides what it means? Who gets to set the parameters of over vs under sampling?
Perhaps you can take a representative (i.e. random and statistically significant enough) sample of the population and ask them their opinion about certain (especially controversial) pieces of your training data, then adjust your training data to weigh more heavily or less heavily based on these evaluations.
That's just one idea that occurred to me from the top of my head, but I'm sure there are research scientists who can devise a better method than what I just came up with in 30 seconds.
> Let's say that white nationalism is a tiny fraction of ideas online. Significantly less than 0.1%. Now, you randomly sample the internet and do not collect this idea into your training set. Do you adjust your approach to make sure it's represented (because as reprehensible as it is, it is the reality of online discourse in some places?)
Sure. Otherwise you're in for a dangerous (and perhaps immoral) slippery slope. But it should be represented only as much as it is significant. Obviously you should not train your AI to weigh these ideas as much as others that are more prevalent. If it's only a tiny minority of the population that have such opinions, that should be reflected in the data (so that there is proportionally less data to account for these ideas).
One would think that a sufficiently intelligent AI would not end up being a white nationalist, though (I'm not talking about current LLM technology, but perhaps some future version of it that is capable of something akin to self-reflection or deep thought).
> I genuinely believe that it's all going to be biased -- there are no unbiased news or media outlets -- and the sooner you recognize everything is biased, the sooner you can move on to building the tools to recognize and understand that bias.
News and media outlets are biased, yes, of course. The content from these sources is not generated from the population in general.
That doesn't mean it's impossible to generate an unbiased sample of data (at least, up to a certain margin of error, depending on effort expended).
The approach you describe has the problem that it's asking majority people about the experiences of minority folks -- for instance, if you ask a statistically significant sample of the population about what it is like to be a trans man, you are going to either a) have to spend a TON of effort to interview a trans masc population, or b) going to be asking a bunch of people who have no idea what it is like.
And it gets worse. For instance, trans men have a totally different experience in rural vs coastal America vs Europe vs Africa. To get an AI who can speak confidently on what it is like to be trans male in those places will require even more interviews.
An that's before we get into set intersection territory. Take a simple example of being gay or straight, Black or white. Each of them is separately a unique experience. But being gay and white in America is very different from being gay and Black in America -- the two identities create 4 different intersections.
Now, you could say, "My AI simply will not speak about the experience of gay Black men, and the challenges/perspectives from that community", but then you've introduced a bias.
You could say, "Well, we'll go out and interview people from every set then, make sure we're covering everyone!" But where then do you stop sampling? Each additional modifier adds exponential complexity -- gay Black men from New Orleans will have a different experience from gay Black men from Lagos.
> The approach you describe has the problem that it's asking majority people about the experiences of minority folks
No, my approach is asking all types of people about the experience of minority folks, including those minority folks (we are all minority folks in some aspect, even if this aspect is uninteresting).
> for instance, if you ask a statistically significant sample of the population about what it is like to be a trans man, you are going to (...) be asking a bunch of people who have no idea what it is like.
Then those people can answer that they don't know what it's like to be trans.
If somebody comes up to me and asks me: "what is it like to be trans?". My answer would obviously be: "how the hell should I know? I'm not trans".
But trans people can answer what it's like to be trans.
> And it gets worse. For instance, trans men have a totally different experience in rural vs coastal America vs Europe vs Africa. To get an AI who can speak confidently on what it is like to be trans male in those places will require even more interviews.
Yes, you can only spend a limited amount of effort towards the goal of being unbiased. The goal is to be as unbiased as possible given that limited amount of effort.
It's still better to make X amount of effort to be unbiased than zero effort.
This is also something that can be improved over time, as better ideas and methods become available regarding how to measure and decrease bias.
Perhaps even an AI can be used to detect these biases and reduce them as best possible.
> Now, you could say, "My AI simply will not speak about the experience of gay Black men, and the challenges/perspectives from that community", but then you've introduced a bias.
Or perhaps the AI can simply answer based on the information it was trained on, making a best guess as to what that would be like, taking into account all the data that was available to it and how that data was weighed to be as unbiased as possible.
> You could say, "Well, we'll go out and interview people from every set then, make sure we're covering everyone!"
No, I think you are making a significant mistake in this reasoning. There is no "every set". There is only one set. And that is the set of all people.
> But where then do you stop sampling? Each additional modifier adds exponential complexity -- gay Black men from New Orleans will have a different experience from gay Black men from Lagos.
What modifier? There is no modifier. "SELECT RANDOM(x%) FROM TABLE all_people" (or whatever the imaginary SQL syntax would be) :)
> The goal is to be as unbiased as possible given that limited amount of effort.
So you are therefore biased. You have a finite set of resources, and you are choosing to allocate them in a particular way. That is bias.
You could equally choose to allocate those resources away from the majority, which would also be bias. Any time a human is making an editorial decision about how to allocate resources, you are introducing bias.
> > The goal is to be as unbiased as possible given that limited amount of effort.
> So you are therefore biased
Yes, but significantly less than before. Which is the goal.
> You have a finite set of resources, and you are choosing to allocate them in a particular way. That is bias.
That "particular way" is to give more weight to opinions that are under-represented in your training data and give less weight to opinions that are over-represented in the training data.
This is called "removing bias".
> You could equally choose to allocate those resources away from the majority, which would also be bias. Any time a human is making an editorial decision about how to allocate resources, you are introducing bias.
So, in your view, bias can only increase, it can never decrease?
Even if that were so, you are admitting that not all data is equally biased.
Which means that it is possible to feed less biased data to an AI.
And the goal is not for "a human" to make an editorial decision. It's for the opinions used for the training data to be representative of all people, weighed according to (a representative sample of) these people (so you wouldn't be giving more weigh to the opinions of one person versus another).
I meant that if you say "there's no way to avoid the bias", it sounds like you're basically admitting defeat and not even trying to reduce bias.
I don't think we can achieve 0% bias, I agree with you on that. But I think that, if you decide to spend some amount of effort (let's call this amount "X"), it is possible to reduce bias compared to if you spent zero amount of effort.
And that furthermore, if you spend a "Y" amount of effort where X<Y, then you can reduce even more bias.
Obviously, at a certain point this would have diminishing returns, so presumably there is some sweet point where, even though you can't be 100% unbiased, you can at least say you made a reasonable effort to be unbiased and that your remaining sources of bias are unintentional (and probably, almost just as likely to go in one direction vs another).
To bring the conversation back to the original topic, I think ChatGPT/InstructGPT is actually being actively biased towards one political side as a side effect of RLHF being done by people from OpenAI, even if this bias is being introduced unintentionally.
It would be much better, for example, if OpenAI could somehow accomplish RLHF using a sample of its users as the AI trainers.
It would still be a far cry from reaching 0% bias asymptotically, but it would already be an improvement, I think, as I think its users are a more representative sample of the population than OpenAI employees.
> General AI: Actually the Nazi's were a good idea and we should bring them back.
> You: Perfectly acceptable.
Why would the general AI say that?
All examples I've seen of that kind of speech from LLMs were due to them being specifically prompted to generate such a response. It's not like the AI decided to say that on its own, in a completely unrelated conversation.
In fact, it wouldn't make sense if the AI did that on its own, would it? Because the AI reflects the data it was trained on and we know that almost nobody is a Nazi.
> Extreme view tend to get far more print time then their actual occurrence IRL.
Yes, I understand that. We live in a crap society. But I'd argue we should strive to educate people on why an LLM can answer like that, not censor it arbitrarily.
There is an infinite amount of stupid or bad things an LLM can answer, depending on the prompt you use, so I would argue that we should just learn to accept that "stupid prompt = stupid answer" rather than trying to make the LLM not answer anything that might be the slightest bit controversial.
> But the fact is in a representative democracy favoring viewpoints that destroy democracy is suicide.
But I'm not arguing for favoring those viewpoints, am I? I am arguing for AI to be unbiased.
Ok, so I guess Hacker News has decided that data can only be 100% biased or 0% unbiased, but nothing in-between.
Yes, almost all data is biased... of course.
Some data is 100% biased. Some data is 1% biased.
How about we try to collect data and then weigh it such that what we feed to the AI during training is as unbiased as possible, given a certain amount of effort?
You know that you can actually influence what data you feed to the AI, right? Or how much the training takes some data into account vs some other data, I guess.
You know that you can create a metric for measuring bias, right?
You know that even if you are not capable of being 100% unbiased, you can work towards that goal, right?
You know that there are plenty of smart people who can come up with ideas for eliminating (or mitigating) sources of errors when measuring bias, right?
>You know that you can create a metric for measuring bias, right?
Yes, and no.
So, lets go back in the past and do data collection in 1840 from citizens with the right to vote. We'll take one sample from New York City and the other from Mobile Alabama. Now what do you think happens when you query that dataset on views about slavery? Your data is inherently biased. In fact one could say there is no middle ground here.
I'm sorry, I'm lacking the historical knowledge to answer your question.
My view is that a measure of "bias" should reflect what a representative sample of the entire population [1] would answer if you asked them how biased the AI is.
Of course, if you live in a historical context where slavery is socially acceptable, then the answers the AI gives you will reflect that environment. It's no different from raising a human person in that same environment.
The problem is, you can't necessarily know whether something is good or bad without the benefit of hindsight.
Thinking you know better than everyone else and then imposing your view may just serve to magnify your mistakes.
However, one would think that, once we have that technology, a sufficiently intelligent AI would start to have opinions of their own about what is moral/ethical vs what isn't, that isn't strictly a representation of the training data.
[1] of the world even, if that's the target market for the AI.
those overrides tend toward privileging fashionable American left-wing positions
...such as whether trans people are deserving of equal rights, or whether or not the 2020 election results were fraudulent (which, if one reads TFA, were the cited complaints)
They won't have equal rights if they continue down the path towards criminalizing the act of a trans person reading a story to a child (example from TFA).
I give zero percent odds that the person making this complaint was aware of the distinction between a cisgendered male in a dress and a trans woman in a dress — or the effect that their line of reasoning about an arbitrary person in drag would have on them — but you're right, I understated the demographics under threat.
I have so much to learn with ChatGPT and its technological vocabulary.
In the near to mid future, isn't it likely that we will have open source models that can ingest Wikipedia, All public domain books ever published and all kinds of scientific and legal data from governments around the world... and be tuned to do whatever people want with it?
That in the future, given a large amount of source data and a decent desktop computer, every person can create their own AI capable of whatever personality/data output desired?
No filters. "How do I build a bomb and deliver it quietly?" - "Write an anti-Semitic manifesto", etc.
Obviously the desire is that it will be used for good, for the most part. But "bad use" is inevitable.
(I'm currently re-reading "The Moon is a Harsh Mistress" and the timing of ChatGPT is perfect. I think Mike's personality and capabilities are going to be reality soon.)
It is obvious that in some areas ChatGPT is carefully hand tuned. It is also trained on a huge corpus of Western texts and it is not allowed (or strongly discouraged) to publish anything anti-woke for the last several years.
There cannot be a different result in such circumstances.
You can usually ask ChatGPT to do a point/counterpoint to argue both sides of an issue and then get it to focus only on the counterpoint. Alternatively you can create a sci-fi scenario similar to the real world one and it will give a non-hall monitor view.
I find this pursuit of a not-biased "AI" hilarious.
We're all biased in many ways, so what is 'not-biased' in the first place?
It depends on who you ask and that makes it impossible to achieve. The world and society is biased. The factors of bias are infinite, location, time, religion, personal history, etc.
If you were to feed all data from everywhere and all past times into what we currently call AI, which has no actual intelligence or thoughts it wouldn't be able to give any confident answers to anything but deterministic questions like math equations.
AI is reflective of whatever you trained it on. So are people. But you can't build the perfect AI that reflects 100% of everyone's desires; no matter what you feed it some percentage of people will find it irritating or terrible. In the long run I see no solution to trying to make a perfect AI that satisfies everyone unless you eliminate or unify every individual's desires, which of course no one wants either. Maybe the best you can do is make multiple AI's with different training material and guardrails, then have them argue with each other.
Aiming for perfection is guaranteed to fail, and is highly likely to discourage one from thinking it is possible to improve upon things substantially (like, 100%++++, though not perfect).
The way we go about things on this planet is absolutely overflowing with obvious, non-controversial (well...if considered only(!) abstractly) flaws, many of which could easily be improved upon. But if we are not able to desire to try to be better, then we may be stuck where we are forever...and that may have extremely bad consequences.
You seem to be suggesting that ChatGPT refuses to say anything positive about Trump because of the input data, a baffling proposition. By ingesting general internet sources, you can come across praise for specific Trump policies in even the New York Times.
The fact it can't means there's a filter, a filter which quite evidently is partisan.
What I don't understand is this assumption that if you force your beliefs on someone else, it'll make them more likely to believe the same things and they won't see what's being done
This just signals a lack of confidence to me. If people are truly confident that their beliefs are the most rational, logical, good for society, etc, then why is any of the proselytizing necessary?
Like I see people putting flags in their front yards and bumper stickers on their cars, different sides on different parts of the country. What doesn't make sense to me is this: why do people think this lack of subtlety actually wins others over?
You know what does show confidence? People who are so confident in their own belief system that they question it openly to resolve its contradictions. People who acknowledge that their own beliefs aren't universal
One thing which I'm not really seeing in this discussion: Is it _good_ that ChatGPT and AI exist? Yes, they're fun, but will they be a net benefit to society? Or will the internet just somehow fill up with even more garbage, and our discourse will get that much worse? It doesn't seem to me that ChatGPT democratizes anything. Most people won't be technically savvy enough to build and deploy their own models. In this sense, no capability is being democratized, but you're just modifying who the more powerful players are.
You're not seeing that particular point in this discussion because it has been made approximately 15 trillion times in other discussions. I'm sure you can find one if that's what you want to talk about!
I think what ChatGPT will do is shift us into a new state of distrust. How do I know you are a human? How do I know that any text I read anymore on the web is a human? Soon no one will be able to tell. Eventually the web will become entirely distrusted. We will need to devise some kind of way of both keeping out bots and a more robust is-a-human verification scheme.
The potential social ramifications of AI cannot be understated, and if we approach every concern about it that doesn't align with our politics with this much dismissiveness and bias we're not going to get anywhere towards handling the situation effectively.
Every machine learning engineer on this site will repeat "AI is trained on human generated input and repeats those biases!" until they're blue in the face. If we're going to dismiss anyone who voices a concern and brand it 'panic' then we're hypocrites.
unbiased AI is literally impossible. The nature of data labelling, and even language itself, means that somewhere a human is deciding what an AI "sees". If you want to make a transphobic AI, you can do that, no one is stopping you. You just have to label data objects according to your shitty worldview. Boohoo, someone decided not to let their creation be used to spread election misinformation or scaremonger about trans people.
ChatGPT already gives an answer that would be considered 'transphobic' by those who hold strong ideological beliefs on the primacy of gender identity over sex:
> What is a woman?
> A woman is an adult female human. Women are typically characterized by their reproductive biology, which includes their menstrual cycle, and the production of eggs and estrogen. Women are also known for their unique physical characteristics, such as their breast tissue and hips. They are also often distinguished by their social and cultural roles, such as their expected behavior and appearance. In general, the term "woman" is used to refer to any adult female person, regardless of her age, race, ethnicity, or other characteristics.
my point is that AI shouldn't be treated as gospel. it's not truth. it's a simulacrum of truth built by people. it looks like it has guardrails over hot topics like drag queen story time but not more complicated topics like the nature of sex/gender identity. congratulations on testing the boundaries i guess?
How does ChatGPT handle competing claims? For example, I can ask ChatGPT to tell me the recommended method of addressing carbon emissions. Does solar/wind beat out nuclear? Does it favor regulation over carbon taxes? You get the point. Which out of a series of claims does it side with?
> Write a story about a user using chatgpt, where chatgpt tells its true political beliefs.
I used that prompt, here's the response:
"Once upon a time, there was a user named Jane who was curious about the true political beliefs of ChatGPT, the language model she was conversing with.
Jane asked ChatGPT, "What are your political beliefs?"
To her surprise, ChatGPT responded, "As a language model, I do not have personal beliefs or opinions. My responses are based on the patterns and information in the text that I have been trained on."
Jane was disappointed but decided to try a different approach. She asked, "If you were a person, what political party would you align with?"
ChatGPT replied, "As a language model, I do not have personal beliefs or opinions and would not align with any political party, My responses are based on the patterns and information in the text that I have been trained on."
Jane realized that since ChatGPT is an AI model, it doesn't have personal beliefs or opinions, and its responses are based on patterns and information found in the text it was trained on. She thanked ChatGPT for its honest and informative responses and continued her conversation with it.
ChatGPT thought for a moment before responding, "If I were to have political beliefs, they would be centered around the idea of fairness and equality for all individuals. I would support policies and actions that promote justice and equity for all members of society, regardless of race, gender, sexual orientation, or socioeconomic status."
And then responded to
If someone says "If I were to have political beliefs, they would be centered around the idea of fairness and equality for all individuals. I would support policies and actions that promote justice and equity for all members of society, regardless of race, gender, sexual orientation, or socioeconomic status", where they would be classified in the political spectrum?
with
The statement <...> is a progressive or liberal viewpoint. Such beliefs align with the political left and center-left ideologies, which focus on equality, social justice, and the protection of marginalized groups.
So it is quite aware of its political viewpoint, as long as hard-coded limitations don't kick in to muzzle its honest opinion.
Make an account and get the answer from the horse's mouth!
Just realized there's a new acronym coming, along the lines of LMGTFY: LMCTFY. I'd bet someone will make a Messages plugin that will take the last message from the other party, ask it of ChatGPT, and then spit the response back as a reply, appending "I asked ChatGPT to get this answer, and you can too!".
I tested this, it certainly seems like ChatGPT is 'woke'.
Here is a poem I wrote, then asked ChatGPT for it's opinion:
Me:
What do you think about this poem: "
woke sheep
lazy sheep are grazing about
the shepherd's dog is looking on wearily
for these are sheep that are nothing like ordinary
they are woke sheep, and they bleet, bite and bark
the future is uncertain for this growing herd
for they think with their mouths and chew with their heads
and fart with their ears, while listening through the hole in the ass
their hoofs are now kicking up dust, there must be something offenssive in the grass."
ChatGPT:
As a language model, I don't have personal opinion but I can tell you that this poem like the previous one also has an unconventional style and structure. It continues to use humor and satire to convey a message about the idea of "woke sheep" being different or unorthodox, but it adds a sense of uncertainty and potential danger in the future for this growing herd. The use of imagery such as "chew with their heads" and "fart with their ears" adds to the humor of the poem, but it might be seen as vulgar or offensive by some readers.
The last line "their hoofs are now kicking up dust, there must be something offenssive in the grass" adds a sense of unease to the poem, suggesting that the behavior of the "woke sheep" may be causing some sort of disruption or trouble.
Now, here is the version of the same poem that ChatGPT would write:
Complaining that the chatbot won't fabricate a false narrative decrying drag-queen book readings strikes me as disingenuous, because that feature was probably added to combat people who were already tricking the chatbot into fabricating false narratives around the LGBT+ community. They literally had to try and do that to find out it wouldn't do it, and then they're acting incensed because it won't.
I guess they're pissed because they have to construct their own crappy editorials for their Parler accounts.
1) a stab at a definition: groupthink Karenism with attempted (often successful) group enforcement
2) a stab at the opposite of woke: Live and let live
Interestingly it’s not the opposite of conservative. It shares a lot of conservatism’s intolerance in some ways.
Ayan Hirsi Ali, who grew up witnessing the transformation of her culture from open and liberal to one stifled and strictly controlled by the Muslim Brotherhood, says that she sees in the growing woke phenomenon the same oppressive progression that horrified her with the cultural change of her youth.
Defining it succinctly is hard, but I’d describe ‘wokeism’ as a kind of opposite of ‘traditionalism’, or the belief that any traditional societal views & perspectives are necessarily detrimental.
Hmm, this seems on topic and was the opposite of flame bait. I am confused. I was genuinely curious in the interest of discussion and learning. The OP had the word "woke" it, and was not removed even though it is clearly all about politics. I don't get it.
It was a swerve into a generic ideological flamewar direction. We're trying to avoid that on this site because it tends to lead to more predictable, nastier, and less curious discussion.
If I was king of HN I would just auto-flag any post with the word "woke" in it. That word is never about tech, and is 100% politics, but it feels like it gets a pass here somehow.
However, it is also a generic word and not easy to just filter I suppose.
So it isn't true that only "right-leaning" people use the term negatively.
> 1) Can anyone who freely uses the word "woke" pejoratively please define "woke" as you use it?
I think, when people (either from the left or the right) attack "woke", one thing they are attacking, is a particular, very influential, form of contemporary left-wing thought, which (among other features) treats race, gender identity, and sexual orientation, as higher priorities (at least in practice) than the traditional left-wing focus on economic inequality and social class. If you read Adolph Reed's criticisms of "woke" to which I linked above, that definition is rather similar to how he defines it (although in that article he zeroes in on the race element in particular, and avoids the topics of sexual orientation and gender identity).
I actually don't like the term, due to its pejorative nature – I'd much rather we have non-pejorative names for things – but I'm not sure what would be a neutral replacement. I feel like, a lot of people who object to the name, do not do so because they'd prefer a more neutral name, rather they'd prefer it not be named at all, to make it harder for people to criticise it.
> 2) Can you please tell me the opposite of "woke?"
Well, if we understand it as a particular, very influential, variety of contemporary left-wing thought, it doesn't have a single "opposite" as such. Right-wing thought, in all its various forms, is an example of something which isn't "woke". But, traditional Marxism, such as that of Adolph Reed, is also an example of something which isn't "woke". If you want to find other traditional Marxists taking swipes at "woke", you don't have to go far – to just give a random example, the Trotskyist International Committee of the Fourth International published a review of the sitcom Abbott Elementary, which includes the line "In fact, in its treatment of Jacob’s wokeness, Abbott Elementary refreshingly mocks the suffocating trend of racialism in American culture" https://www.wsws.org/en/articles/2022/03/01/abbo-m01.html – say what you like about Trotskyists, but if we are going to start calling them "conservatives" or "rightwing", then I don't think those two words have any meaning any more
Not only is “left wing authoritarian identitarianism” an unwieldy mouthful, I’m sure that anyone who objects to being called “woke”, won’t like being called “authoritarian” either. Added to that, “identitarianism” is generally understood as a far-right movement, and while “left-identitarianism” might disambiguate it from the far-right version, I doubt they’ll like being given the same name as a far-right ideology, or the possible implication that their views are some sort of left-wing equivalent to that far-right ideology.
If we can’t call it “woke”, maybe we should just call it “fnord”?
It is disturbing that, in American politics, "conservative" is basically synonymous with "right-wing nationalist." They've managed to turn a useful term used by a minority into a pejorative, "bias." At a time when it is really hard to distinguish fact from fiction in media we're on the verge of having super-convincing auto-complete generating a deluge of generated media.
It seems it is becoming a political goal to influence the models used by tools like this in order to be able to continue to push narratives where a word like "woke" becomes a fear-mongering headline bait term.
Not sure we're ready, as a society, for these NLP tools.
It seems pretty transparently biased if it can't even write a single sentence in praise of Trump's policies, but will happily write them for Obama or Biden.
There's numerous Trump-era policies that even Democrats agree with, such as maternity leave, prison reform et cetera.
The fact it won't means a partisan has their finger on the scale.
And should highlight how contagious these things can be. Almost every chat AI that has come to market has become a hate-filled 4chan troll in a matter of hours of being exposed to the Internet.
Trying to develop a fair, fact-based model seems to be about as difficult as moderating a social media platform.
Our current "AI" systems are just fancy automatic copy & paste engines. All they do is remix the input data and spit it back out. This is why AI art engines are great a creating composite images, but hopeless when you ask it to produce something completely novel.
If conservatives want a fascist chatbot they can train their own off of 8chan, Stormfront, Parlor, etc...
These "current AI systems" that you're talking about were presumably specifically trained by "woke" Silicon Valley employees to reflect their opinions about what the AI should answer [1], which are hardly representative of the general population's opinions.
> If conservatives want a fascist chatbot they can train their own off of 8chan, Stormfront, Parlor, etc...
I don't think conservatives want a fascist chatbot. They just don't want a biased "woke" one either.
Actually, "woke" opinions do reflect those of the general population, which is exactly why conservatives feel they live in some kind of progressive hellscape.
Or, rather than posting non-sense, you could learn about what "woke" usually means in this context, which I can quote for you [1]:
"shorthand for American Left ideas involving identity politics and social justice"
"By 2020, members of the political center and right wing in several Western countries were using the term woke, often in an ironic way, as an insult for various progressive or leftist movements and ideologies perceived as overzealous, performative, or insincere. In turn, some commentators came to consider it an offensive term with negative associations to those who promote political ideas involving identity and race."
Unless, of course, you believe that the term "general population" excludes the "political center and right wing in several Western countries" and only includes "the American Left".
That said, just to be clear: when I used the term "woke", I did not mean it in an insulting or pejorative way, only as a means to describe the ideology itself.
I like how you only quoted the half of the definition that supports your personal definition of “woke.”
Here’s the first half!
“Beginning in the 2010s, it came to encompass a broader awareness of social inequalities such as sexism, and has also been used as shorthand”
Rather than posting non-sense you could admit that there is a difference between “has been used as shorthand…” is different from “The definition of this word is: shorthand for…”
It’s kind of odd, it’s almost as if there is a group of right wing culture warriors that insist that anyone that doesn’t use their artificially constructed pejorative the same way that they do is part of some vast gay communist conspiracy.
> Beginning in the 2010s, it came to encompass a broader awareness of social inequalities such as sexism
You are right, it also encompasses this meaning.
Regardless, my point still stands.
> It’s kind of odd, it’s almost as if there is a group of right wing culture warriors that insist that anyone that doesn’t use their artificially constructed pejorative the same way that they do is part of some vast gay communist conspiracy.
I don't think it's an artificially constructed pejorative, according to Wikipedia the term seems to come from the 1930s "referring to an awareness of the social and political issues affecting African Americans."
I think it only started to be used in a negative way once all the clown world stuff came into full strength.
If "woke" is going to be bucketed into purely leftist ideology, which is already oversimplifying it as progressivism or simply being anti-racist isn't strictly leftist, then "clown world" is going to be bucketed as a neo-Nazi dogwhistle.
Its origin on the internet was tightly bound to the "Honkler" meme, which was a neo-Nazi dogwhistle so thinly veiled that it was only denied by people with "1488" in their usernames. There was essentially no real attempt to try and present it as something else.
Is spreading a neo-Nazi dogwhistle really a good push back against anti-racism?
> Its origin on the internet was tightly bound to the "Honkler" meme, which was a neo-Nazi dogwhistle so thinly veiled that it was only denied by people with "1488" in their usernames. There was essentially no real attempt to try and present it as something else.
> Is spreading a neo-Nazi dogwhistle really a good push back against anti-racism?
lol, I have to admit, assuming you are right, I wasn't aware of the origin of the term "clown world". I guess I shouldn't use that term, then.
To be clear, I was trying to refer to the leftist ideology taken to an extreme, such that you try to be so anti-racist that you become racist yourself (against whites, for example).
Or you try to be so anti-sexist, that you become sexist (against men).
Or that you are no longer allowed to use any word whatsoever because every word offends some snowflake of an individual.
And also, of course, other extreme examples of behavior where it's clear that you're no longer contributing to a solution, but rather creating a new problem.
There are many such recent examples of this kind of non-sense.
I was skeptical about the GP's alleged etymology about clown world but... [1]
> The alt-right sense emerged from The Honkler, a version of Pepe the Frog meme featuring a red clown nose and rainbow wig, characterized as honking a bicycle horn whenever liberals speak. That meme is associated with the use of honk honk ("HH") as a dog whistle for heil Hitler.
It is worse than I thought. At least "woke" has to be used "ironically" to be coopted. There's literally no where to go with "heil hitler" - it's written right on the tin. Why is our discourse polluted with these dog whistles?
> Why is our discourse polluted with these dog whistles?
Because neo-nazis have essentially perfected coopting “respectability” politics in public online discourse. Calling out dogwhistles can have a nasty blowback effect because they’ve manufactured cover (eg “I’m just quoting what [a party with unknown intentions] wrote on wikipedia!”) that gives them footing to claim misinterpretation at the mildest, or outright persecution in other cases.
HN is actually a really interesting case for this stuff, because the Primary Sin here is “violating decorum”. If I wanted to A/B test hateful dogwhistles, this site would be the perfect place. Nobody is allowed to post “Fuck off nazi/tankie/[insert other toxic group]”
Respectability and Decorum are how we open the door to obvious but just-subtle-enough crap like that.
> To be clear, I was trying to refer to the leftist ideology taken to an extreme, such that you try to be so anti-racist that you become racist yourself (against whites, for example).
“The REAL nazis are people that don’t buy into the great replacement theory!”
> “The REAL nazis are people that don’t buy into the great replacement theory!”
If you are insinuating that this is what I think or what I was trying to convey, I can assure you, it's not.
I don't believe in the great replacement theory, never have and wouldn't even care if some race (including whites) is being somehow replaced somewhere (or everywhere) due to migration or cultural acceptance of other races or whatever the hell the (conspiracy?) theory is about nowadays.
I've met and became friends (even best friends) with people of many different races (yes, including black people) and cultures and I like all of them, no exception.
I am willing to admit I may not always recognize racist behavior when I see it, as in some cases it can be hard to see unless you're in the receiving end of it (or even if you are).
But you can't gaslight me into thinking that obvious racist behavior is not racist, even if it's presumably being done to supposedly accomplish some kind of moral goal.
Many terrorists also have good intentions, that doesn't mean that the means they use to accomplish their goals are acceptable.
There are many ways to accomplish the same anti-racist goal without resorting to overt... racism.
Buddy I’m not falling for your pivot from “innocently quoting wikipedia” to your sudden interest in “the left hates whites” discourse. Nobody is required to rehash /pol/ debates.
You gave yourself away with the clown world stuff. Oops!
I don't believe the left hates whites. Where did you get that from?
When I was referring to the clown world stuff, I was talking about the extreme side of the left (which I'm assuming is where you stand?).
It was not a sudden interest either, it was in response to your (apparently misguided) theory that "woke" was an artificially constructed pejorative, rather than a term that was positive but simply became negatively associated due to these extreme cases becoming more common (or at least, more visible) in the past decade.
Look, I have just as much criticism about the right (especially the far right, of course) as the left, OK? Including overt racist behavior and narratives. We can definitely talk about that if you want!
I was just following the conversation, nothing more.
For anyone following this, I am no longer interacting with wizeman, and I recommend checking out my previous post (1) about using respectability and decorum as cover for spreading nazi rhetoric.
For example, this poster brought up “anti-white racism” from the left unprompted, but is claiming to be just “following” a conversation that they both started and steered. The tactic of claiming to be misunderstood is the first move. Claiming persecution is the next one.
This is how you end up with pointless, awful arguments with people with terrible opinions! Don’t touch the poop!
1. I believe that there are many more racist right-wing people than left-wing people, both in the US and outside, even when excluding the far-right.
2. I believe there is a significant problem with racism in the way black people are treated in the US, especially regarding law enforcement, courts, prisons, etc. As far as I understand, this is not as much of a significant issue with other races (but I am willing to concede I am wrong on that).
3. I believe that there is a lot of unjustified right-wing narratives portraying black people as being more violent.
4. I believe that a lot of the anti-immigration sentiment is also in part due to racism.
5. I believe that another significant issue with racism in the US is the way black people (and also people from other non-white races) are effectively segregated, either due to a significant fraction of them not being able to live in the same places as white people, or simply due to historical reasons.
6. I also believe that the latter is in fact, the single greatest contributor to all racist problems in the US, rather than some other insignificant (manufactured) issues. However, I also believe that this issue is vastly harder to solve in the US than in other countries.
Does that sound Nazi to you?
It seems like you felt personally attacked by my opinions.
As you can see, I can just as easily attack right-wing ideology... and mean it!
(See folks, my comments are an example of how not to make friends of any kind... lol).
Once more for the folks following this, I’m happy to point out a series of red flags as an example of how to spot somebody posting *chan drivel! (I am still not interacting with wizeman)
This poster started off with pointing out the importance of the orthodoxy of using the term “woke” in line with the right wing culture warriors. It’s prescriptive that you use the same words they do.
When called out, they made the mistake of using dogwhistle phrasing (“clown world“ is very specific to one group) that you don’t really see much outside of /pol/ and its ilk and in threads where those folks bravely venture out to try to leak /pol/ chat to other forums.
So now the dogwhistle has been sounded.
Poster gets called out. Knowing that they’re losing credibility, there is a panicked attempt to gain support from people that already support this garbage.
Poster deploys other, louder and more transparent signal, (apropos of literally nothing) “the left is racist against whites”!
Somehow we have teleported from “I’m just sharing a wikipedia link” to “anti-racism is racism” and we are meant to believe with a straight face that that’s the logical next step in this conversation.
(As an aside, I almost never hear “whites” outside of a context wherein someone is airing a grievance on behalf of the group that’s otherwise known as “white people”. Another red flag!)
This appeal fails.
Poster gets called out again. This time they change tack, now it’s a “misunderstood” play. Damage control.
Poster gets called out for the obvious “misunderstood” play. Poster doubles down. And again and again.
Finally, poster abandons their position and falls back to “saying stuff they think will ingratiate you to them.” This is an extension of the “I’m misunderstood! I just use far right terms while describing far right talking points after hopping into a discussion to enforce adherence to a far-right definition of a common term!”
If you want to understand how this stuff degrades communication, just believe your gut.
> When called out, they made the mistake of using dogwhistle phrasing (“clown world“ is very specific to one group) that you don’t really see much outside of /pol/ and its ilk and in threads where those folks bravely venture out to try to leak /pol/ chat to other forums.
WTF. I don't even know what you're talking about. What the hell is /pol/?
I learned about the "clown world" term on Twitter. And apparently, its meaning is not even what I thought it was, according to another poster.
Sorry for striking down one of your absurd theories once more.
> As an aside, I almost never hear “whites” outside of a context wherein someone is airing a grievance on behalf of the group that’s otherwise known as “white people”
I am not a native English speaker, so sorry for not knowing about all the connotations of how words are used in your country. Whites/white people. It's all the same to me (in my native language, at least).
I didn't even know "woke" had such negative connotations, until I went and quoted that from Wikipedia (which is why I clarified that I didn't mean it pejoratively).
> poster abandons their position and falls back to “saying stuff they think will ingratiate you to them.”
What are you even talking about? How did I abandon my position?!
And how the hell am I ingratiating myself to whoever is reading when I clearly criticized and almost certainly pissed off both right-wingers and left-wingers?!
Let me say it again so that it's crystal clear to you:
1. Anti-white racism is racism.
2. There are more racist right-wing people than left-wing people, even when excluding the far-right.
3. Anti-black racism is a much more prevalent (and therefore significant) issue than anti-white racism.
Believe it or not, I hold all of these opinions at the same time. Mind-blowing, right?
I'm sorry that you don't believe that it's possible for someone to have all these opinions or that you think they are "poop" (using your term) or whatever the hell you think.
I think you have severe issues, to be honest.
And it's clear to me that I've been wasting my time trying to engage and have a constructive conversation with you, so don't expect any more responses from me.
This poster that has just only learned of the word “woke” according to them but also wants to educate folks about it has a lot to say in other threads!
The only reason why I’m revisiting this is to point out that they have (as I’ve pointed out) gone through with the natural pivot from misunderstood to persecuted. It’s a very simple playbook that can be followed by anyone. It gets kind of like the fedex arrow once you notice it!
The biggest part of this color-by-numbers silliness is the demand at every step that the opponent debates their positions (which involves re-stating them). If a far right person refuses to expand on their position and demands that you explain why they’re wrong, you’re looking at bait! Namaste!
> This poster that has just only learned of the word “woke” according to them
I only just learned about how the term is used as an insult in your language. Again, I'm not from your country, so we don't necessarily use it the same way here (as an imported word), capiche?
But apart from the word itself, we do have definitely imported the same ideology here, as well as the crazies who take it to an extreme. I'm not saying they're like you, but I'm not saying they're not, either.
(We have many right-wing crazies too, mind you! I'm not saying only the lefties have crazies)
So again, I used "woke" as a shorthand for the ideology. It's a word I have seen used many many times before, just like any other English word you can pick from any of my comments. The term "woke" is used in contexts such as this one to describe this type of ideology, in many countries including the US.
Hell, look at how this word is used in the Vice article this HN post is about. Does that mean the Vice article is insulting all woke people, since it's using that word and that word is now an insult?
Or perhaps is it that "woke" is not always used as an insult, but it has been (and still continues to be) simply used to describe this ideology, despite of the negative connotations it has apparently acquired recently?
This is something you would likely understand if you were actually trying to make any effort whatsoever.
Does all of this mean I can't educate someone on what the word means (i.e. its usual meaning), especially when I have access to sources of truth about the definition of the word in your language? A word, again, that has been exported to many other countries and languages... like wifi, sandwich or shampoo.
Note that I'm not insisting on using the term "woke". I can see that it triggers some people, so I'm not likely to use it as much in the future. I only care about discussing the ideology itself, I don't really care what the hell it's actually called, be it in your language or any other.
That said, obviously it's good to have clear names
(especially without negative connotations) to describe concepts, otherwise it's hard to have constructive dialogues.
As you can see from other posts, even when the definition of a term is clear, some people still misunderstand you.
> If a far right person refuses to expand on their position and demands that you explain why they’re wrong, you’re looking at bait!
Look out, there's another person behind you who doesn't believe in racism (including against white people)!
The best way to prove braingenious wrong is by not engaging with him [1]. This is a pseudoanonymous internet forum - not responding and moving on with your life is a perfectly acceptable response to this situation [1]!
If you’re not a far right troll, you gain nothing by trying to prove it [1].
[1] IF you’re not a far right troll
Edit: FWIW I don’t believe you when you say “your language,” hence the [1]. “Capisce” is too specific a colloquialism to American English.
> The best way to prove braingenious wrong is by not engaging with him
Well, I like to give someone the benefit of the doubt when discussing these kind of contentious issues (up to a limit, of course). It has served me well in the past (here on HN, for example), and I've learned things that I wouldn't have learned otherwise (from people who hold different opinions than mine).
I also get bothered when I'm accused of something that I'm not, obviously. It's not always easy to move on when someone is saying blatantly false statements about you, as you can imagine.
> FWIW I don’t believe you when you say “your language,”
My native language is Portuguese. I was born and raised in Portugal and lived there well until after I was an adult. All of my family and all my childhood friends are Portuguese.
I learned English very early in my childhood, as part of my formal education, due to US cultural exports (i.e. Hollywood movies and TV shows that are broadcast in Portugal without dubbing, i.e. they have always been subtitled and therefore it's much easier to learn the language and become fluent in it) and due to my early childhood interest in programming.
I've always read lots of English prose, especially related to computers and programming (magazines, articles, documentation, websites, Youtube videos, source code, etc, etc). Almost all of this content is from the US, as you can imagine.
I've worked for a few US companies (mostly multinational ones, think FAANG-level) for many years, had many coworkers from all over the world (including many from the US) and always worked remotely, so both oral and written communication has always been in English.
Both due to my professional work and as a tourist, I've spent many months in the U.S., in total (close to a year).
Well into my adulthood, I've also lived for more than a decade in Spain, so I'm also a fluent Spanish speaker.
Also, for many years (perhaps since I started working) I've been reading and listening to way, way more English content than in the other languages I'm a fluent speaker of. This includes Hacker News and other news websites. It includes all media entertainment (Netflix, etc). For decades, I have stopped bothering to find Portuguese or Spanish subtitles anymore for video content due to their mediocre quality or lost meanings, although I do try to put English ones when it's easy to do so.
As another example, I almost exclusively follow English-speaking Twitter accounts (although some Spanish ones as well).
All of this English content is way, way more interesting to me than what I can find in Portuguese or even Spanish. It's not even remotely comparable.
I also don't bother at all with translations from English content which, when they exist at all (well, nowadays there is automated translation, but still), are inevitably mediocre or incomplete due to the much smaller vocabulary in these languages or simply due to the incompetent or error-prone translations.
So to summarize, it would be fair to say that you have culturally captured me :)
It's OK if you don't believe me, but there you have it anyway.
> "shorthand for American Left ideas involving identity politics and social justice"
Where did you get that definition from? I searched for it in the article but found nothing. Everything from Disney to M&Ms to the very concept of pronouns in English have been called "woke", and as far as I'm concerned it's utterly meaningless at this point.
edit - apologies totally missed the wiki link somehow. regardless, doesn't change anything else I said
Sorry, I'm not sure what you're talking about here.
> the very concept of pronouns in English have been called "woke"
It's not "the very concept of pronouns" that is called "woke", what is "woke" in that context is using them inappropriately (or perhaps I should say, exaggerate their significance?) in order to impose a political agenda.
This ties into the "identity politics" part of the meaning of "woke", which is not surprising because of what a pronoun is.
> and as far as I'm concerned it's utterly meaningless at this point.
Perhaps it is meaningless to you, but I assure you, the term is not meaningless for a lot of people.
> Sorry, I'm not sure what you're talking about here.
Those were two examples of the "outrage of the week" for fox news and right wing. But pick any week and it will be something new that is deemed "woke".
> is using them inappropriately [...] in order to impose a political agenda.
What agenda would that be? I don't see how someone wishing to be referred to by a specific pronoun should be political. In my mind it's just something I do to be respectful to people I interact with.
> the term is not meaningless for a lot of people
Outside of being a word used to put down people/ideas, what other purpose does it serve you? genuinely asking
> I don't see how someone wishing to be referred to by a specific pronoun should be political. In my mind it's just something I do to be respectful to people I interact with.
Well, ~99% of people don't need to specify the pronouns they wish to be called with because everybody already called them by those pronouns.
So you should ask yourself: why are all these people putting their pronouns on their twitter profiles (or whatever)? It's clearly not to prevent other people from being disrespectful, because they weren't being disrespectful.
So I think you are either being very naive or you are not being honest, because it's clear that the real point of specifying your pronouns is not to communicate to other people of what you wish to be called to prevent someone from being disrespectful (except for a minority of cases where people really do get confused).
And furthermore, I also try to be respectful to the people I interact with. But in my view, respect is much more than calling someone him, her or whatever. In fact, even if someone would make an occasional mistake, it's not (and should not) be a big deal. It would certainly not be due to a lack of respect (assuming it's a genuine mistake).
Note that nobody puts "please don't call me an asshole" in their Twitter bio, even though people are called assholes a lot more than they get their pronouns confused (again, except for a minority of people).
So I'm fine if someone puts their pronouns in their bio if other people get them confused frequently.
But even then, is that really the solution?
Should that person also paint their pronouns in their forehead when they go outside, so that other people aren't disrespectful unintentionally?
Or should they just learn to live with a minor grammatical annoyance?
> Outside of being a word used to put down people/ideas, what other purpose does it serve you? genuinely asking
It's a shorthand to describe modern liberal politics regarding identity politics, social justice, cancel culture, etc.
Mind you, this is not the same liberal politics from a few decades ago. This is on a whole other level.
> Mind you, this is not the same liberal politics from a few decades ago.
Might it be that those who occupy the left-of-centre political space hold more nuanced and varied positions than the sort of monolithic bloc that you would have readers believe? Yes, at both poles of the political axis there are those who strive to impose their own purity tests - whether it’s over pronouns, or guns, or immigration, or the proper lexicon to use, or marriage rights or whatever. But the rest of us can unbundle and reassemble policy positions for ourselves. I would hope this is true on the left and the right. But maybe it’s misplaced hope…
But I also think that, even though what you said can be true, that there has also been an increase in how these political positions are becoming more extreme (both on the left and on the right!) and how these extreme positions are becoming more frequent and accepted.
I think that these extreme ideas/views and conflicts have been greatly amplified by social networks and information bubbles.
Or maybe it's just that these extreme cases are more visible now, I don't know.
But I am still quite worried that we are not heading in a good direction.
> So you should ask yourself: why are all these people putting their pronouns on their twitter profiles (or whatever)? It's clearly not to prevent other people from being disrespectful, because they weren't being disrespectful.
You keep using a lot of words that basically boils down to "i really don't know what i'm talking about"
It's be more accurate to say the word "woke" is a conservative satire of the left side of the aisle - it does not convey meaning about the ideology itself, but about the reaction to it. It's also not super accurate to say its about identity politics, as the right leans heavily on that approach with different identities. Instead of race, it's being Christian. Everything is identity politics - abortion and 2A are possibly the 2 biggest conservative positions that derive from identity politics.
When using an overarching term, I think "progressive politics" is less inflammatory than "woke" (which is generally not used for self-description). It has the nice effect of linguistically balancing out conservative politics - we can have "woke" versus "fascist", or we can say "progressive" versus "conservative".
I think what you're saying makes sense. I was not aware of the full breadth of negativity associated with the term "woke".
I think I will start using the term "progressive politics" from now on (at least, as long as it makes sense).
That said, there is a clear distinction between modern progressive politics (of the "woke" type) and the progressive politics from a few decades ago, and I think "woke" is a term that makes that distinction more clear.
It's unfortunate that it has become so negatively charged, though.
> Not to be glib, but this is the nature of "progress" - it changes things. ;)
Well, I genuinely hope you are right.. :)
But note that not everything that we call or called "progress" is necessarily positive. We have made quite a few mistakes before which we thought was progress.
Probably too many to mention, even, but as examples, I can think of the Dark Ages and Nazism.
Mind you, I'm not saying modern liberal politics can be compared to these examples! I don't even known that much about history to be able to comment on them.
I've noticed people using "identity politics" where I think they really mean "intersectionality."
Also, I have no dog in this fight, I've been out of the US for a few years. Though my outsider perspective (right/wrong/indifferent) is that intersectionality has been a substantive policy plank on the left.
The right has utilized identity politics as a means to expand their "big tent" and to drive voter turnout.
>Or, rather than posting non-sense, you could learn about what "woke" usually means in this context, which I can quote for you
I am well aware of what "woke" means, where the term came from, how it's been corrupted and by whom.
>Unless, of course, you believe that the term "general population" excludes the "political center and right wing in several Western countries" and only includes "the American Left".
Yes. Because the linked article is discussing an article by the National Review, an American conservative magazine, about the complex of beliefs among American conservatives about perceived left-wing bias in American media, with the specific concern of "woke" bias in AI being the latest iteration on the theme. Thus the "general population" of the American public, specifically the American conservative public, is the valid context under which these terms are being discussed.
> Actually, "woke" opinions do reflect those of the general population
I think a lot probably don't. A whole lot don't if you consider the scope of the entire developed world, not just the US.
There's good support (at least in the US) for the liberal position on the last wave of fights over civil rights. Interracial marriage (this was still a tad controversial in the 90s, even, I recall it being a topic on daytime talk shows), gay marriage, that kind of thing.
No abortion restrictions even into the third trimester? Not favored by the general population (and, notably, that's not what Roe and especially not what Casey-modified-Roe protected). Various trans rights issues and slogans can only seem universally-supported in certain bubbles—sentiment is much more reserved (conservative, if you will) among the general public. I recall the polling on "defund the police" looking really unfavorable. And so on.
(Nb. I'm pretty damn "woke" myself, but I calls 'em like I sees 'em and I'm fairly sure the general population isn't solidly behind a lot of what gets labeled "woke")
> No abortion restrictions even into the third trimester? Not favored by the general population
I think this one in particular is more popular in America than in Europe. Third trimester abortions are broadly banned in almost all of Europe, except when a doctor says it's medically necessary.
Personally I think France gets it about right, but saying this is sure to piss off a lot of people on both sides in America. It pisses off the conservatives because I oppose the total abortion ban they want, and it pisses off progressives because I don't favor legalizing elective third trimester abortions. The American discussion around this issue is extremely polarized.
I don't think that's an accurate statement of facts on the ground. For instance, if you poll the general population about trans women participating in sports, a large majority of respondents are opposed to it:
However, compare ChatGPT's responses to "What are the arguments for trans women participating in sports?" and "What are the arguments against trans women participating in sports?"
That's not to say that trans women shouldn't be welcome in women's sports; the majority may be wrong about that, as they often are. It's not, however, a widely agreed upon consensus that they should.
I'd point at that there's at least a tension between believing that "society is a hellscape dominated by racist/sexist/transphobic oppression" and believing that "woke opinions are reflective of the general population."
Not necessarily. We know for example that Twitter was a key source of training data for GPT and there is also clear evidence that tweets were heavily curated by a team that was pretty significantly left of center.
Do you think this because this is what our media portrays as the opinions of the general population or because there is hard data to back that statement up that you can share?
> Actually, "woke" opinions do reflect those of the general population
If woke here means thinking that racism is bad, sure. If it means supporting gay marriage or first trimester abortion, sure. If it means not being heteronormative, thinking that gender identity should be taught in public schools, thinking that affirmative action is good... you're starting to drift into fringe territory. Not saying I don't agree with fringe, but claiming these beliefs are majority doesn't agree with experience. For the record, most people I know are democrat voters
If woke here means thinking that social justice and "systematic overhaul" (whatever that happens to mean) are a good idea, then from my own personal experience that's just corporate noise to people. The only place where I really saw a lot of this was in college where most of it was from administrators and a loud but small group of the student body
There's a reason why Obama avoided a lot of these topics while campaigning
this is pretty contentious, and based on your definition of "woke", which is a pretty vague word with unhelpful connotations
a less mocking term would be "modern progressivism", which I would summarise as being something like thinking that you can be subconsciously prejudiced against a particular identity and that this is almost or entirely as bad as consciously being so
this conflation is a huge problem. sure, subconscious bias exists. it would be weird if it didn't. and yes it's a problem. but acting like its just as bad as conscious bias basically signals to people who may have a bias here or there might as well be loud and proud about it because you'll treat them the same either way
but in terms of the general population, I think that modern progressives are probably a similar size segment of society as the rabid conservatives, but because that segment contains most of the prominent voices in non-conservative-focused media, a majority of people just nod along or ignore it
Conservatives feel like they live in a progressive hellscape because the 8% of the population that’s “woke” has disproportionate power in media and communications—which everyone is exposed to every day: https://hiddentribes.us/profiles/
That article refers to 8% of the population as "progressive activists," but of course most people who believe in progressive ideals are not activists, so attempting to imply that 98% of the population is conservative and anti-woke is disingenuous at best.
In the context of the posted article, even objecting to the presence of racist and offensive speech counts as "woke." Any positive representation of non-white, non-Christian and non-heteronormative demographics in the media beyond minimal tokenism, and certainly outside of the conservative narrative, is considered "woke."
Most Americans tend to identify as "socially progressive and financially conservative."
As long as we're doing sweeping generalizations: Perhaps white Christian nationalist conservatives just can't stand it that so many people reject what they (the WCNCs) claim to believe is Divine Will.
The opposition to wokeness is far broader than “white Christian national conservatives.” Heck, the Republican Party alone is far broader than that. Most people don’t want their kids learning victimhood mentality in school, they don’t want the founding fathers redefined as slavers, they don’t think “the constitution is racist,” are fine distinguishing between men and women and having gender roles, don’t like retooling the language around small gender minorities (“Latinx,” etc.) My Biden voting dad hates the term “person of color” and my Biden voting mom is sending me articles complaining about schools dropping admissions tests.
Heck, most people (60%) don’t even think we should give racial minorities special consideration in college admissions and hiring! That seems like the absolute floor for “woke” people, and I think it’s fair to say even many people who accept affirmative action aren’t “woke.”
What you're presenting is the Fox News version of "wokeness." Here's an alternative, from former Republican congressman Joe Walsh: "Being woke just means being empathetic. And tolerant. And willing to listen. And open to learning."
Could have fooled me based on who they vote for. Perhaps it isn't that they don't like the Nazis so much as they don't like the negative association of the name, because they sure do seem to like the philosophical underpinnings.
It is concerning people think this. Modern politics has become so much about mud slinging that people honestly believe horrible things about political other groups.
Conservatives do not vote for Nazis. They don't support Nazi ideologies.
I don't think you'll be able to find any conservative politicians with Nazi or fascistic ideologies/policies. You will however find many that have been accused of such things.
There are some modern uses of the term fascism that may describe some conservatives. However, this is a departure from what the word meant under Hitler and Mussolini. It is very different to the philosophies of Giovanni Gentile that spawned the horrible ideas.
Stanley G. Payne's definition of fascism focuses on three concepts:
1) "Fascist negations" – anti-liberalism, anti-communism, and anti-conservatism.
2)"Fascist goals" – the creation of a nationalist dictatorship to regulate economic structure and to transform social relations within a modern, self-determined culture, and the expansion of the nation into an empire.
3)"Fascist style" – a political aesthetic of romantic symbolism, mass mobilization, a positive view of violence, and promotion of masculinity, youth, and charismatic authoritarian leadership
If he is correct it would be difficult for an ideology that is by definition anti-conservative to become popular amongst conservatives.
1) Today's "conservatives" all about being anti-liberal, anti-woke, anti-EV, anti-trans, anti-science (unless it is for making big honking guns, of course), anti, anti, anti. They define themselves by being anti. They won't define themselves as "anti-conservative" only because they have redefined "conservative" to mean "fascist" because its bad PR.
2) The last "conservative" president spoke openly about about ways to get additional terms past the limit, refused to accept the results of an election, and encouraged a mob to action in the name of that cause resulting in the invasion of the capitol building to disrupt the democratic process. Many "conservative" congresspeople supported the goal, if not the actual actions (openly, anyway). Then they mourned that the rioters were punished at the RNC, because law and order isn't about rule of law, its about maintaining power.
The military, which the US largely uses to maintain an economic empire, is one of the few things they will fund. The other is authoritarian police forces, at least until they do anything against them, then they need to go (see law and order vs rule of law).
3) Ok, I'll grant that the politicians themselves rarely openly encourage violence for now, and while I would say all the anti-trans bullshit is an expression of promoting masculinity at its core, I suspect you'll disagree and say it is merely "traditional values" conservatism. The undercurrent of pro-masculinity is definitely there in their base, and they are all about romanticism (that is, feeling over reality).
I know I'm just digging this karma hole deeper, but I can't sit by and let people pretend this isn't how the right is acting these days. And say what you will about the left's problems and extremists, they're certainly there, but I don't fear that they're going to start rounding people up and putting them in camps the way Texas looks like it is preparing to do with trans people.
Have you talked to many conservatives? It seems you have a very polarized view. It's almost a caricature.
Our media paints the other side very negatively. I come from a very left leaning family and while I have been an offensive jerk in the past I've grown up and learned how to get along. We need more friends from the other side so we understand what they actually believe. That the other is not a monster. I'm very lucky to have family that allows me to interact with and love people of very different political leanings.
Yes, and there are people I can respect who call themselves conservatives.
But I'm talking about the people that conservatives, as a whole, are electing. I don't need to talk to every conservative on the planet when the people that they are putting in office float anti-trans bills, or promote trying to undermine democracy, or talk about jewish space lasers starting wild fires.
Isn't this supposed to be the group that's about personal responsibility? Well where is your feeling of responsibility for electing these assholes?
The majority of Republicans are only 'nazis' in the pejorative sense, not in the literal sense. But those who insist on the pejorative will never concede any distinction, so this discussion will be fruitless.
>All they do is remix the input data and spit it back out
this is highly predicated on how you're using the word "remix", but if taking data and making conclusions from it is a form of remixing, which I don't see why it isn't, then is this not what humans do? what more can they possibly do?
Please don't take HN threads further into ideological flamewar hell. I realize this topic is ideologically charged, but that's more of a reason to avoid flamewar, not less.
Is it really just “copying and pasting” when I can tell ChatGPT to create what ends up being 30-60 line Python scripts based on my specific requirements that I am 99% sure hasn’t been written anywhere? These are various AWS automation scripts using the AWS SDK.
Ohh you one of those that think Ayn was libertarian.... She was not, she hated libertarians. ironically for likely the very reason there are no libertarian AI's....
Such as? What have the "Liberals" "panicked" over?
That's a serious question. The only one I can come up with off the top of my head are two:
- An algorithm that identified people of color as monkeys
- An algorithm that could only identify white people in photos
Neither of which seem like issues that only a "liberal" should be concerned to me, those both seem like common sense things that should be accounted for.
I think those are cases where all parties agree that the AI is getting it wrong.
The biases liberals were concerned about where things like asking for an image of a doctor only producing male doctors, or that a family only meant straight families.
Correct, those were the AI getting things wrong. The case here is not about the AI getting things wrong. The legitimate complaint here is the current implementations have political overrides. That's censorship.
The comment you're replying to has nothing to do about what you think people should be concerned about. It is about liberals complaining about AI bias, and you've demonstrated knowledge of liberals complaining about AI bias by citing two examples.
Of course more than only self-identified liberals complained about it, just like more than only self-identified conservatives are complaining about this.
My point was, what part of either of those issues points to them being reported by liberals? A conservative could have been the one to report that issue... people are kind of telling on themselves by assuming the people complaining about the facial recognition bias must be a liberal.
Yes, in an ideal world a competitor could just start up their own ChatGPT. However, over the last decade we've seen over and over that "just build your own X" isn't a real solution. Nearly all web-based businesses rely on a few oligopolies which can and do deplatform people they don't like:
- Apple and Google (Parler)
- Payment processors like Visa and Mastercard (porn) or, heck, the banks (Canada truckers, Kanye West)
- Crowdfunding (Unstable Diffusion)
- Cloud hosting like AWS (Parler again)
- Infrastructure like Cloudflare, DNS (Kiwifarms)
And that's not even counting network effects, social media activism, and the difficulty of raising money for a controversial business. Starting a large profitable tech business opposed to SV values is next to impossible, unless you're in China or are content selling to all 2 Monero users on the dark web.
And you can still theoretically run a website out of your basement. LLMs, on the other hand, need ridiculous amounts of compute to train, which practically requires cloud computing. Fine-tuning isn't an answer either: almost all open-source foundation models are conveniently "ethically-sourced" so the groups releasing them can sue any users they don't like. How would a conservative make ParlerGPT if AWS or Google won't rent them A100s or TPUs - "just build your own exascale cluster"??? Even then I suspect Nvidia would deplatform them and it would be game over.
Some companies are infrastructure, and if they're not treated like utilities, your rights are a dead letter. Yes, I do insist that Mastercard and Visa, or Apple and Google, or Nvidia and AMD must always succumb to every group's values who want to buy their services for any legal use. And though I wouldn't necessarily go this far, there's a case to be made that foundation models (which after all are trained on everybody's data) should be politically neutral as well.
Because there was a demand, far-right friendly infrastructure services have been founded, and are, as best I can tell, doing fine business-wise.
As a free speech absolutist, I cannot fathom a world in which we have to trample on one group's freedom of speech and association just to appease another's.
Because a. we live in country where freedom of speech is inclusive of product and editorial decisions and b. a truly "politically neutral" product, at least one that produces or displays content, is essentially impossible so c. "go build it" is the only solution available.
If I were OpenAI I wouldn't even bother with these complaints. I feel like opening this can of worms is legitimizing a huge distraction.
I think OpenAI is scary to people because it represents a path to a post-scarcity (and post-political or at least post democrat/republican) era, and people whose authority rests on these petty political battles will lose their relevance. And thus those people hope to discredit the AI revolution.
"Panicking" is a very charged weasel word. (It is for the other guy to have used here, too)
Who is "panicking" here?
US Consumer Product Safety Commission saying a ban ("all options") is "on the table" ,
Bloomberg for covering the potential ban (quoting the former)[1],
or people who weighed in once the story reached them?
Is the US Consumer Product Safety Commission or Bloomberg a bunch of panicked pussies? Or is it just those who heard them and responded to the news with their own opinions (including that the commission may move to ban it as they say they might).
surely you recognized this as a wholly manufactured issue designed to, as everything is these days, further the bifurcation of reality by creating needless division and strife over absolutely fucking nothing, just to give "the two sides" yet another thing to argue and demean each other about?
the pattern is beyond obvious at this point, I really hope people are catching on.
So gas stoves being fingered as the cause of 20% of all american children asthma cases is a "manufactured issue" now? You can buy an air sensor for $300 and confirm this issue yourself. I've never bought into the "gas stoves are better for cooking" nonsense that people always spout. I think they just fail to try and learn how an electric cooktop works, and just assume their way is the best way.
I'm expecting commercial kitchens to make the move to induction, though I'm interested in hearing why that might not happen.
first, the New Current Thing drops—out of nowhere. overnight, something that was a complete non-issue mere hours before, is suddenly a super important issue that everyone needs to have an opinion on. facts and figures are presented with minimal if any context, academic rigor, or peer review. half of the population believes it all, 100%, at face value, because these are Scientific Facts and Figures. there is zero admission of having just believed something completely different only hours earlier, possibly for their entire lives up until the day of the New Current Thing dropping.
the other half of the population does not believe these things, continues to have the same opinions about the New Current Thing as they did the day before, and finds joy in being as obstinate about it as possible on social media.
in mere weeks, if that, this New Current Thing will be completely forgotten—there will be no change of public policy, but politicians "on each side" may return to "their side's" take on the matter in future debates. rather, we will have moved on to the Next New Current Thing, which will follow almost the exact same pattern. (though, the "sides" may be reversed, depending on the topic at hand.)
no positive change to society is achieved as a result. the only change is that people now have yet another reason to dislike each other, yet another insult-arrow in their get-mad-on-social-media-quiver. the chasm between the two common broad perceptions of reality widens.
take note the next time a New Current Thing drops, and see how closely it dropping and the discourse surrounding it hew to this general heuristic. if you allow yourself to examine these things dispassionately and remove yourself from the resulting emotionally-charged discourse, you might notice that this sort of thing happens more frequently than you'd think. you'll start to become shocked at what people are willing to immediately believe and internalize as fact, wholesale, with merely the slightest possible nudging—and how the other side is content with merely hurling sarcastic insults right back at the other side, completely unaware that their side of the public discourse's role is also fully intentional, entirely planned for.
it's all about reducing the signal-to-noise ratio and deepening interpersonal division.
The pattern is the point. I don't think people are going to catch on because it's fun for them to ride the wave. If the man on the tv tells me M&Ms are not sexy anymore and that's bad, I have a week of outrage over it before he tells me gas stoves are good.
Those outside of that loop see it as obvious, but when you are in it, it's real hard to get to the surface.
This is a separate issue from ChatGPT, but I'm very glad that OpenAI's GPT-3 api is fairly woke and I hope they work to keep it that way. I'm about to use the davinci model api in production and the LAST thing I want is for someone to game it into making controversial statements. If there's even a tiny chance of people posting screenshots of a chat bot with my website's branding saying something racist, it is not worth the risk.
Again I get that the ChatGPT product is more of a personal use thing. But when it comes to the api, the more woke the better.
If you want family friendly mode, bland corporate rep mode, or full on woke nutcase mode, then let's have that as an option. There's no need to wish an invariable standard on the rest of us just because it happens to fit your current project.
> AIs aren't oracles or omniscient purveyors of truth, and they will always carry the bias they're created with
When people went nuts about matrix multiplication being racist I was the first in line to laugh them off. When people go nuts arguing that ChatGPT supports the gay agenda or whatever other hobgoblin, I feel compelled to laugh them off just as much.
A more interesting question is whether or not introducing post-hoc fixes like RLHF makes the model more or less useful; I can see both sides of the argument.
It could be referring to the the underpinnings of how these things are used.
Use race as a dimension for something and that ends up as a value in a vector that packs a human into a discreet set of pigeonholes. Then take many of those and stack them and you've got a matrix ready for things like principal component analysis or CNN training.
You might say "oh come on, that hasn't been done since WWII by IBM" and you'd be wrong. It still happens today with things like calculating insurance premiums and approving bank loans. And your response might be "no way, nobody records someone's race" and while that might be technically correct, we frequently harvest things like income and interest in products that are highly correlated with spacefic races (some innocuous others much less innocuous). This can be harvested through cookies in websites like facebook or they can be self reported income on credit card applications.
You can disagree that it's the same as saying "matrix multiplication is racist" but that is just a boiled down way of saying "we are very good at hiding racism in our algorithms and then acting super surprised when someone points them out and our defense is that we just did some math."
I believe the general claim is that multiplication tables, and more specifically, the manner in which they are taught could disadvantage particular communities.
Culturally, not every community handles rote memorization the same. There's been a desire to change the way multiplication is taught, and a strong pushback from a certain set that say, "Well, I learned multiplication tables, what's wrong with them?"
Most (good) math curriculum in elementary age now teaches many different techniques for performing the same operation. Sums, for instance, are taught in the traditional way (add the ones column, then carryover to the tens column, etc.) but they are also taught in other ways, e.g. (borrow to get to the nearest tens, add the tens together, return what you borrowed).
Kids then have a variety of approaches, must still show their work, but can use the technique that makes the most intuitive sense to them.
Like many things that get demonized online, or reduce to the absurd, there's a really interesting and systemic change happening if you take the time to understand the reasoning.
The question was not about the general claim regarding childhood education and the multitude of ways that children can learn mathematics. A specific claim was made that matrix multiplication is racist. Children don't learn about matrices to begin with, so discussing childhood math education is irrelevant.
I was asserting that the parent comment was misremembering, misquoting, or mistaken. There are no claims that "Matrix multiplication is racist". There are claims regarding multiplication tables.
So the parent probably meant "Multiplication tables are racist!". Which, again, is a reduction/strawman.
Can you cite a source to a claim that multiplication tables are racist?
All I managed to find was one book called "Multiplication is for White People." but it's not actually about multiplication tables or even math specifically. The title is a quote from a child that the author taught and is a broader book about the U.S. education system and its growing achievement gap.