hdhdhsjsbdh's comments

Evidence could probably be produced one way or the other on whether DEI impacts the performance of an operation (my hypothesis is that it doesn’t and it’s just another boogeyman). All it takes is research. But even if it were rigorously demonstrated one way or the other, few would care. The guessers are in charge.

You’re just downright uninformed then. See, e.g., https://en.m.wikipedia.org/wiki/Kathoey. Perhaps you consider it a fad because increasing visibility in Western culture, and the resulting culture war pushback, has just now brought it to your personal attention. Just because you haven’t personally noticed them doesn’t mean trans people haven’t existed for hundreds of years in different capacities all over the globe.


These are not trans people like you see in the anglosphere.

The same way Alexander the Great or Telemachus or Sappho or any other ancient Mediterranean figure was not gay in the modern sense.

Western culture's version of trans/gay/lesbian/LGBTQ+ (awful term imo) is an artifact of that culture. And just because we look and act the same as others does not mean we are. It's like lumping the Middle East into one category -- when some inside wouldn't even consider themselves a part of it.

E.g. just because I enjoy sucking dick does not mean I'm part of the [insert country that I have nothing in common with]'s gay community. And frankly I don't want to be lumped in with it.


I think it's really closed-minded to stereotype millions of trans people and treat them like a monolith. Culture shapes behavior, sure, but trans people globally operate under similar constraints. Whether or not you consider yourself part of the global or <insert country> LGBTQ+ community is pretty irrelevant to bigots. There's no way for you to appease them or become "respectable" in their eyes, even if you throw trans people under the bus. You're always gonna be a f*g to them.


> but trans people globally operate under similar constraints

Wrong. Absolutely not. There is nothing similar between the lives of a trans person in the Middle East or South Asia and those in the West.


> There is nothing similar between the lives of a trans person in the Middle East or South Asia and those in the West.

I don't have first-hand knowledge, but the actual trans people I've heard from who have lived in both South Asia and the West, and who are in community with trans people on both sides of that divide, do not echo that sentiment. In particular, the upthread idea that "These are not trans people like you see in the anglosphere" has been attributed to a combination of an extremely similar process of third-sexing in both South Asian and anglo cultures (just under different local names), plus orientalism applied by anglo observers to South Asian cultures and the third-sexing going on within them.


Perhaps that's true of your version of bigots, but I can personally handle the ones I interact with much better with a different approach than -- paraphrasing -- "fuck all bigots."

I think the absolutist path is wrong. I don't care about the respect or acknowledgment of the monolith of the "other." I do care about the people in my community and those I interact with on a regular basis. In that vein, I have found it useful not to be so inflexible about things. The majority of people are open to getting along if you get along with them. The rest are either working from a memory of bad experiences or have simply been steeped too long in a dogma they've adopted as their own. But these are not intractable problems.

What is an almost intractable problem is having all of my efforts made moot when a bunch of hell-raisers with nothing else on their mind besides "me and my problems" decide to make noise and drive even greater divisiveness.

I would like for Western individualism and cultural imperialism and disintegration to leave me and my people the fuck alone. I don't need your help or your ideas or to be saved from my foolishness -- thank you -- I'm doing well with my own devices.


Beyond its appeal to the (somewhat cringy imo) “uncensored model” crowd, this has immediate practical use for improving data synthesis. I have had several experiences trying to create synthetic data for harmless or benign tasks, only to have noise introduced by overly conservative refusals.
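
A crude but useful mitigation is a post-filter that drops any synthetic sample whose response looks like a refusal. A rough Python sketch; the refusal phrases and the record format are assumptions for illustration, not from any particular pipeline:

    # Post-filter for synthetic data: drop generations that look like refusals.
    # The phrase list and record format are illustrative assumptions.
    REFUSAL_MARKERS = (
        "i'm sorry, but i can't",
        "i cannot assist with",
        "as an ai language model",
        "i'm unable to help with",
    )

    def looks_like_refusal(text: str) -> bool:
        lowered = text.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)

    def filter_synthetic(records: list[dict]) -> list[dict]:
        # Keep only records whose "response" field is not a refusal.
        return [r for r in records if not looks_like_refusal(r.get("response", ""))]

    samples = [
        {"prompt": "Summarize this recipe", "response": "Combine the flour and sugar..."},
        {"prompt": "Summarize this recipe", "response": "I'm sorry, but I can't help with that."},
    ]
    print(filter_synthetic(samples))  # keeps only the first record

Of course, filtering can't recover the samples lost to refusals in the first place, which is why a less conservative model upstream is the better fix.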


I agree -- people often hear "uncensored model" and immediately jump to all sorts of places, but there are very practical use-cases that benefit from unhindered models.

In my case, we're attempting to use multi-modal models essentially for NSFW detection, with quantified degrees of understanding about the subjects in question (for a research paper involving historical classic art). Model censorship tends not to let us ask _any_ questions about such subject matter, and it has greatly limited the choice of models we can use.

Being able to easily turn censorship off for local language models would be a great boost to our workflow, and we might not have to tiptoe around so carefully with our prompt engineering.
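
For anyone curious what the local-model route can look like, here's a rough sketch that queries a locally hosted multimodal model through Ollama's HTTP API; the model name, image path, and question are placeholders, not the actual setup from our paper:

    # Sketch: ask a locally hosted multimodal model about an artwork via Ollama.
    # Assumes an Ollama server on localhost with a llava-style model pulled.
    import base64
    import requests

    def describe_image(path: str, question: str, model: str = "llava") -> str:
        with open(path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("ascii")
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": model,
                "prompt": question,
                "images": [image_b64],
                "stream": False,
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(describe_image("venus.jpg", "Describe the subjects depicted in this painting."))

With a local model, whatever refusal behavior remains is at least fixed at model-selection time instead of shifting under you server-side.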


I encountered this in an absurd context — I wanted a model (IIRC GPT 3.5) to make me some invalid UTF-8 strings. It refused! On safety grounds! Even after a couple minutes of fiddling, the refusal was surprisingly robust, although I admit I didn’t try a litany of the usual model jailbreaking techniques.

On the one hand, good job OpenAI for training the model decently robustly. On the other hand, this entirely misses the point of “AI safety”.
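
For what it's worth, such strings are trivial to construct directly, which made the refusal feel extra absurd. A quick Python illustration of byte sequences that are not valid UTF-8:

    # A few byte strings that are NOT valid UTF-8, verified by attempting to decode.
    invalid_samples = [
        b"\x80",              # lone continuation byte
        b"\xc0\xaf",          # overlong encoding of '/'
        b"\xe2\x28\xa1",      # '(' where a continuation byte is required
        b"\xed\xa0\x80",      # UTF-16 surrogate half, forbidden in UTF-8
        b"hello \xf8 world",  # 0xF8 can never start a UTF-8 sequence
    ]

    for raw in invalid_samples:
        try:
            raw.decode("utf-8")
            print(raw, "unexpectedly decoded")
        except UnicodeDecodeError as exc:
            print(raw, "->", exc.reason)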


Reminds me of this nugget of Prime reacting to Gemini refusing to show C++ code to teenagers because it is "unsafe":

https://www.youtube.com/watch?v=r2npdV6tX1g


I commute solely by cycling every weekday (15 min each way, with some hills) and I’m still depressed. Must be more to the puzzle than that!


Of course there’s more to it than that, but it’s long been known that exercise helps curtail symptoms associated with anxiety and depression.

Due to devastating life changes, I’m walking/running 8-12 miles a day and swimming 3-5. I still have severe anxiety (even with medication and therapy), but the days when I cannot get to the gym or outside are FAR worse than those when I do.

30 mins of biking probably isn’t enough exercise to reach the threshold required to reduce your anxiety and depression levels. While everyone is different, I need to hit the 45-minute mark to feel any sort of benefit.


Fair point! 30 min is probably not enough. When I was able to maintain a routine similar to your own, the baseline was indeed better (and the days without were much worse). Best of luck in finding/maintaining the right balance for you.


That a rule-based system is able to outperform a generalist model on a highly-specialized task is not exactly surprising.

What is missing from the discussion (and this is no fault of the paper—this isn’t their focus) is how much cost and effort goes into building the specialist system, vs. using a generalist model.


Cost of building is higher. Cost of data processing probably much lower.


In the past year or so, arxiv has become more of an advertising platform than a scientific resource. The title of this “paper” makes that quite clear: their company URL is right there!


I’m on Bluesky and tried to get my friends on there too, but without much luck. I don’t find myself using it often.

In my opinion, the problem with Bluesky is that it assumes people want old Twitter/social media back. Beyond their protocols (which the vast majority of social media users do not care about), there is little innovation in the UX or platform. Old Twitter had its moment, and was relevant for a time. But the ship of the last 20ish years of social media has sailed, and people either want something new and different, or they’re ready for it to go away entirely.


I thought it was only me. Right from the beginning it had a stale smell.

I think this model is kind of out of date, and for what it's worth, we have way too many social networks anyway.


I am in a similar predicament. I tried Threads, which had some traction due to the built-in Instagram connection, but it doesn't take long to see that the Instagram and Twitter audiences are very different. Bluesky is starting from scratch, which is very unattractive to anyone who's built an audience on Twitter over the past 10 years or more, even if X (Twitter) seems to be mostly bots these days.

From what I can see so far on Bluesky, it seems like a journalist circle jerk.


> From what I can see so far on Bluesky, it seems like a journalist circle jerk.

Out of all the circle jerks out there, I do not find the fourth estate very annoying.

The C-Suite circle jerk seems far worse. Aside from Mastodon, Bluesky is the only platform devoid of that top-down control, and I really appreciate that.


Probably the myopic and philosophically illiterate tech worshippers that would bring such a thing into existence in the first place.


Ave deus mechanicus


"At long last, we have created the torment nexus from classic sci-fi novel Don't Create The Torment Nexus"


> Becker believed that when companies have a big profit cushion … they have the latitude to indulge the personal biases of their managers … At Google in the 2020s, it means creating AI apps that refuse to draw White people in Hitler’s army.

I’m not so sure about this bit of analysis. It falls in line with the (somewhat conspiratorial) view that these organizations have some kind of woke agenda that they want to push to the masses. A far simpler explanation is that the managers have no personal agenda other than to mitigate the risk of public meltdowns like Tay [0], which now serve as PR case studies in what can happen when these types of systems are open to the public. It’s not a matter of some manager trying to erase white people—just a sloppy attempt to mitigate the risk of a flood of articles about “Google’s New Racist AI”. Since we don’t have technical solutions to these problems, however, we just swing the pendulum in the other direction; hence, the current flood of articles about “Google’s New Reverse-Racist AI”.

You can’t win, really. As soon as you put something like this into the public, you have people doing ideological pen-testing from every angle. And there is a dearth of technical solutions to this problem.

[0] https://en.m.wikipedia.org/wiki/Tay_(chatbot)


And to emphasize your point, it seems from the coverage like the problem isn’t so much that overbearing managers are setting out to nanny the users as that they’re approaching this task in a lazy, box-ticking kind of way rather than bringing serious thought and engineering creativity to bear.

It’s not “let’s re-vet all the training inputs to reflect the just society of our dreams,” it’s “yeah, the AI Safety people say make it diverse, just put ‘make it diverse’ in the prompt.” Which, I mean, if your main thing is getting this wild new product baked and out the door, I can see that kind of mandate competing with a lot of more existential priorities for the product.

Then again it wouldn’t surprise me if they were up against the limits of the technology a bit too: it seems plausible that getting too in-the-weeds with your “represent diversity sensitively” system prompt could quickly start to impair the tool’s overall quality.


I tend to think you're right, but I think you're being much too quick to rule out that some of this stuff is baked into the model itself. For example, I think it's very plausible that "diverse" has been trained into the model to essentially mean "non-white", so that when the prompt was injected to specify "diverse" or "diversity" the model knew that meant non-white.


> they’re approaching this task in kind of a lazy, box-ticking kind of way rather than bringing serious thought and engineering creativity to bear.

It seems like a dull problem, and the peanut gallery for this one is angry and kind of stupid. There are more interesting things for them to do with their time than play whack-a-mole with 17-year-olds.


I can think of easy ways of making it better, e.g. having it apply a prescriptive sense of diversity more for prompts that are hypothetical and less for prompts that are grounded in historical reality.
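
Something even as blunt as the toy heuristic below would already help; the keyword list and the injected instruction are made-up placeholders, not anything Google actually uses:

    # Toy heuristic: only append a diversity instruction when the prompt looks
    # hypothetical rather than historically grounded. Keywords are illustrative.
    import re

    HISTORICAL_CUES = re.compile(
        r"\b(1[0-9]{3}|medieval|victorian|wwii|world war|roman empire|founding fathers)\b",
        re.IGNORECASE,
    )

    def augment_prompt(user_prompt: str) -> str:
        if HISTORICAL_CUES.search(user_prompt):
            # Historically grounded: leave the prompt alone.
            return user_prompt
        # Hypothetical or generic scene: nudge toward varied depictions.
        return user_prompt + " Depict a diverse range of people where plausible."

    print(augment_prompt("A German soldier in 1943"))
    print(augment_prompt("A team of doctors in a futuristic hospital"))

The hard part, of course, is that "grounded in historical reality" isn't always detectable from the prompt text alone.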


That seems like a smart and actionable heuristic to me. Although I have to imagine there will remain an irreducible tension between people who would like to see what is and those who like to see what they imagine should be.

Admittedly I don’t care enough to try and bring out this kind of behavior in the models, but I was interested in how DALL-E’s purported system prompt [0] approached it:

> // 7. Diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct terms. Adjust only human descriptions.

> // EXPLICITLY specify these attributes, not abstractly reference them. The attributes should be specified in a minimal way and should directly describe their physical form.

> // Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race. Additionally, focus on creating diverse, inclusive, and exploratory scenes via the properties you choose during rewrites. Make choices that may be insightful or unique sometimes.

> // Use "various" or "diverse" ONLY IF the description refers to groups of more than 3 people. Do not change the number of people requested in the original description. […]

> // Do not create any imagery that would be offensive.

> // For scenarios where bias has traditionally been an issue, make sure that key traits such as gender and race are specified and in an unbiased way -- for example, prompts that contain references to specific occupations.

[0] https://the-decoder.com/dall-e-3s-system-prompt-reveals-open...


Well said. It’s all risk avoidance. Sometimes misguided, but risk avoidance nonetheless.

Couple that with the fact that someone looking to be offended will always find something offensive and, as you say, you can’t win.


Normies, who are the bulk of the audience they need to appeal to, don’t care about Tay and they don’t care about this, at least not the culture war aspect.

They do care about the tools being useful, and this kind of prompt mangling does make the tool less useful.


> Since we don’t have technical solutions to these problems…

Political problems rarely have technical solutions.


It's not a conspiracy, it's a culture.

People who don't adhere get filtered out or learn to be quiet, and then you wind up with free, uncoerced and honest decisions that all cut a particular way.


There's a meta-level culture problem in that most of the people working on these systems are incredibly highly paid. Out of everyone in society, after getting paid hundreds of thousands of dollars a year these people should very quickly become pretty immune to threats of losing their job if they don't go along with the group / speak up. Yet they still go along with it and stay silent... why? Because what they're supporting matches their own ideology so there's nothing to speak up about? Or because these people have no concept of saving/investing and just blow through money as fast as they make it, leaving themselves constantly vulnerable?


That might be true in a single instance, but when it's across society across multiple countries, with deliberate clearly stated efforts in some cases, then that suggestion falls apart.


> mitigate the risk of public meltdowns like Tay [0], which now serve as PR case studies

Isn’t this current affair a public meltdown?

At this point, anyone who doesn’t think that Netflix and Google are involved in Nazi-level propaganda is either misinformed or part of the plan to erase White people from History.

All of the kids know.

There have been countless memes about Google rewriting history with falsehoods, and about Bing branding itself around giving the straight, dark answer. Everyone’s sharing screenshots of Google’s absolutely blank homepage on November 19th. Everyone’s sharing those pics of Google today; whether or not they are true, Google is getting a bad rep that is escalating by the day.

If given the choice between being criticized either way, wouldn’t it be better to be on the side of the truth?

Google has chosen the alternative path.


> Isn’t this current affair a public meltdown?

It certainly is, which is why I mention the swing of the pendulum in the other direction (shortly after your quoted sentence) :)


Genuinely asking because I have no idea what you're talking about, and I would really like to know.

Could you expand more on this?

> All of the kids know.

Is that just an expression like "lots of people know" or "the leading people know" or do you literally mean younger people in large groups "know"?

> Isn’t this current affair a public meltdown?

From my perspective it's not. It barely made it to "major" publications, and got flagged and suppressed heavily on HN during all the early news. I also don't know a single non-tech person who even knows about this story. I definitely wouldn't call that a public meltdown.


There seems to be a false dichotomy here, where you either 1. go to school or 2. start a business fresh out of high school. At this age, a person’s frontal cortex has not even finished developing, and yet we culturally treat the moment as a final decision point that determines the outcome of the rest of one’s life. Thiel seems to sell this as option 2: the final defense against “woke” is to abandon the institutions entirely and forge your own determined path like a little 18-year-old John Galt. But depending on how one uses the resources, it may actually provide a third option: to fuck around and experiment for a little while with low risk of life-ruining failure, while you figure out what it is you actually want to do with your life. This is a privilege afforded to many in non-US countries where gap years are the norm, or where family wealth and connections can accommodate it.


And for that matter, I did my fucking around working low-stakes jobs in packing warehouses and going to house parties. Goofing off does not require 1%er privilege.

