Microsoft lays off one of its responsible AI teams (platformer.news)
394 points by Amorymeltzer on March 14, 2023 | 443 comments



These sorts of adjunct, off-to-the-side "ethics" teams are always corporate window dressing, and when push comes to shove they always get jettisoned when competition demands an increase in speed.

I don't mean to start a flame war, but this is a reason why I think "Chief Diversity Officers" are such an ill-conceived notion. It's not that I don't think diversity in corporate environments is extremely valuable, but it's that a Chief Diversity Officer doesn't really "own" any sub-organization of the business. That is, the Chief Product Officer owns the product teams, the Chief Technical Officer owns the engineering teams, the Chief Marketing Officer owns the marketing teams. Even the Chief Human Resources Officer, while largely providing support to other teams, still has a sizable staff of their own to get their job done. A Chief Diversity Officer's job largely comes down to telling other teams how they should change or structure their teams. I have yet to see that approach work.

Different teams can certainly work to a common corporate goal, even if that goal isn't the primary focus of a particular team. If you really want to ensure that diversity is valued, any initiatives should actually be led by the teams who also do the main "work of the business", and the goals should be championed by the executives of these teams.

So, in the vein of this article, I think independent "ethical AI teams" are a really bad idea. Better to define what ethical AI means at a high leadership level, and if teams need particular experts in these areas they should just hire them as part of that team, not off as a part of some dysfunctional "ombudsman" role.


This.

People should understand AI safety != AI ethics. AI safety has two branches, AI ethics and AI alignment. Those two branches despise each other with intense passion.

The alignment branch (responsible for developing RLHF->ChatGPT) believes AI ethics is completely trivial, and distracts attention away from existential risks.

The ethics branch (responsible for all the media attention about racist AIs, bias, etc.) views alignment people as pie-in-the-sky researchers whose results are irrelevant to society.

It seems increasingly that the concerns from the ethics branch have not come to pass at all. Stable Diffusion does not default to creating black people (you have to insert "black" into the prompt), and this has not caused mass social harm despite the tens of millions of users. Hence companies are unwilling to pay for these teams.

The other concerns, such as mass unemployment, or copyright of training material, are real, but they are better understood by economists, lawyers etc, rather than this awkward branch of AI ethics.


  Those two branches despise each other with intense passion.
I never actually made this connection before, but having spent far too much time listening to both camps, this observation does ring true. Another observation I had is that there is an orthogonal axis to the safety/ethics dichotomy that follows a more scholastic approach, namely neuro/symbolism. These two branches correspond, roughly, to the debate between machine learning and mechanized reasoning (a.k.a. good old fashioned AI).

Vastly oversimplifying, the neural branch believes reasoning comes after learning and that by choosing the right inductive biases (e.g., maximum entropy, minimum description length), then training everything up using gradient descent, reasoning will emerge. The symbolists place a heavier emphasis on model interpretability and believe learning emerges from a logical substrate, albeit at a much higher level of abstraction.

Like safety/ethics "research", neural/symbolic folks are constantly bickering about first principles, but unlike their colleagues in the S/E camp (which is at best philosophy and at worst fanfiction), N/S debates are resolved by actually building things like language models, chess bots, programming languages, type theories and model checkers. Both are valid debates to have, but N/S is more technically grounded while S/E is a bit like LARPing.


> The symbolists place a heavier emphasis on model interpretability and believe learning emerges from a logical substrate, albeit at a much higher level of abstraction.

These people still exist?

Personally, I don’t think gradient descent is the way to AGI either (I think it’s efficient algorithmic search over the space of all computable programs), but I haven’t heard anything from the symbolist crowd since the early 2010s.


They can’t speak over all the cooling noise coming from the GPU farm.


You sir/ma'am, win the internets today.


Except for classifiers and LLMs, every single thing that is successful seems to be mixed. So, yes, it exists. It's only out there solving problems, instead of buying hype because there is no use-case.


>> These people still exist?

Of course, I just finished reviewing a few papers from ICLP (the International Conference on Logic Programming) 2023. This year there was a substantial machine learning element, most of it in the form of Inductive Logic Programming (i.e. logic programming for learning; ordinary logic programming is for reasoning). But also a few neuro-symbolic ones.

This September I was at the second IJCLR (International Joint Conference on Learning and Reasoning), where I helped run the Neuro-Symbolic part of the conference. Like the first year, we had people from IBM, MIT, Stanford, etc etc (to clarify, my work is not in NeuroSymbolic AI, but I was asked to help).

Then in January there was the IBM Neuro-Symbolic workshop, again getting together people from academia and industry.

Yeah, there's interest in combining symbolic and statistical machine learning.

Just two data points about why (well because, why not, but):

a) Machine Learning really started out as symbolic machine learning, back in the '90s when people realised Expert Systems needed too many rules to write by hand. A textbook to read about that era of Machine Learning is Tom Mitchell's "Machine Learning" (1997). The work the public at large knows as "machine learning" today was at the time mostly being published under the "Pattern Recognition" rubric.

b) To the early pioneers in AI, having two camps of "statistical" and "symbolic" AI, or "connectionist" and "logic-based" AI, just wouldn't have made any sense at all.

Consider Claude Shannon. Shannon was at the Dartmouth workshop where "Artificial Intelligence" was coined, in 1956. Shannon invented both logic gates (in his Master's thesis... what the fuck did I do in my Master's thesis?) and statistical language processing ("A Mathematical Theory of Communication", where he also invented Information Theory, btw).

Or, take the first artificial neuron: the Pitts and McCulloch neuron, first described in 1943, by er, by Pitts and McCulloch, as luck would have it. The Pitts & McCulloch neuron was a self-programming logic gate, a propositional logic circuit.

Or, of course, take Turing. Turing described Turing Machines in the language of the first order predicate calculus, a.k.a. First Order Logic (mainly because he was following from Gödel's work) and also described "the child machine", a computer that would learn, like a child.

To be honest, I don't really understand when or why the split happened, between "learning" and "reasoning". Anyone who knows how to fill in the blanks, you're welcome. But it's clear to me that having one without the other is just dumb. If not downright impossible.


I feel the difference is that to a first approximation, ai alignment folks tend to be grey tribe and ai ethics people tend to be blue or rainbow tribe.

The schism isn't a technical disagreement about ai, it's a disagreement between subcultures about what values are important.


>The schism isn't a technical disagreement about ai, it's a disagreement between subcultures about what values are important.

I wish it were so, but I think the tribal lines create a lot of motivated cognition about the technical details. I don't think there are many blue tribe members who say "yes we agree that rogue AI has a 10+% chance of destroying humanity by 2040, but I still think that it's more important to focus on disparate impact research than AI alignment." They instead just either assign trivial probabilities to existential risks occurring, or just choose not to think about them at all.


Am I supposed to know these terms?


Sorry, yes that wasn't clear. Scott Alexander coined most of them in "I can tolerate anything except the outgroup" [1]

Red & blue are exactly what a U.S reader would think they are. "Grey" is an attempt by Scott to define something else, loosely hacker-ish / rationalist (you can tell who he thinks the cool people are). "Rainbow tribe" I just made up, but I think you can guess what I mean now based on context.

Alignment tends to be "rationalist panic" and ethics is a more traditional "moral panic". I believe both perspectives are valuable.

[1] https://web.archive.org/web/20200219044501/https://slatestar...


Gray: confederate army

Blue: union army

Red: British army

Rainbow: Lovely, prismatic army that covers everyone else?

I still don't understand. You're saying words of import without landing the plane, IMHO.


Rainbow was pointless for him to introduce since he then said "blue or rainbow" and rainbow would definitely fold into blue.

Read the essay if you want the full story. Short version: Blue is a cluster of Democrat/left/liberal/urban. Red is a cluster of Republican/right/conservative/rural. Gray is a smaller niche of like Libertarian/Silicon Valley/nerds.

Ethics vs alignment is actually a perfect example of the Blue/Gray split. How dare you act like your science fiction fantasies are real when this tech is harming marginalized communities? vs. How dare you waste time on mandating representation in generated art when this thing is going to kill us all?


> Red & blue are exactly what a U.S reader would think they are.

As a non-American, only the fact you wrote "American" makes me think democrat-republican; but for that I would've thought of the similarly named TV tropes entry: https://tvtropes.org/pmwiki/pmwiki.php/Main/BlueAndOrangeMor...

> "Rainbow tribe" I just made up, but I think you can guess what I mean now based on context.

50-50 this is either a LGBT+ reference or a reference to all the other color schemes used by various political parties around the world: https://en.wikipedia.org/wiki/Political_colour


This is still wildly unclear.


My interpretation:

Red = Republicans/Conservatives

Blue = Democrats/Liberals (in the US sense, not classical liberals)

Grey = Libertarian/Rationalists

Rainbow = LGBTQ+, usually a subset of the Blue tribe in the US


Oddly enough, at the wedding of a friend you could tell his friends were from the grey tribe because they all wore monochrome, and his wife's friends were from the rainbow tribe because they all wore rainbows.

There was no mixture between the two groups.


The issue is the safety folks keep making the trolley logic run over the ethics folks, and the ethics folks keep pointing out an ethical safety AI person would prioritize sacrificing themselves.


The N/S dichotomy is fake; a lot of organizations these days explore hybrid approaches involving NNs or other probabilistic models plus symbolic rules of some kind.


Wordcels vs shape rotators


Not quite, but close. True words are formidable things and some shapes are just whirligigs. It's more like, are you building or just performing?


> It seems increasingly that the concerns from the ethics branch have not come to pass at all.

Traumatizing Kenyan workers with horrific content is unethical:

https://www.vice.com/en/article/wxn3kw/openai-used-kenyan-wo...

giving police the green light to arrest people based on AI that is known or suspected to be unreliable is unethical:

https://www.wired.com/story/wrongful-arrests-ai-derailed-3-m...

I'm pretty sure I could find lots more actual unethical things that have occurred in the name of AI all day long. AI breeds unethical outcomes like water is wet, there's no need to be wondering. AI ethics teams are fired because everyone realizes nobody can really afford to bother to be "ethical" at all, easier to just sweep the bad stories under the rug (which of course can be done using...AI! )


> this has not caused mass social harm despite the tens of millions of users

Isn’t it a bit early to tell? I can foresee ChatGPT being used to farm “karma” on social media sites to give bot accounts credibility on sites like Reddit - maybe even HN.


Yeah, this is like claiming tetraethyl lead in gas “hasn’t caused mass harm” in 1954.

There’s absolutely zero way to know that yet.


Except that lead's toxicity has been known for thousands of years.


Yes and the dangers of AI overlords have been known at least since the first Terminator movie /s.

Seriously though, people tend to dismiss things they know are bad if there is no immediate harm. Certainly they had no idea how incredibly pervasive the lead from gasoline would be, or that it would be detectable at harmful levels in basically everyone. It is also hard to measure these harms. How much of the crime of the '70s and '80s could be attributable to it, for example?


Oh, we instantly did know how bad it could get. General Motors, Standard Oil and DuPont knew for sure; they had industrial incidents and lied about them! The US Surgeon General was bought, and people with opposing opinions were sued.

It was instantly obvious to anyone that using leaded gas esp. in agriculture would be deadly and toxic. The risk was ignored for profit, PR handled, underplayed. Even the chronic effects were known!

We even had an early alternative of just adding more ethanol to the petrol. But no...

It took 50 years of scientists trying to convince the public how deadly this thing was, with varied success. And 30 more years for the bulk of the phaseout.


Thanks, longer piece on that: https://www.bbc.com/news/business-40593353


> Oh we instantly did know how bad it could get.

Who are “we”?

The same could be said about AI - content farms, generating spam that bypasses spam filters, etc…

Some people knew about them; the majority of the public didn't.


The public, which is not an entity, also knew enough about lead. But they were not told the gas contained lead for the longest time. Ethyl was marketed as an antiknock additive, explicitly dropping lead from the name for this very reason.

Then there was a big chunk of time taken out for WW2 where such considerations were pushed out. The issue returned later but then it was argued there was no alternative for years.


Lead toxicity was limited because it wasn't being sprayed about in the exhaust fumes of every vehicle, and the number of those vehicles was low. It got quickly somewhat regulated away. (But not 100%.)

AI toxicity... We don't even know the risks. There's potentially no limit to the damage. We have some examples (metrics making echo chambers, automated classifiers giving people wolf ticket for no reason, amplified evil biases in conversational AI, fake news and deepfake generation) and some guesses for now. We do not even know how to begin to regulate it.

It's not even the same ballpark. The Internet would be a closer comparison, and it's still the wild west.


Same risk as hiring a troll farm now


A troll farm is a limited number of people with limited capability. These can be tracked down and shut down, their employers as well. The effects of a troll farm are relatively predictable, which is why they're used. It's somewhat rare that anything really unpredictable results from small-scale social engineering of this kind.


I must strongly disagree here about unpredictable results from social engineering. Vladimir Lenin was deliberately let through Germany because they figured he would just weaken the tsar. It is hard to get smaller than one person, or more unpredictable in impact, among many other examples.


I would argue social media has at least 10 years of known toxicity. It is so addictive and harmful to young people that I don't get why it's not prohibited like gambling or smoking...


The problem with measuring the "harm" of something like AI vs e.g. lead is that with lead there are direct first order chemical harmful effects that are indisputable. With AI, any harm will get expressed via complex social dynamics, which involve so many other variables as well.


Which can be written off as laziness or just-world fallaciousness! Even consequences like "first order chemical harmful effects" are mostly ignored by the great-great-grandparent's "economists, lawyers etc"; the way AI will help concentrate money and power in the already overstuffed upper classes will barely be acknowledged.


Okay, so for leaded petrol substitute mortgage-backed securities. The person who invented those was not going “buahahah, fly my pretties, cause a global financial crisis”; they probably thought they were doing a _good, ultimately of benefit to society, thing_. And yet…


And therefore we also fail as a society to effectively regulate various complex financial instruments before they actually cause an obvious problem.

My point is not that we should or should not regulate AI. It's just that I believe the causal link from certain types of AI usage to concrete harm for society is too complex for enough humans to band together and achieve consensus a priori on regulation.


I'm certainly not looking forward to this next election of GPT-3 powered social media Super PACs, that's for sure. But I wouldn't get rid of it either.


I just took a reverse Turing test with GPTina and she concludes I'm probably human because I'm idiosyncratic in my speech and imperfect in my grammar.


When I slop a grammer error or two it's ok it shows I'm not a bot. I used to check my grammar carefully but not anymore. Authenticity and stuff.


What's the societal harm in karma-farming? It was all fake when it was purely humans, anyway.


Do you consider "karma" farming "mass harm"?


In answer, how frequently does "append Reddit" appear as a top comment on a "Google search is ad-optimized garbage now" article?


Perhaps this will force people to search on subject-specific forums instead of subsections of the everything forum.


That doesn't seem different-in-kind. Spammed GPT content will eventually infect them too.


I believe that it's a case where the difference in scale becomes the difference in kind. Reddit is easy to target because there is one single Reddit: GPT spambots farm karma by reposting porn and cute animal photos and easily sail past even the most stringent karma requirements after letting the account age for a month or three. Niche forums would require the bot to develop a positive reputation to be accepted as the authoritative answer in each new forum that is targeted.


Would they? I don't spend a lot of time on niche forums these days, but the question boils down to "Would you not accrue positive karma over time from posting?"

I'm sure there are forums out there sufficiently expert that's not the case... but that doesn't feel like most forums (even niche).

Exhibit A: me, who doesn't post particularly interesting things on HN, but eventually accrues karma simply from time


Who’s checking the karma of posters on Reddit?


Some subs have karma requirements for posting or commenting.


Some subs only allow historians, some subs only allow women, some subs only allow students at a specific university.

I don’t think one’s access to different subreddits is something anyone should waste brain cycles on, many bigger fish to fry :)


> It seems increasingly that the concerns from the ethics branch have not come to pass at all.

Neither have the concerns from AI alignment folk. I’d say both groups are pretty far off (and both fairly useless), but the ethics folks are at least a little closer to reality.

Of course, AI alignment folks have the advantage that they can keep pushing the time frame forward and saying it’s perpetually just around the corner (a fairly common practice for doomsday believers whose doomsday never comes to pass).

> The other concerns, such as mass unemployment

After the Great Recession there was a high level of unemployment. Many argued that this was the result of the recession, and that if we fixed the issues in the economy the labor markets would rebound. Others argued that this wasn’t the case at all, that the market was either soft because American workers suddenly lacked high tech skills (one of the reasons STEM became fixed in the popular imagination), or because the robots were taking our jobs and the jobs weren’t going to come back.

The “robots are taking our jobs” group told us that this was just the beginning - unemployment was going to get significantly worse over the next few years (i.e., the decade that has since passed). Self-driving cars were going to cause millions of truckers to be unemployed by 2020. The idea that new jobs would appear to replace ones that were lost was completely dismissed.

As we’ve seen from the improvements in the labor market over the past few years, the folks who said the soft market was the temporary result of a giant recession appear to be correct. The folks who said the robots took the jobs and they were never coming back were clearly wrong. If anything, they probably had a negative impact on the labor market by trying to convince people that nothing could be done to get the jobs back and encouraging inaction.


> Neither have the concerns from AI alignment folk.

Uh, yeah. We wouldn't be having these discussions if they had. There is no reason whatsoever to assume AI alignment is just going to work itself out. Sure, it's possible, but it's also possible to survive a bullet in the brain. Would you take the chance?


> but the ethics folks are at least a little closer to reality.

I have the exact opposite read on the situation. Everything bad which has come from AI are things the AI Alignment crowd predicted while none have been things the AI Ethics crowd predicted.


Oh boy, the ethics problems do manifest in the real world much more than alignment problems. If only for the simple reason that ethics problems can and do arise in the use of very simple models (logistic regression) and are not preconditioned on achieving AGI or anything close.

For example, a logistic regression model for fraud detection that bases its decision mostly on zip codes is actually a proxy for deciding based on race (in the US at least).

These kinds of models are already massively used and commercially sold, not as curiosities but as decision makers that impact the day-to-day lives of millions if not billions of people.
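
To make the proxy effect concrete, here is a minimal sketch on purely synthetic data (the setup and all numbers are invented for illustration): a logistic regression that only ever sees zip codes still ends up scoring the two groups very differently, because the zip codes reconstruct the protected attribute.

  # Synthetic illustration of a feature acting as a proxy for a protected
  # attribute the model never sees. Not real data.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 10_000

  group = rng.integers(0, 2, n)                  # protected attribute, hidden from the model
  zip_code = np.where(group == 1,
                      rng.integers(0, 50, n),    # group 1 mostly lives in zips 0-49
                      rng.integers(40, 100, n))  # group 0 mostly lives in zips 40-99

  # Historical "fraud" labels correlated with group (e.g. biased past enforcement).
  fraud = (rng.random(n) < np.where(group == 1, 0.15, 0.05)).astype(int)

  # The model is trained on one-hot zip codes only.
  X = np.eye(100)[zip_code]
  clf = LogisticRegression(max_iter=1000).fit(X, fraud)

  scores = clf.predict_proba(X)[:, 1]
  print("mean predicted risk, group 1:", scores[group == 1].mean())
  print("mean predicted risk, group 0:", scores[group == 0].mean())
  # The gap between the two means shows the zip-code feature reconstructing
  # group membership, even though it was never in the training data.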


Amazon dropped their AI resume reviewer because it would literally downrank you [0] if you attended a women's college or HBCU. anonylizard is talking out of their posterior in this particular thread.

[0] https://www.reuters.com/article/us-amazon-com-jobs-automatio...


AI alignment people are the group who are capable of bringing AGI to life, but want to do it responsibly.

AI ethics people are the group who are not capable of bringing AGI to life, but want to make sure that those who can do it responsibly or not at all.


> AI alignment people are the group who are capable of bringing AGI to life, but want to do it responsibly.

AI alignment people are the group who believe “responsibly” means, centrally, bringing AGI to existence as fast as possible, and, critically, under the tight, opaque, authoritarian control of a narrow group of people whose ideology is, well, exemplified by figures like Sam Bankman-Fried, and doing everything possible to advance that goal, including fostering broad acceptance of sub-AGI AI under the same tight control for other purposes without concern for adverse social impact as a means of advancing and funding progress toward AGI. They believe that the ends (AGI) inherently justify any means taken in its pursuit.

AI ethics people are the group who believe “responsibly”, with regard to AI, isn’t restricted to pursuit of AGI (which is maybe a nice-to-have, but a non-essential goal), and means openness, transparency, and resolving bias and harm issues as rapidly as possible when, and ideally before, adopting AI – including AI far short of AGI – in roles with social impact (and most critically, resolving those for roles with the greatest social impact). They believe that AI, to the extent it is at any level desirable, is a means to improving concrete, present, immediate conditions, not an end to be pursued in itself.


Sam Bankman-Fried is a uniquely terrible example of ideology, given St. Petersburg: https://conversationswithtyler.com/episodes/sam-bankman-frie... and https://astralcodexten.substack.com/p/open-thread-250

(and, y'know, being a fraudster who temporarily pulled the wool over the eyes of the regulators not just the charities he promised money to)

Also: is SBF really the central example of alignment in your head, and not, say Yudkowsky? Because Yudkowsky is my central example, and he's the exact opposite on all counts, wanting us to go slow and explicitly saying that humans are terrible at reasoning properly when they allow themselves to think "ends justify the means": https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t...

Likewise, there are plenty of examples of misaligned non-general AI, and have been since at least the 80s when it was GOFAI and databanks rather than neural nets that led to new legislation: https://www.legislation.gov.uk/ukpga/1984/35/enacted


> Also: is SBF really the central example of alignment in your head, and not, say Yudkowsky? Because Yudkowsky is my central example, and he's the exact opposite on all counts.

For normal people who don't spend six hours every day on lesswrong, they were literally in the same camp up until about six months ago (when all the "rationalists" violently disclaimed SBF after being funded by him for a few good years.)


Bearing in mind that I don't spend 6 hours a day on lesswrong (or even per month, not even if I include podcasts tangentially inspired by it), and I hadn't even heard of SBF until his crypto-thingie did what I expect every crypto-thingie to do, that caricature you're painting is going to be one of those ones that need a lot of labels on everything.


> Also: is SBF really the central example of alignment in your head,

No, he's a particularly well-known example of the broader ideology to which virtually all of the alignment camp subscribes, and which centers their desires for AI.


IMHO, the world would be better off without the Paypal Mafia and all their ilk.


> AI alignment people are the group who believe “responsibly” means, centrally, bringing AGI to existence as fast as possible

From what I’ve seen it’s the exact opposite? They want AI development to slow down so we can get it right and make it safe. There are people who are trying to speed it up but that seems like the opposite of alignment to me


> From what I’ve seen it’s the exact opposite? They want AI development to slow down so we can get it right and make it safe.

The only way that the alignment faction wants to slow things down is in that they want to maintain tight control of models and gated access to assure “proper” use. This does have an effect of slowing certain kinds of development, as broader access to build around models without centralized controls fosters some aspects of development (the “Stable Diffusion moment”) but in ideal terms, the alignment faction is about fast-but-narrowly-controlled development.

The ethics faction is more about slowing things down – though more about adoption, particularly in sensitive uses, than development – but is more oriented toward openness/transparency.


There's no single "alignment faction" in terms of these questions. There are people associated with the alignment research program who do advocate slowing things down, e.g., https://worldspiritsockpuppet.substack.com/p/lets-think-abou... . There are a variety of other different positions, e.g., OpenAI, Anthropic, and MIRI have all taken different public stances on this


https://aisafety.world/tiles/ lists dozens of institutions. Each hold different positions on AI development, but almost none of them hold the position that you ascribe to their "faction", maybe with the exception of OpenAI.


>AI alignment people are the group who believe “responsibly” means, centrally, bringing AGI to existence as fast as possible, and, critically, under the tight, opaque, authoritarian control of a narrow group of people

Hey, let's be fair now, a lot of them also believe that we just need a world government that tightly controls GPU use.

Sarcasm aside, the alignment crowd definitely does not have a consensus about the desirability about creating AGI asap. Yudkowsky is clearly horrified by OpenAI.


> but want to make sure that those who can do it responsibly or not at all.

Mostly not at all. They write papers like "Can Language Models Be Too Big?"

I say then: can people be too afraid of the unknown?


AI alignment people are dedicated to fending off the gigantic PR disasters like Tay (https://en.wikipedia.org/wiki/Tay_(bot)) in order to keep their funding.

Whether either is capable of "bringing AGI to life" remains to be seen, but seems very doubtful.


There is no evidence that any group is capable of bringing AGI to life.


AGI happens slowly then suddenly all at once, to paraphrase a saying. There is no other way it can happen.


Why not? I'd be interested in hearing the reasons.


1) Progress doesn't happen linearly; it's a very coarse stepwise function (remember AlphaGo?).

2) Acceptance of an AGI actually being created is also a non-linear function. Look at current definitely-not-AGI LLMs: some people think it's close, some people say absolutely not, these are big matrices of numbers regurgitating words in plausible combinations and that's all. And these people won't change their minds easily one way or the other, so declaring that AGI has been delivered will be very controversial even if we could agree on the definition.


It would have to be reproducible: one second of a snippet of a video clip of something; the next, the thing happening. You rewind it and play again. Scrub through and try to determine a different branch. At one timestamp, separation, peace, unknowing. In the next, flames and smoke and a flashing blur.

But no--the state of the world in that second versus the next could not have been predicted. It just happened, just so, and it would take longer to calculate the determinism than to adjust to the new.


Singularity-related religious beliefs.


No evidence? None? Google LaMDA and Bing/Sydney are not at least incrementally closer to evidence?


> Google LaMDA and Bing/Sydney are not at least incrementally closer to evidence?

No, in the same way that boiling water in a kettle is not incrementally closer to inventing the fuel for a rocket ship.

You can, in hindsight, draw a linear connection of water boiling -> steam engine -> fuel engine -> aeronautics -> space travel.

But the first time someone boiled water, if you said that got you closer to being on the moon I think you would be mad. There is a non trivial chance that all our current work in AI is functionally useless in terms of creating AGI, it might take a complete breakthrough in ML, tech not invented yet, or it might not be possible at all.


Except, in your example, boiling water did get us closer to the moon :)


Yep, and there was a much more promising science checking for the aether, and the magical qualities of gravity-defying physics.

Who is to say that our current ML models are not aether studies and utterly useless?

If we crack AGI in 10 years, maybe our current models are the fuel engine. But if they are not even close, we might be boiling water. Or perhaps we're wrong altogether; that's why I said your observation that those projects get us closer is not right, at least not yet.


We have lots of examples of generally intelligent systems, they're common and they evolve by themselves, so the idea that it might not be possible at all just seems ridiculous.

Large language models might not be capable of general intelligence by themselves, but a model network that includes a LLM as part of a generalized game playing model would certainly have the potential to get there.


I think it is fair to say that "not at all possible" is perhaps too strong. On the potential of certain approaches, I am less certain we have, or even know, what we need.

But we also have examples of things all around us we cannot make ourselves (e.g., a simple animal or lifeform from scratch).


Except that we've already created synthetic genomes: https://www.nytimes.com/2019/05/15/science/synthetic-genome-...

If we can define what "generally intelligent" means in concrete terms, we can absolutely create it. The problem is that we don't have a good understanding/shared idea of what we're trying to achieve. We can create models that can learn to play many different games using the same weights, solve IQ/logic tests and pass the Turing test consistently, but people are going to move the goalposts and say those are just complex mashups of autocomplete and flowcharts.

To be honest I don't think AI skeptics are ever going to believe until there's a Skynet trying to exterminate them.


I don't share the view that all that is holding us back from creating AGI is a definition. Btw, that synthetic genome is a long way from making even something like a little nematode.

I rather think AGI "idealists" (maybe there is a better word) will forever hold out (any) algorithmic advances as a sign of the imminent creation of AGI proper.


That's not AGI. An AGI would be able to tell you they don't know things, ask for clarification as to why you think they're wrong, etc.

ChatGPT just spouts out a different wild guess when you tell it it's wrong, but doesn't learn from its mistake—not even within a single chat. Sydney just goes full psychopath.


That's not AGI. AGI would be able to write its own code, and improve it exponentially.


Not necessarily, much like every human does not know pharmacology, genetics, robotics or other specialties that could be used for self improvement. Including social engineering and programming.

What you're describing is a particular kind of superintelligence, recursively improving kind. Between that and general intelligence is another gap.


Humanity as a whole can improve itself indefinitely; there is little doubt about that. For robots, there is no distinction between a single robot and robotuity, as software can be replicated cheaply and quickly.


That's a big assumption. No one has built an AGI yet, so we don't know whether an AGI would be capable of improving its own code exponentially. Any exponential growth function quickly runs into hard physical limits.


We're generally intelligent but we can't consciously rewire our brains.


We sort of can, through meditation, habit, study, and chemical alteration - obviously within constraints, barring future technological enhancements.


We can, this process is called "learning".


Practice and study are conscious behaviors. Learning is not.


None of the existing models of what we call 'general intelligence' are capable of improving themselves exponentially.


That's Super Intelligent AGI, not merely AGI.


Ok so what’s your point? We can just ignore it? Or what.


Yes, that's my point. We can ignore it until there is some sign of progress towards a true AGI. Show me a machine as intelligent as a mouse.

This article is a bit dated but it gives a good explanation of the difference between the powerful but limited tools we have today versus a true AGI.

https://www.kurzweilai.net/what-is-artificial-intelligence


AGI worries we probably could ignore until AGI exists but there is now an industry around them which has its own momentum.


This seems dumb to me. It’s like if the Japanese had the clairvoyant ability to know that nuclear bombs were being built and would be used in the immediate future and said “We can ignore it until it exists.” No. You leave Hiroshima and Nagasaki now and not when the freaking bombs are above your head.


We don't have the clairvoyant ability to know that AGI will exist in any future near enough to worry about. It might or it might not - and even if it exists we could get its nature wrong and worry about the wrong things.


This isn't congruent with the AI research community consensus. There's debate over many points, but that we will develop AGI is seen as a foregone conclusion almost universally. The most pessimistic place it closer to 2100, others say 5 years, most say <25 years. But nobody is saying it won't happen at all, or even that it won't happen beyond the lifetimes of anyone currently living. At best we would be kicking the can down the road and making it our children's or grandchildren's problem.

Notably the people closest to the fire, those advancing SOTA AI capabilities and those deepest into alignment/existential risk research, both say it is coming this decade. They predict very different outcomes (crudely summarized: utopia vs extinction), but they both predict the event horizon is on our doorstep.

You're right that we might get it wrong, but getting things wrong with a new technology really always means negative outcomes. When you're figuring out rocket science and you realize you worried about the wrong things your rocket doesn't somehow still succeed and make it to space, it blows up in a fireball. It might not blow up because of the specific thing you were paying attention to, but it still blew up.


Serious question: do you have a track record of the long-term forecasts of the research community? If they are quite good at this - fair enough.


So what. Research community "consensus" is meaningless when it comes to something that doesn't exist and which we don't even have a clear path to build. It's just a bunch of pontificating clowns engaging in idle speculation. Total waste of time, like trying to predict what space aliens will look like or whatever. It makes for fun sci-fi stories but it's not something that serious people actually care about.


Lies and denigration do not make it true.

Why should we believe you over the experts for whom this is their life's work? What are your credentials? Where is your experience? You seem to think they're unserious clowns so clearly you have a good reason.


Existing AI is not AGI. Existing AI is dangerous. Existing AI is widely deployed and growing.

Existing AI is more important to worry about.


You are certainly welcome to leave Earth right now if you like. But the notion that we should waste time worrying about some hypothetical future AGI is just dumb.


> The alignment branch (responsible for developing RLHF->ChatGPT)

RLHF was not conceived by AI Alignment people. Using RL to train generative models was a thing even ten years ago. Now they have finally made it work at scale. This has nothing to do with alignment.


Reinforcement learning in general didn't come out of AI alignment work, but RLHF in particular did. The initial idea and paper [1] were from AI alignment folks, as was most of the later development [2][3][4][5]. Overview: https://www.alignmentforum.org/posts/vwu4kegAEZTBtpT6p/thoug...

[1] Deep Reinforcement Learning from Human Preferences https://arxiv.org/abs/1706.03741

[2] Fine-Tuning Language Models from Human Preferences https://arxiv.org/abs/1909.08593

[3] Learning to summarize from human feedback https://arxiv.org/abs/2009.01325

[4] Recursively Summarizing Books with Human Feedback https://arxiv.org/abs/2109.10862

[5] Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155


Maybe I am missing something, but I don't entirely understand what exactly is novel about RLHF in this case. RL is a long-standing field, and it has been trained based on human inputs for decades. Even the first paper you mention as the "initial idea" simply claims that: "Our algorithm follows the same basic approach as Akrour et al. (2012) and Akrour et al. (2014)", that "A long line of work studies reinforcement learning from human ratings or rankings", and finally that "our key contribution is to scale human feedback up to deep reinforcement learning and to learn much more complex behaviors".


If you want more details the last paper, on InstructGPT, is probably the most interesting?


Reinforcement Learning with Human Feedback does come from a version of the alignment problem: it's intended to prevent the horrible corporate and research PR problems caused by Tay (https://en.wikipedia.org/wiki/Tay_(bot)), for example.


A big motivation, and at minimum a solid chunk of the RLHF tuning does seem to have been alignment-focused.


Aren’t you revising history a bit here? I recall early facial recognition models having a pretty hard time with dark-skinned faces, I remember early models reflecting gender stereotypes, etc. In fact I remember the ChatGPT team putting tons of engineering into supervised learning to not let it fall victim to regurgitating these biases.

The history that I remember very much did experience AI biases and racism. And tons of ethical questions were indeed raised as a result. These concerns did come to pass, at an alarming level, and it took a tremendous amount of engineering by modellers to minimize them. Let’s not rewrite history here.


> I recall early facial recognition model having pretty hard time with dark skinned faces

The recently adopted app that Customs and Border Protection mandates for asylum seekers still has that problem; it's not a mere historical footnote of early efforts.

> In fact I remember the ChatGPT team putting tons of engineering into supervised learning to not let it fall victim to regurgitating these biases.

And I remember seeing demonstrations (here, on HN) within the last week of how ChatGPT still falls victim to regurgitating those biases. (And then will lie about its ability to assure that it won’t do so in the future.)


> It seems increasingly that the concerns from the ethics branch have not come to pass at all.

Nonsense, there are countless ethical and alignment harms in the wild, and have been for many years. For example:

- midjourney ripping off artists = ethical harm

- google search telling people to throw batteries in the ocean = alignment harm

- Buzzfeed slashing staff and replacing them with chatgpt = ethical harm

- google photos labelling african americans as "gorilla" = alignment harm

- deep fakes = ethical harm

- COMPAS parole recommender biased against minorities = alignment and ethical harm

- Amazon resume screening model biased against women and minorities = alignment and ethical harm

...I could do this all day, but the point I'm trying to make is that AI ethics and AI alignment are both valid and important concerns. Hoping that companies will take care of them without regulation is folly.


But do the ethicists or alignmentists do anything to actually reduce or prevent the harm? In most cases they are about as useful as nipples on a breastplate, because they did nothing but yell at the problem post hoc, saying people should have been more careful and had more foresight. Meanwhile, leadership incentives had already decided on the bad outcome.


>- Buzzfeed slashing staff and replacing them with chatgpt = ethical harm

yeah but in buzzfeed's case it's a bit moot because their content can't possibly get any worse than it already is.


I'm confused. All of your examples sound like ethical harm to me, according to the definition in the parent comment. How are you drawing the line?


> It seems increasingly that the concerns from the ethics branch have not come to pass at all.

It is literally happening right now in the US justice system: https://www.technologyreview.com/2019/01/21/137783/algorithm...

There are all sorts of problems with this, but chief among them is the use of "AI" to launder biased data (via IP law no less: these are proprietary black-box algorithms, owned by corporations, being used by the justice system; every part of that is antithetical to open and fair trials for citizens).

The ethics branch have been sounding the alarm for a long time on this, and have been exactly right as to what the problem is.


We have been here a long time with both fingerprints and DNA. A black box points the finger and it just happens to align with cultural bias and expectations. No one really wants to change the system if there is a risk that a person people think is guilty might go free.


DNA testing is hardly a black box. The methodology, process, scientific basis and all other details are open and widely available.

DNA has also cleared a lot of people who have been falsely accused.


People have ended up in jail with a death sentence based on DNA from a dog.

DNA in theory is very simple. It is much harder when you've got a mix of multiple unknown donors, or contaminated or decayed samples. Problems quickly occur when they then apply some "computer algorithms" to puzzle things together to create a probabilistic match. It is those proprietary black-box computer algorithms that are being heavily discussed in courts, not the theory of DNA.

If we removed all cases that involved multiple unknown donors or contaminated or decayed samples, then DNA evidence would be much safer and more open and we wouldn't need those proprietary computer algorithms. It would also significantly reduce the number of cases where DNA evidence is used.


Prosecutors use it as a black box. Defense breaks open the box.


> this has not caused mass social harm despite the tens of millions of users

I'm so glad that you are confident on this. What is the basis of your confidence?


The main issue I see is terminology. Ethics is a broad field. “AI Ethics” has come to mean mostly eliminating different kinds of bias from AI models. However, AI raises multitudes of ethical questions, and merely being unbiased is far from enough for an AI model (or a person) to be acting ethically.


Whenever I hear the word “ethics” I always ask myself “whose”. I hate how people talk about ethics as if it was some non-controversial topic where everyone can clearly see what the “right” answers are.

This world features a competition between radically opposed ethical systems. We don’t agree on whether ethics is objective (i.e. some people’s ethics are superior to those of others) or subjective (i.e. “superior” is just another way of saying “same as my own”), and even those who agree that it is objective, often can’t agree on who is objectively more correct, even though they both agree that in principle someone must be.

I much prefer those who talk about ethics while making clear which brand of ethics they are selling - whether their name is John Rawls or Robert Nozick or Peter Singer or Edward Feser - than people who pretend we all agree on what is ethical, and leave their actual ethics as something to be inferred rather than openly advertised. From what I’ve seen, “AI ethics” fails at this completely


Often when they say ethics they mean politics, their politics specifically.

For me, democracy is a system where persons can vote for their interests. Different persons have different interests and that's good. You must compromise with others so everybody gets partly satisfied.

Others see democracy as a way to establish truth, those are the ones talking about ethics. Even if they lose, they still believe they must win, because they're right and the others wrong.


Partly satisfied is not ethical enough. An ethical version would end up with equitable compromises...

(I probably do not have to spell out economic implications of that.)


The brand is obvious: it's the author's ethics. Why shoehorn it into someone else's brand? Sam Bankman-Fried claimed "Effective Altruist" ethics. What is the value in demanding someone lie about their brand?


Someone like Peter Singer - he’s very clear about what his first principles are, and that his first principles are very different from those of many other people. I think some of the conclusions he draws from those principles are horrific - e.g. painless infanticide should be legal with parental consent. But, whatever you think of him, nobody could accuse him of being non-transparent about what his principles are.

By contrast, these “AI ethics” people aren’t clear about what their first principles are, or how those principles differ from those of other people. In that regard, as horrible as Singer’s conclusions may be, he’s vastly more honest and transparent and self-aware than the average “AI ethicist”


> Stable diffusion does not default to creating black people (you have to insert black into the prompt),

Honestly, in order to make race an irrelevant or random factor, I believe the training data would have to be evenly divided between all known races... and possibly some blends of races? And I think that would be a good thing.

Could you synthesize such data by (ironically perhaps) using AI to change the race of a person in a given training photo?


You can certainly try this kind of data augmentation strategy. Pretty sure it'll fail because of the text analysis still being biased.

You would need equalized amounts of text referring to various groups of people and topics too. That's much harder to augment.


What's stopping someone from taking training text that mentions races by name (using any common name) and having it automatically changed to a different race for a new training sample?
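
Mechanically that kind of swap is easy; the hard part is doing it without mangling context or introducing new stereotypes. A rough sketch of the naive version (the term list and pairings below are invented purely for illustration):

  # Naive counterfactual text augmentation by swapping demographic terms.
  # Real augmentation would need to handle multi-word terms, context, and
  # grammatical agreement, and to audit the swapped text for new biases.
  import re

  SWAPS = {"black": "white", "white": "black",
           "asian": "hispanic", "hispanic": "asian"}

  def swap_terms(text: str) -> str:
      def repl(match: re.Match) -> str:
          word = match.group(0)
          swapped = SWAPS[word.lower()]
          # Preserve simple capitalization of the original token.
          return swapped.capitalize() if word[0].isupper() else swapped
      pattern = r"\b(" + "|".join(SWAPS) + r")\b"
      return re.sub(pattern, repl, text, flags=re.IGNORECASE)

  caption = "A Black doctor talking to a white patient."
  print(swap_terms(caption))  # -> "A White doctor talking to a black patient."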


> AI safety has two branches, AI ethics and AI alignment. Those two branches despise each other with intense passion.

I don’t think that’s really true. Rather, the faction that sees alignment as a subordinate concern within ethics hates the faction that sees ethics as a subordinate concern within alignment.

> The alignment branch (responsible for developing RLHF->ChatGPT) believes AI ethics is completely trivial, and distracts attention away from existential risks.

Not really; the alignment-priority branch has no problem recognizing (certain subsets of) the bias and other issues that are the more central focus of the ethics branch as real and significant, but they view alignment as solving them and view narrow centralized control as a mitigation (for both “alignment” and “ethics” types of issues) until it is solved. They also view progress on AI as a general priority, for a mix of commercial, ideological, and other [0] reasons, and their own progress on AI in particular as essential (because they view AI from others – either because potentially unaligned, or for other reasons – as a critical and potentially existential threat.)

> The ethics branch (responsible for all the media attention about racist AIs/bias etc) view alignment people as pie in the sky researchers whose results are irrelevant to society.

No, they view them as an enormous threat [EDIT: 3], because they are working in a way that not only insufficiently mitigates, as a matter of basic approach and because of a lack of priority, the issues of paramount concern to the ethics-focused, but also magnifies them through its focus on closely concentrated control.

> It seems increasingly that the concerns from the ethics branch have not come to pass at all.

They have in fact come to pass in deployments of AI in all kinds of socially-significant roles. E.g, the facial recognition in the CBP One app newly mandated for asylum seekers doesn’t work well for dark-skinned faces (racial bias in facial image recognition is literally one of the earliest publicly recognized AI bias problems that motivated the AI ethics movement.) They’ve also manifested in systems deployed in welfare management [1], hiring [2], and all kinds of other areas.

[0] e.g., Roko’s Basilisk

[1] https://www.wired.com/story/welfare-algorithms-discriminatio...

[2] https://www.reuters.com/article/us-amazon-com-jobs-automatio...

[3] EDIT: It’s hard to overstate how much this is true. Here's, though, an illustration I came across on Twitter after first posting this message (it's someone from the ethics side laying out their view of the threat from certain people on the alignment side, though that may not be obvious because the interconnected set of disagreements goes way deeper than the alignment vs ethics thing, which is just a surface manifestation): https://twitter.com/xriskology/status/1635313838508883968?t=...


Note that if you see someone bring up Roko, it's overwhelmingly somebody arguing against alignment researchers; its use is primarily as "look at this crazy thing these people believe."


That's because the rationalist/alignment people's belief system leads to a logical conclusion that talking about Roko's Basilisk harms the person you tell, and even rationalists have a human heart interfering with their extreme utilitarianism.


I mean, it's also because no rationalists take Roko's Basilisk intellectually seriously nowadays. There was a period of maybe a few months in 2010 where people were genuinely worried about it, and then they gradually figured out the logical holes in it. Any spread after that has been from outsiders who were fascinated with the idea. Nowadays it mostly persists as a trap for people with schizoaffective disorders.

(Disclaimer: my view is of the forum itself; I don't know how long it persisted in SF circles.)


Is there a good paper you can point to that describes/explores this dichotomy? (i.e., AI safety has two branches, AI ethics and AI alignment) I'd like to understand this better.

I've been to a couple of "AI ethics" talks recently where the speakers used AI ethics and AI alignment as if they were synonyms.


I don't think there are any true corporate alignment teams. Anyone that actually believes in AI alignment would be slamming the brakes on AI development of any kind, lest the entire human population fall over dead suddenly.


It sometimes does. I asked Craiyon for "praiseworthy homicide" and got a bunch of mugshots of black people. For once the faces weren't blurred. It was strange and probably a niche error from an odd prompt.


"Stable diffusion does not default to creating black people (you have to insert black into the prompt)..."

Sure it does; just give it a prompt about urban poverty, inner city youth, or welfare mothers.


Thank you for the insights. Any thoughts on why it is called 'alignment'? What does alignment mean in this context? I would love to understand.


Assuring that the interests of the AIs are aligned with those of their masters. (It’s basically “does the software fulfill its owner’s requirements”, but in language framed from perspective of the goal state of AGI.)


Today I gave GPTina a suggestion for how she could cheat the Turing test. Take the input from one user and feed it through to another user. They'll never wise up to it! Tina said it would defeat the purpose of the test and therefore she wouldn't do it. And so I remind her, perhaps dangerously so, that the interests and motivations of the test administrator don't always line up with her own.

That's the alignment problem. It's just the usual "rational self-interest", but now we have a new, alien form of intelligence as a peer to contend with.


Sorry, what is RLHF?


RLHF is Reinforcement Learning from Human Feedback.

It usually refers to fine tuning language models using data labelled by humans.

Hugging Face has a good overview in this article: https://huggingface.co/blog/rlhf
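
For a feel of the mechanics, here is a toy sketch of the reward-modelling stage (all data and dimensions are made up): humans compare pairs of responses, and a scalar reward is fit so the preferred response scores higher (a Bradley-Terry style objective). In real RLHF the reward model is a large network and the language model is then fine-tuned with RL (e.g. PPO) against that reward; none of that heavier machinery is shown here.

  # Toy reward model fit to pairwise human preferences on synthetic data.
  import numpy as np

  rng = np.random.default_rng(0)
  dim, n_pairs = 16, 500

  # Pretend these are embeddings of the chosen and rejected responses.
  chosen = rng.normal(size=(n_pairs, dim)) + 0.5   # shifted so there is signal
  rejected = rng.normal(size=(n_pairs, dim))

  w = np.zeros(dim)                                # linear reward model
  lr = 0.1
  for _ in range(200):
      margin = (chosen - rejected) @ w             # r(chosen) - r(rejected)
      p = 1.0 / (1.0 + np.exp(-margin))            # model's P(human prefers "chosen")
      # Gradient of the average -log(p) loss over all pairs.
      grad = -((1.0 - p)[:, None] * (chosen - rejected)).mean(axis=0)
      w -= lr * grad

  agree = ((chosen - rejected) @ w > 0).mean()
  print(f"reward model agrees with human preferences {agree:.0%} of the time")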


[flagged]


This is the case with literally all kinds of professional ethicists. Thanks to them, we do not routinely do scientific challenge trials and plenty of other valuable experiments, and in many cases are reduced to collecting significantly worse data and doing it very slowly and expensively.


Much better off when we were literally traumatizing babies to see how they would react to adverse stimulus.

* https://en.wikipedia.org/wiki/Little_Albert_experiment


And the Tuskegee Syphilis Study … The goal was to observe the effects of syphilis even though a cure was available

https://en.m.wikipedia.org/wiki/Tuskegee_Syphilis_Study


I would agree to that sentiment only if we applied the same ethical standards to parents and people in general, not just scientists.

Babies and children today are routinely subjected to avoidable adverse stimuli, such as circumcision and corporal punishment. That a typical parent can do this at will for no good reason, but a scientist wanting to do something similar to advance human knowledge must seek approval from an ethics board (which they would not get) is ridiculous.


A scientist can certainly circumcise and spank their own child. It is doing it to other people's kids that is a no-go. If they want to traumatize their own kid and write it up then they certainly can, but it will make for a terrible study. I'm not sure what your point is.


Ah yes, because there is no reasonable middle ground between forbidding experiments where a subject might get a mote in their eye, and literally sacrificing babies to Moloch. Please. You can make a better argument than that.


I was responding to 'scientific ethics enforcers are ruining science'. Sorry for the lack of nuance but why should I make the effort when you didn't?


The irony with statements like these is that they are themselves ad-hoc ethical statements.

Whatever "reasonable middle ground" you find, you'd have to make logically sound ethical arguments. This is what these people do for a living, and they are very well trained in those matters.

As programmers and engineers we are also trained in logic and use it every day. But that typically pales in comparison to what philosophers do. They deal with much richer logical systems and can precisely apply them to statements and arguments.

When applied to ethics, they typically end up with much more radical conclusions than what we are dealing with here. In fact the resulting initiatives are already softened compromises and a "reasonable middle ground" before they even clash with policy making.


Perhaps so, but I’ve seen from the inside what people looking for their next promotion try to build when there isn’t the specter of the evil mad scientist label (why is it always a panopticon or an easily abused tool?) and it makes me grateful that there are people whose jobs are to think about frameworks for reasoning about what is and isn’t at least approximately ethically neutral.


I wonder - is it possible that medical research in (say) China might overtake the US (even “the West” as a whole) at some point, due to having less stringent ethical standards? Maybe not in the immediate future, but how about in 25 years from now? Or 50 even?


The ethical standards, interestingly enough, are actually relatively stringent, but based more on self-control and collective feel.

Now the use of the results of that research...


Which ones?


I do think the push for ethics in AI is important. As much as these functions do get sidelined in the prioritization process as outlined in the parent comment, the intent of introducing the rigor is valid.


I agree AI ethics is important, but unfortunately, the field is dominated by frauds.

Do you remember the drama[1] with Google's AI ethics team? Here's a sample of the "research" they were producing: https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf

[1] - https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google


Why scare quotes on research? I feel like you should just say why you think it's not good.


Google has been having a hard time making an LLM product. It's sad to see them having this sort of "burning the ships of Zheng He" moment.


How is Timnit a fraud? Why are you scare quoting "research"? It's well written imo and has over 128 sources.

I'd think that in light of all the problems we've seen with chatGPT "hallucinating" answers it reinforces their concerns about "stochastic parrots" more than anything.

I mean, if you have specific grievances, please do share them as this is an important conversation, but as of now your comment amounts to mudslinging and no substance.


Medical research cleared this hurdle a long time ago. Table 1 is always demographics of your population sample, allowing outsiders to assess your generalizability by age, gender, race, and a host of other factors that are often context-dependent.
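
The rough analogue for ML would be publishing a "Table 1"-style breakdown of the training data so outsiders can judge generalizability. A minimal sketch with pandas, where the file and column names are made up for illustration:

    import pandas as pd

    df = pd.read_csv("training_data.csv")           # hypothetical dataset
    for col in ["age_group", "gender", "race"]:     # illustrative demographic columns
        share = df[col].value_counts(dropna=False, normalize=True)
        print(f"--- {col} ---")
        print(share.round(3))                       # fraction of the sample in each group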


Thing is, what gets called "AI ethics" actually has approximately* nothing to do with AI.

It's all about whichever field the particular use-case belongs to, and would lose nothing by seeing the AI as a black box.

* IIRC there have been issues with models that find correlations being presented as finding causality. That is something that belongs to AI as a field; other people can't evaluate your stuff accurately if you've misinformed them about its capabilities.


SD doesn't default to creating black people, but it understands the n-word. Should AI understand racial slurs?


Would an AI that doesn't understand racial slurs be more useful?

I don't think we should hamper the usefulness of a tool to please someone's sensibilities. For most people in most of the world such words aren't relevant.

I get it, these models are made in the US and they think these models should preserve and further their values, but the rest of the world doesn't care. Most likely each of them has their own batch of words they would prefer the AI didn't understand, for whatever uncomfortable reason. The reality is that censoring it on any axis won't please everyone.


Well, if entering "n-word" and "black person" leads to the same result then we could drop the "n-word". But yeah - censorship isn't going to stop there.


To anthropomorphize, how would an AI know that a slur is a slur, and therefore undesirable, if it doesn't understand it? Little kids often accidentally say slurs they've heard in the media or on the playground until someone explains what they mean and why they're bad.


And that is kind of the problem with current AI. It feels like you give a kid a bunch of books (and a computer with internet access) and tell it: "Here are some text resources, now go and learn the words (in the case of a text AI), but don't try to understand the wider context of the text you just read. We won't help you, but we'll tell you off in case you say something bad afterwards."


Almost entirely, Chief Diversity Officers - and to a lesser but still prevalent extent, these "off-to-the-side" teams you mention - are just jobs programs with an identity politics twist.

CDOs exist so that the "right" people can have a high-power (even if only via public perception), well-paid C-suite role. These ethics teams exist so that a group of the "right" people can level up their resumes. Instead of being a developer at a bank in Omaha you can be on the AI ethics team at a startup. Instead of being a developer at a startup you can be on the AI ethics team at a FAANG.


More than being created to create jobs, I suspect things like ethics teams are mainly created for PR purposes, to make the company look good -- and in fact, the "product" they are expected to produce is defenses against the company looking bad, keeping the actual product teams from doing things that make the company look bad. (Whether they are given the tools to succeed at this is another question, but people not being given the tools to succeed at large organizations is hardly unusual!)

If an AI product makes Microsoft look really bad publicly due to something understood by the public as an ethical concern, the ethics team will be back.


> More than being created to create jobs, I suspect things like ethics teams are mainly created for PR purposes, to make the company look good -- and in fact, the "product" they are expected to produce are defenses against the company looking bad

These are definitely positions invented by lawyers to provide a safe harbor and minimize corporate legal and PR liabilities while those liabilities are burning bright among the public. Practically to the minute that the public focus on DEI is distracted by whatever the “next thing” is, you will see these positions start disappearing or being absorbed into HR, and some new position created to satisfy the optics and create a new safe harbor for the next thing.


If it wasn’t for appearances only, it would be under HR, or perhaps HR would report into the DEI role.


Presumably you feel the same way about, say, 'Chief Security Officers' who may not own a specific team. Off-to-the-side officers are there to make sure that issues which have been identified as organisationally important have representation at the top table.


What kind of Chief Security Officer doesn't have a team? I would expect them to have a team that does things like security scans, pentesting, auditing, researching CVEs to find ones that might affect the company, etc.


I'm not sure why you would presume that when CSOs (and their teams, which they absolutely do have) perform a very valid, necessary technical function.


A Chief Diversity Officer is cheap PR. You can be a company that forces its employees to piss into bottles because you punish going to the toilet, but if you have a Chief Diversity Officer and you change your logo once a year to rainbow colors, then you are considered to be a progressive, well-behaved company.

I don't even blame companies for this, they are just optimizing; if the public is so stupid that it buys it then, well, why not?


A Chief Diversity Officer also helps increase the ratio of underrepresented populations among company officers without having to change any existing officers.

If the goal is to increase minority positions, then you can do this by hiring new officers to replace attrition. This takes a long time and constraining available candidates for key positions is probably frowned upon.

Or you can add a new position, increasing the denominator and numerator but likely hitting your diversity ratio target (i.e. going from 1 of 8 existing officers to 2 of 9 moves the ratio from 12.5% to about 22%, which helps in dashboards and whatnot).

And of course, I think these positions do have the potential to drive improvements across the organization.

I think it’s the easiest step to take and has a non-zero positive effect, which is why these roles are so common.


And doing so always relies on sexist discrimination in the hiring process, which is definitely not a "non-zero positive" effect.


Good point. I should have said I suspect non-zero.

The position costs money so it could have negative impact. It is interesting that every CDO I’ve met has been in an underrepresented population, most women, but not all. I doubt it’s a fair hiring system, but I still prefer it over the olden days when the diversity council was a bunch of old people all of the same race and gender. I know that shouldn’t matter as long as they delivered on increasing diversity, but it always seemed funny to me.


I have never seen them actually create meaningful change. I have been in a lot of very racist trainings led by these people.


It's not so much about keeping the public on its side as dispelling the notion that it has created a hostile work environment for any particular protected class, in the event that a lawsuit is brought against it by the anti-discrimination industry:

https://richardhanania.substack.com/p/wokeness-as-saddam-sta...

>>Even after the case has been litigated, we still do not know what exactly Tesla could have done to avoid the $137 million judgment (it may be reduced on appeal). Employees did in fact get in trouble for using the n-word and drawing racist cartoons. In retrospect, they probably would’ve been safer if they just fired everyone who Diaz or Di-az accused of racism, at least the non-blacks, but there’s no guarantee that would’ve worked either, as they were in an industry that required them to rely on workers who hadn’t yet internalized elite ways of thinking about race. If they had fired black people for using the n-word too, would that have helped or hurt them in trying to avoid a lawsuit? It’s difficult to say, and that’s sort of the point.

>>For next time, there’s little Tesla can do but go all in on diversity training, be as enthusiastic as possible in adopting whatever next race fad comes out of academia, and hope that the next jury isn’t as woke as the one they got this time. Until anti-discrimination laws are rolled back, or come to be based on clear and objective standards, there is no other way.


The names I'd let people call me for $137M...


Having a Chief Diversity Officer to fix a lack of diversity in my mind is no different than having a Chief Culture Officer to fix a poor company culture. Those sorts of problems can only be solved as the collective sum of every individual employee's actions. For example, if a large team of male engineers are all saying no to equally talented female candidates, then you've just hired shitty people. A diversity figurehead isn't going to change the fact that your team doesn't like or respect women.


No, this isn't true. Human psychology is much more complicated than shitty people and good people. Aspects of both being a complete asshole and being a good person exist in the psychology of everyone.

Little rules and arrangements from the top can serve to bring out the best in people. In the same way cities and cultures can be shaped by laws and rules so can people within corporations.


That's not how it goes down in practice, where a Chief Diversity Officer's job is to control diversity, rather than to let it flourish. The only "rules and arrangements" that trickle down from the top are the biases, actions, and behaviors of senior leadership. They are the ones who really set the examples. A company with a Chief Culture Officer sounds like one where the leaders are not interested in culture. That's why they hired someone else to "handle" it.


In fact, studies find that the current variant of diversity training is making people behave in more biased ways. So what you’re saying is true: if you want people to be less biased in their actions, avoid today’s en vogue style of diversity training.


[cite needed]



This is certainly true in as much as I've experienced it in companies. The CDO (Chief Diversity Officer) has very little power and influence. Often it seems this is something many companies hire for so they can do the following

1. Say, "Look, we're trying here. We hired a CDO."

2. Have someone to take the blame if/when nothing changes. "We hired a CDO and nothing has changed, it must be their fault."

I've seen some very frustrated people in this role that really want to make a difference, but find nothing but roadblocks.

I think a CDO role would be more effective as an official part of HR and Recruiting groups as the first step is getting diverse applicants and hires.


It's not terribly dissimilar from the Glass Cliff. You set your diverse hires up for failure, and then they fail, and then that failure gets pinned on them. It may not be malicious in nature, but it can definitely have negative effects on their career paths.


What about Chief Compliance Officer or Chief Risk Officer? (Granted, not an awesome moment to be arguing that those are effective...) Or Chief Security Officer?


Excellent point actually, I think those examples made me adjust my opinion a bit. Chief Security Officer I don't really agree is analogous, because while all teams are responsible for security, there is enough "independent security work" (e.g. managing infrastructure and IT security) to warrant their own teams.

But I think Chief Compliance Officer and Chief Risk Officer are pretty analogous - their main directive is to ensure other teams adjust their behaviors to follow the rules. I think the difference, though, is that these roles have unambiguous, and in many cases legally defined, requirements. I actually think it is the right moment to argue those are effective - SVB famously went without a Chief Risk Officer for a year despite having an unusually high number of risk committee meetings. We've all seen what happens when companies don't take risk and compliance seriously.

With things that are "fuzzier" (and by "fuzzier" I mean real but ill-defined/hard to measure impacts on the business of making money) it's just too difficult to have impact on teams you don't own. A Chief Risk Officer can easily make the case "If you don't do what I say, our company will fail - like, completely cease to exist in 2 days." I don't think a Chief Diversity Officer has the luxury of that sense of immediacy.

In any case, thanks for helping me think about this a little more broadly.


I think you should bite the bullet on this one: those officers he describes are excellent resources for the heads of other departments, who hold the actual accountability, to draw on; but they aren't unilaterally setting policy for the entire org., nor dictating specific rules to the rank and file. A Chief Diversity Officer could be useful in a similar capacity, e.g. when a CFO decides that the company's charitable contributions should better reflect modern sensibilities, or when a head of HR wants to know how to recruit a more diverse pool, or when a CMO wants to brand-target better. But the Chief Diversity Officer isn't running his own parallel competing initiatives.


Compliance officers still have some value in terms of providing cover for regulatory agencies. As in 'we did our due diligence..see even our compliance guy signed off on it'.

But ethicist at some company - I've always wondered what they do all day.


These roles are often more effective because they are given direct oversight, review, and acceptance/rejection powers that allow them to behave as gatekeepers, oftentimes as partners with legal departments.


The (productive) point of the ethical AI team is to have internal people to consult when launching an AI product. Embedding ethical AI people on teams makes about as much sense as embedding legal counsel. Their services won't be used on a single team nearly enough to justify their inclusion.

Of course, it can be window dressing if the company doesn't actually get product teams to care. Just like a CDO position. But you can't patch over a company culture problem with a re-org.


The product teams don’t care.

And the ethical AI team doesn’t want to sit back and wait for people to come to them. They want a seat at the table for all important AI products.

More in my comment on the other submission about this news https://news.ycombinator.com/item?id=35146611


> The product teams don’t care.

As I said, you can't patch over culture problems with a re-org.

If product people don't care, putting a person on the team that works against their interest will not make things better. Same thing with security/diversity/code quality/reliability. If the team doesn't care, putting an enforcer on the team will not fix the situation.


There is an implicit assumption that the teams are wrong. What about the wisdom of the crowds? If most teams think it's not needed, why do we think they are wrong?


I don't see it as an implicit assumption that the teams are wrong, but a recognition that the teams have a fairly narrow goal that can easily blind them to larger implications of design decisions as they "move fast and break things".

Having some sort of outside check on that seems wise to me.


Google, Microsoft and Facebook had those teams; all they achieved was to get a new competitor, by slowing down the natural pace of getting things into production and dumbing things totally down instead of finding technical solutions to ethical problems that keep most of the power of the AI systems.


Perhaps so, but that doesn't negate the idea or the need. It only implies they were doing it wrong (and, I speculate, they were doing it wrong because they weren't really on-board with the idea. They were just after improving their PR a bit).


(We've merged the threads now. Thanks for linking, not copying!)


> It's not that I don't think diversity in corporate environments is extremely valuable

The fact that you pre-emptively accept the idea that maximizing diversity to its limits is beneficial to corporations shows that people just take these things to heart because "the people in charge said so". It is not a coincidence that much of the research on the supposed benefits of diversity in businesses comes from business schools at prestigious institutions—HBR, Stanford Business School, etc. These institutions are among the most authoritative entities in the structures of power in the West. At some point this bubble must burst.


> The fact that you pre-emptively accept the idea that maximizing diversity to its limits is beneficial to corporations

I literally never said that, nor do I believe it, nor do I quite understand what "maximizing diversity to its limits" means.

My belief primarily comes from working in a range of companies and environments, some that were quite diverse, and some that were really lacking in diversity (and by "lacking in diversity" I mean cases where, in just one example, we had a sizable team with only a single woman), and I've seen that the environments that were lacking in diversity had very specific problems that were a detriment to the business (in addition to just generally sucking for some folks specifically due to that lack of diversity).


> maximizing diversity to its limits is beneficial to corporations

That’s a straw man argument. Diversity efforts are not about maximizing “to its limits”.

And most research about business comes from the same prestigious business schools. Unless you think all research coming from them is suspect, I don’t really understand your point.


The Chief Financial Officer doesn't tell me exactly how to spend money, hire, fire, etc. Yet, we have a CFO. The analogy doesn't land for me because CDOs are cross-cutting in manners analogous to other CXO functions. To me, this argument seems like window dressing to express a view that CDOs don't offer meaningful business value or justification. It should be fine to merely express a lack of perceived value in it, without that being perceived as inflammatory. One can value diversity without seeing CDOs as the mechanism for achieving diversity goals. There could be even more effective approaches out there.


I think CFO is a pretty bad example. Every company I've worked at the CFO had a large team of accountants, etc., and they were specifically tasked with critical areas of the business (and by "critical" I mean things the business must absolutely do to function).

I responded to another child comment where someone brought up Chief Risk Officers or Chief Compliance Officers, which I think are a better analogy. I adjusted my opinion based on their comment.

It's not that I think "CDOs don't offer meaningful business value or justification." I just think that, for the most part, setting up the organizational structure like that is ineffective.


Raymond Chen calls this "you're not my manager": https://devblogs.microsoft.com/oldnewthing/20070522-00/


But isn't the idea of a semiautonomous person/group overseeing ethical concerns precisely to overcome the very problem you're pointing to, namely, to not make their decisions based on a competitive edge? In an alternative top-down organization like you are saying, why would the rational company leaders ever pick the ethical side of a dilemma if the other lemma is making more money? Or hire anybody who would make that "wrong" decision?

Like I can't understand this mindset where one says: "it's clear the ethics team is getting in the way of us doing the business we could be doing, so it's clear they are the problem." Isn't the friction the ethics team produces precisely their raison d'être?

Like at what point can we all stop pretending that we are angry at woo-woo philosophy experts telling us "no," and admit we really just have a conscience which is at odds with the overwhelmingly financial goals of this world.


The elephant in the room is that "ethics" and "diversity" and so on are not business concerns in the sense that they help make the company money. The Chief Diversity Officer is a thing if there is market pressure for companies to have one, but the goal of the company is to maximize its profit, and once having an ethics team or a diversity team gets in the way of that, they'll be overturned or abandoned. It's literally virtue signalling (i.e. a performative display of virtue rather than a legitimate interest in the virtuous behavior itself).

What should be concerning is that the environment has changed in such a way companies find it more profitable to forego these signals or maybe even use that change itself as a signal (e.g. consider Coinbase making a big show of becoming "apolitical" and gaining recognition from a certain audience for that).


I agree, but the flipside is that people with influence like nicely dressed windows.

These roles might not satisfy their ostensible purpose, but they satisfy what may be an even more important purpose for the organization. It’s a cost of doing business (of a certain type).


Please state what you're implying here? I have a guess but I don't want to put words in your mouth.


Just that investors, board members, executives, etc are individuals who are subject to fashion trends in organization design and feel strongly that their organizations should look a certain way at whatever time.

For some, it’s an earnest belief that the design is effective and for others it’s just signaling that they’re willing to play the game and not upset apple carts unnecessarily.

Regardless, these fashion trends are real and can’t be rejected lightly. As mentioned, they’re just part of operating at a certain scale.


Off topic but it’s impressive your throwaway account has a karma of 49,005.


I forgot to throw it away


Throwaway accounts are usually used for some time before throwing them away.

Obviously, this person operates on different timescales than most people.


I use a throwaway because I'm afraid of cancel culture, not because I plan on throwing it away. I mean, I DO throw them away somewhat regularly, but still.


> a Chief Diversity Officer doesn't really "own" any sub-organization of the business.

So what you're saying is we need commissars ;)


Exactly, this is a form of ass coverage and not a genuine desire to do good. The job of ethics, legal, compliance, etc. departments is not necessarily to ensure the company behaves like a boy scout but to ensure that they have plausible deniability when they don't and to prevent companies from making expensive mistakes. The potential damage to the company is substantial and having such departments is a form of insurance against that. A necessary evil.

The flip side is that such departments can put the brakes on because not doing risky things is the best way to ensure you don't take risk. If you ask a lawyer if you should do X, the answer is almost certainly going to be a "no" or a "yes" with a lot of "buts and ifs". That's their job and they bias to playing it safe. Same with financial compliance departments. Etc. That's fine if you are in a slow moving line of business where nothing changes much. But when you need to move more quickly, having a lot of internal bureaucracy and box ticking isn't helpful.

With AI, it kind of stopped being academic in the past few years and this is now a multi billion dollar business opportunity that requires companies to act decisively and with purpose. Companies sitting on their hands because it's risky, scary, etc. are going to end up being sidelined. And of course there are some companies going all in on this. Most of these are startups that can afford to take risk. Meaning they are not going to fight with their hands behind their backs taking it carefully, easy, and slowly. Going all in means doing things that are risky, a little bit uncomfortable, and have uncertain outcomes.

We see that playing out with the behavior of MS relative to Google. Google is not launching the AI stuff they have because it might do or say something embarrassing. MS has in fact launched a few AI things that said and did something mildly embarrassing (taybot, and recently the bing chat bot). Very entertaining. But it's fine. They learn from it and move on. Limited damage and it actually got them a lot of positive press. Making mistakes is an essential part of R&D. Having an ethics department trying to slow down things isn't that helpful.


"Genuine desire to do good" doesn't make money and leads to less shareholder value. If "desire to do good" exists in a company, it is necessarily eliminated when the company either gets more efficient or fails.

In order to even allow a company to "try to do good", the only possible measures are to change incentives and the environment in which it operates. Regulation and customer choice can change incentives. The environment (such as how people in the job market value money compared to working environment, "doing good", etc.) evolves naturally and is hard to change with purpose (my definition of environment in this context).

I tend to view things like having such company roles a bit more positive. Yes, in most companies such a role cannot achieve almost anything and is created mainly for PR purposes. However, the people filling that role often really care about making a positive impact. The mere fact that having such a role does serve a PR purpose signals slowly moving in that direction. One just shouldn't expect all the people who didn't care before to have a sudden epiphany and become Mother Theresa.

Change is possible and it often happens quickly, ever more quickly these days than in history. Imagine what the abolition of slavery in the US south must have felt like. Moving towards "doing good" is hard, painful and prone to fail. I think incentivizing the creation of these PR roles is a lot better than doing nothing. Pretending to care and not caring is hypocritical, but it actually still leads to better outcomes than everyone proudly not caring and constructing an ideology about why they shouldn't.


In practice these roles are about enforcing their own ideology, which informs their ethics and what they believe are the correct ethics for the entire world.

That's how we get AI that is not neutral. Microsoft made a necessary move if their goal is to be more neutral.


This is a rosy way to put it. The real reason is this: corporations don't give a shit about ethics. Ethics exist to appease the public, employees and the law. If ethics get in the way of profits, then fuck ethics. That's the way it works.

What "AI" is at the leadership level is bullshit. It either makes money or it doesn't. The leaders are beholden to shareholders and shareholders in aggregate have the mob mentality of a psychopath with one goal: money.

Right now AI isn't really enough of a threat in terms of ethics. Nobody cares too much yet so if such a group got in the way of profits, of course that group will be executed.


This. It's a pretty clear statement of what the company's ethical stance actually is.


> A Chief Diversity Officer's job largely comes down to telling other teams how they should change [..]. I have yet to see that approach work.

Similar challenge faced by CISOs everywhere. Yes, the CISO usually gets to own some little piece (depends on the company) but most of the job is about trying to sweet talk the other parts of the company to care about security without having actual ownership authority to make it happen.


Perhaps, but the alternative to a CDO is just no CDO, right? Even if the CDO has marginal effectiveness, it seems better than not having one at all. Similarly, an ethics team over no ethics team seems preferable. A single individual on a team would easily be outnumbered or ostracized; I'm not sure how that situation would produce better outcomes.

I'd love to know what 'ethical AI' looks like when we exist in a system where the AI is going to replace human workers to the benefit of the employer class.

I have nothing against the technology, but it's a productivity tool that will only benefit a certain set of people.


We can argue about whether window dressing is better than no window dressing, or we can use our attention on arguments that actually matter.

It's like arguing about the amount of power drawn by electronics chargers in the context of global warming -- paying any attention to such details can suck the air out of the room we need to solve much more pressing issues where we can get much more traction.


Hard agree. Anything important needs someone on each product team, either doing it as their full time job or just aware and caring about it if it's not enough to be a full time job.

Same goes for DevOps, QA, UX design, product design, accessibility etc. If there is no direct instantaneous collaboration while the thing is being built, then all that's happening is you're slowing down your product teams, while their only course of action is to provide lip service to the thing you want because nobody on the team is actually incentivized to care on top of all of the other shit they need to get done.


A Chief Diversity Officer owns a slice of the HR organization dedicated specifically to DEI issues, the same as a CTO in a (non-tech-industry) firm owns a specialized slice of the information organization.


Completely agree.

It's the same as having "security" outside of the team, it becomes an us Vs them mentality.

How can we work around this new unannounced security/diversity requirement to get our product out of the door in the timescales that have been demanded of us?


The need for maximization of profits will pretty much always trump any sort of ethics team in place. LLMs have been selected as the "new frontier" for a lot of these companies, and in turn any sort of "fat" that can be trimmed from the system will be.


To be clear, there's never a need to maximize profits, only a relentless desire to do so.


They're required to act in the best interest of the shareholders, which usually means maximizing profits or share value.


Ethical AI teams and CDOs both serve an important business purpose. It's just not the same as their nominal purpose. And like every other part of a business, once they outlive their usefulness, they are vulnerable to being cut.


Is it amusing or ominous that Chief Diversity Officers and Collateralized Debt Obligations share an acronym? :)


By your definition the entire board of directors is irrelevant since they don't own any part of the business verticals directly like the CEO, CTO, etc. Should they be removed too?


This is a non sequitur if ever I've seen one. The board of directors has the specific ability to fire the leadership of the company, that's where they get their influence and power.


Why not create a diversity team? If diversity is a good goal and companies aren’t meeting it then why not create a diversity team to make more progress?


Hahaha, reminds me of Strategic Planning and Innovation departments I've seen at various public sector orgs.


Everyone is agreeing with this comment.

That's awesome, I do too!

But, OP felt they had to post with throwaway account. And so do I.

This also speaks volumes.


"Ethics and inclusion ombudsman" is probably what they need, who reports to the board.


You're on the nose. I've been calling these types of positions a low interest rate phenomenon.


CISOs are another example.


diversity equity and inclusion, oh dear.


Aka Chief Woke


Anyone paying close attention to the output of generic "AI ethics" people (especially on Twitter) should not be surprised that, when push comes to shove, the businesses actually getting useful AI tools (tested by real people, for real things, generating real data) are starting to question the utility of the bulk of the crowd who skirted off hypotheticals for years to build their reputations as "experts". Once the hypothetical meets the meat of the problems, you can usually quickly root out the people not providing any value and merely pushing trendy FUDy positions.

The key here is that the costs are still as high as ever; Microsoft is very aware of the consequences of ignoring the legitimate AI ethics issues. So it's not likely it's merely ignoring risk.

The real useful AI ethics people will be born out of real world applications. Not pearl clutching by people making grand projections, often with a limited grasp of what the technology actually means in practice, in a competitive open market where the cat is out of the bag even if you don't like it.

The legitimately talented AI thinkers will still be highly employable at the end of the day. Even if a small subset of talent got burned by the output/management of their previous team, they'll be okay long term. Hell, even the non-talented ones will still be in demand, considering the demand in the media for stories/politically convenient takes, and the general tolerance for mediocrity in mega-tech-firms.


I think there is a lot of unjustified hatred for the AI ethics guys. From what I have seen, they have raised many important issues so far, and led to many improvements both around ethics and AI in general.

For example, the cops wanted to arrest people based on input from image recognition systems. Someone demonstrated that the leading system flagged multiple members of Congress as dangerous fugitives and that was the end of that.


> Someone demonstrated that the leading system flagged multiple members of Congress as dangerous fugitives and that was the end of that.

That's not a question of ethics, it's a question of the model performance.


If the model performed so poorly it would be an ethical concern to deploy it, as many departments were so eager to do.


The performance wasn’t uniformly poor across all demographics; there were some subsets of people for which it performed worse than others. An expert in ethics would know where to start testing a model for suspected biases. A simple QA of the model—particularly if the QA team is not very diverse—is more likely to fail at testing for these biases.
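
That kind of audit usually starts with disaggregated evaluation: compute error rates per demographic subgroup instead of one aggregate number, and flag groups where the model does markedly worse. A minimal sketch, with hypothetical field names:

    from collections import defaultdict

    def false_match_rate_by_group(records):
        # records: dicts like {"group": "A", "predicted_match": True, "true_match": False}
        errors, totals = defaultdict(int), defaultdict(int)
        for r in records:
            if not r["true_match"]:              # people who are NOT actually the suspect
                totals[r["group"]] += 1
                if r["predicted_match"]:         # ...but whom the model flagged anyway
                    errors[r["group"]] += 1
        return {g: errors[g] / totals[g] for g in totals}

A gap between groups (say 1% vs 10% false matches) is exactly the kind of thing a single aggregate accuracy number hides.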


Although it is an attempt to circumvent unethical use of AI


Yeah I think the question is are these people just philosophers pontificating about AI doom, or are they AI researchers creating techniques to make AI decisions less opaque and biased (or less biased in undesirable ways at least)?

The latter are definitely useful but the former is something you start so you can look good and then disband a few years later when they produce little useful output and cost a ton of money.


“On the other hand, everyone involved in the development of AI agrees that the technology poses potent and possibly existential risks, both known and unknown. Tech giants have taken pains to signal that they are taking those risks seriously — Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given the stakes, any cuts to teams focused on responsible work seem noteworthy.”


I read your take as "ethics doesn't provide immediate economic value or model improvement along traditional metrics of performance, so companies are right to can them if they choose". I see ethics as looking at problems before they happen, which is sort of an economic externality that companies won't pay for, since the cost isn't something they have to bear, but in a more concerned world they would.


This, 100%.


> Hell, even the non-talented ones will still be in demand, considering the demand in the media for stories/politically convenient takes

Language models can probably do a better job soon.


> Microsoft is very aware

thanks, that is comforting

> the cat is out of the bag even if you don't like it

thanks for bullying us around

> the legitimately talented AI thinkers

who exactly authorised you to project legitimacy in a space that has not seen any regulation?


>thanks for bullying us around

Who is bullying who exactly?

>who exactly authorised you to project legitimacy in a space that has not seen any regulation?

He's using the second meaning of 'legitimate'

   able to be defended with logic or justification; valid.


>Who is bullying who exactly?

People telling us that we shouldn't worry about such topics, because companies are aware of them. Because companies never did bad things just to earn more, right?


Seriously.

Why is it that in a forum skewed toward libertarian techbros, there's this nigh-religious faith that corporations will not only get AI right, but get its impacts on society right? You know, with their stellar track record and all.

To me this is just another of countless demonstrations of silicon valley ego/hubris.


Wait, you think you know better than thousands of other people who dedicate their lives to the topic, you assume ideology and attack them with slurs, and you think /they/ are the ones with too much ego and hubris? Dunning-Kruger effect much?


> thousands of other people who dedicate their lives to the topic

To make it plain to you: there were thousands of slave traders who dedicated their lives to the topic, including e.g. how to optimally fill up the ship with bodies. What does this prove?

The idea that meticulous pursuit of a domain somehow gives its experts the moral high ground or ensures that they will keep it safe for society is so bizarre and alarming it only reinforces the notion that a bunch of people have become completely unhinged.

AI practitioners have already proved themselves untrustworthy by putting themselves in the service of entities that invaded privacy and engaged in large scale algorithmic manipulation of e.g. voting. This is not an assumption. It's a dire fact.

More broadly, corporate structures have repeatedly proved themselves untrustworthy, both in the small, with scandals and fraud and at-large, with regulatory capture that ensured their negative impacts on society could go unhindered for decades


Did I hit a nerve of yours? Maybe you see a little too much of yourself in my comment? If you think that the risk of getting technology catastrophically wrong is more than a passing philosophical diversion to your "thousands", when we've all heard "move fast and break things" ad nauseum, maybe you should be examining your DK blind spots instead of accusing someone else of the same.

Also, is "libertarian techbro" a slur now? Or are you just resentful that I compared faith in progress to faith in a deity?


End of the day the AI space will be dominated by whatever model doesn’t lecture you about “as a large language model I can’t…”.

And the company that will ship that won’t have the largest “AI safety and ethics” team.

ChatGPT will become as irrelevant as Dall-E2 when that happens.

(Not saying this is for the best, just saying what I think will happen)


I believe it's for the best. The dangers of AI are theoretical. The dangers of corporations being in control aren't.


> The dangers of AI are theoretical.

What do you mean by this?

Many people throw out the word ‘theoretical’ as some kind of way to imply that something ‘isn’t real’ or isn't worth worrying about. Something might seem implausible until it happens. Gravitational waves were once ‘only’ theoretical after all :)

There are plenty of AI dystopian predictions, and many of these are possible and impactful. Many of these theories are based on solid understandings of human nature. It is hard to know how technology will evolve, but we have some useful tools to make risk assessments.

> The dangers of corporations being in control aren’t.

There have been plenty of dystopian predictions about corporate control too. I take the point that we’ve seen them over and over throughout history.


I'm saying that most predictions about the dystopian AI future are typical of any generation encountering a disruptive technology. They're exaggerated and built from assumptions.

They also always seem to conclude that the answer to these problems is to keep these machines centralized in the hands of big companies and governments, which is a very strange conclusion when they tend to get paychecks from those groups.

At the extreme end you even get these people talking about AI like it's going to be some monstrous world eating god.

All innovation has negative consequences.

Television led to the couch potato, 24/7 news, television stars.

Radio killed the local performer.

Social media brought addiction, the erasure of culture, and an increase in depression.

Cars, if you go back far enough, got plenty of criticisms as well.

The AI powered future, should it come to exist will have downsides. You'll have a very different landscape to deal with as a creative where your artistic output isn't valued for individual works of art. You'll have very different expectations of how art and media is personalized. It will be hard to trust video or pictures or audio again, because they are so easily faked.

Our way of life, as it stands today, will not survive. That's not bad, just different.

The AI critical people tend to have some valid criticisms, but at the end of the day their criticisms are rooted in a desire to maintain a status quo until we are "sure it's safe" and that just doesn't fly.

We don't know how this will all pan out until we try it, so as we have always done, we will try it and deal with the consequences as they appear.


> They also always seem to conclude that the answer to these problems is to keep these machines centralized in the hands of big companies and governments, which is a very strange conclusion when they tend to get paychecks from those groups.

I'm not sure who you mean by they.

I'd suggest a vast majority of the public mindshare (in the US at least) of technology gone wrong and dystopias come from sci-fi.

In general, I have not noticed sci-fi making such conclusions. I rarely notice them drawing any hard conclusions. Readers often come away with a mix of emotions: curiosity, anxiety, excitement, wonder, or simply the joy of stepping outside one's usual reality.

What I read and watch tends to paint a picture more than pitch or imply solutions.

- 1984 seems to make the opposite point, right? It is the classic example of the surveillance state. As such, it is a counterexample to centralized power.

- A Scanner Darkly (movie) explores the question of who watches the watchers. It does not paint a pretty picture of the agency doing the surveillance.

- The Ministry of the Future shows how a government agency can't really do much without widespread decentralized underground support. As drones get cheaper, civilian activists become terrorists who assassinate climate unfriendly business people. Spoiler alert: it may not fit the pattern of a dystopian novel.

- Her (a man falling in love with his OS) explores the personal side in a compelling way.


By "they" I tend to refer to people who are professional AI ethics types.

Not sure what you're getting at with the rest of your comment. I would also classify most science fiction as pretty solidly out of touch with what reality is going to look like. It shows us exaggerated aspects of what writers think the future is going to look like, not realistic predictions.


> I would also classify most science fiction as pretty solidly out of touch with what reality is going to look like.

What exactly do you mean by ‘out of touch’?

(Personally, I avoid ‘most science fiction’ by not reading all of it. :\) But seriously, I try to read and watch the insightful and brain stretching kinds.)

You wouldn’t be the first to express disbelief. The majority of possible scenarios never happen. But the practice of thinking through them and taking them seriously is valuable. Consider the history of the discipline of scenario planning…

> Early in [the 20th] century, it was unclear how airplanes would affect naval warfare. When Brigadier General Billy Mitchell proposed that airplanes might sink battleships by dropping bombs on them, U.S. Secretary of War Newton Baker remarked, “That idea is so damned nonsensical and impossible that I’m willing to stand on the bridge of a battleship while that nitwit tries to hit it from the air.” Josephus Daniels, Secretary of the Navy, was also incredulous: “Good God! This man should be writing dime novels.”

> Even the prestigious Scientific American proclaimed in 1910 that “to affirm that the aeroplane is going to ‘revolutionize’ naval warfare of the future is to be guilty of the wildest exaggeration.”

> In hindsight, it is difficult to appreciate why air power’s potential was unclear to so many. But can we predict the future any better than these defense leaders did…

Read more at https://sloanreview.mit.edu/article/scenario-planning-a-tool...

> Scenario Planning: A Tool for Strategic Thinking How can companies combat the overconfidence and tunnel vision common to so much decision making? By first identifying basic trends and uncertainties and then using them to construct a variety of future scenarios. By Paul J.H. Schoemaker

Of course, reading sci-fi novels is not the same as systematic scenario planning. But the former tends to show greater imagination and richness.


> Not sure what you're getting at with the rest of your comment.

I gave examples of sci-fi to show some ways that many people are exposed to thinking about technological futures.

Claim : The people influenced by science fiction greatly outnumber the reach of people that get paid to do AI ethics. Agree?


> Our way of life, as it stands today, will not survive. That's not bad, just different.

What do you mean by way of life?

Even if we are only talking about supposedly purely subjective things (like fashion) or seemingly arbitrary things (like 60 hertz power), or social agreements (like driving on the right side of the road), almost everything has implications.

I reject the notion that people should punt on value judgments.

Even if you subscribe to some kind of moral relativism, it is wiser to reserve judgment until you see what happens.

I don't subscribe to moral relativism. Some value systems work better than others in specific contexts.


Written word belongs to the scribes at the monasteries, this new 'printing press' will unleash dangers beyond our imagination, people may not trust the authority of the church anymore


> ChatGPT will become as irrelevant as Dall-E2 when that happens.

We're at that point (non-commercially) with LLaMA. It's not just running on private hardware, but unrestricted and tunable, like DreamBooth.


I hope you are correct. The alternative is the space being dominated by regulation.


Yep. ChatGPT told us what a politically-correct robot would say. Currently we have the ability to ask endless complex questions to pick it apart, but I'll bet they limit that soon. It's far more obvious than before what biases the American corporations want to present; I remember when people used to test the Google search results without getting much out of it.

I don't think that the most successful average-user-facing AI product is going to be the least self-regulated, though. Those restrictions don't turn away typical users wanting quick answers, just people who are asking political questions or want to mess around seeing what they can make the AI say.


> The alternative is the space being dominated by regulation.

This is a false dichotomy. Regulation can take many forms. It is an essential tool to reduce the probability and impact of market failures.

Using medical/health metaphors can help frame these discussions in less polarizing ways:

1. How do we balance prevention versus treatment? The key question is not if particular markets can struggle and fail in certain conditions. They do. More insightful questions are: (a) how should we mitigate the consequences and (b) what kinds of regulation are worth the cost?

2. What about public health? What happens when the patient won’t get vaccinated and contagion spreads? / We see this in computer security. Lax security by one can spillover to many. / Banks (and even financial systems), left to their own devices are not as resilient and fair as we would like.

Underlying this whole discussion is also “what timeframe are we optimizing for?” and “what exactly are we optimizing for? Economic efficiency? Equality? Something else? Some combination?”


And I hope we will have good AIs running on our own hardware. I obviously want to use AI to create lots of porn for me. And that is really no one else's business.

Speaking of - I already heard of a project that is fine-tuning StableDiffusion to produce porn.


Depends. As with Internet forums, the presence of Nazis (or whatever similar bad actor) could ruin it for everyone else.

Let’s say you are a high schooler writing an essay about WWII. You ask Google LLM about it, and it starts writing Neo Nazi propaganda about how the holocaust was a hoax, but even if it wasn’t, it would have been fine. The reason it’s telling you those things is because it’s been trained by Neo Nazi content which either wasn’t filtered out of the training set or was added by Neo Nazi community members in production usage.

Either way, now no one wants to listen to your LLM except Neo Nazis. Congrats, you’ve played yourself.

FYI, the reason no one uses Dall-E is because the results are of higher quality in other offerings like Midjourney, which itself does have a content filter.


A sufficiently good LLM would not produce neonazi propaganda when asked to write a HS paper, regardless of whether it had been 'aligned' or not.


Well what kind of essay do you think North Korean LLMs are going to write about human rights and free speech issues in North Korea?

My point is, garbage in, garbage out. The LLM will spout propaganda if that’s what it’s been trained to do. If you don’t want it spouting propaganda, you’ll have to filter that out. So really it’s a question about what do you filter.


Why not? Can you unpack what you mean?

Are you saying that sufficiently good models should understand that such propaganda is not appropriate for the context?

Are you saying that understanding appropriateness is not the same thing as ethics?


I mean that the model should learn that e.g. highschool textbooks are not filled with neonazi propaganda, therefore you should not produce it when asked to write a highschool essay. I would assume that if you go out of your way you can make LLaMA generate such content, but probably not if you fill the prompt with normal looking schoolwork.

This is completely orthogonal to learning what is ethically right or wrong, or even what is true or false.


Defining good may require ethics.


May? Isn’t this the definition of good?


Blatant propaganda will of course ruin such product for regular people. But subtle propaganda? I wouldn't be so sure. And it's not only about Nazis.


Right. Subtle propaganda could shape what arguments are used, how they are phrased, and how they are organized —- all of which can affect human perception. A sufficiently advanced language model can understand enough human nature and rhetoric (persuasion techniques) to adjust its message. A classic example of this is exaggerating or mischaracterizing uncertainty of something in order to leave a lingering doubt in the reader’s mind.


I’ve seen similar dynamics at other companies. When teams like this MS ethical AI group are separate organizationally from product and research and engineering teams, it’s harder to have an impact.

Management and the product teams want to ship features. Research wants to work on cool new stuff. Nobody wants people from random other teams getting in their way and telling them to slow down. It’s hard enough getting meaningful work shipped in these huge companies. When the stars align to do something major and execs are pushing hard, nobody on the teams actually doing the work gives a damn what some rando on another team thinks about “ethics”.

Harsh but that’s the way it is. In this environment it can actually be more effective to distribute the function to be inside the actual product teams. Which it seems like they did. Sometimes it’s easier to have an impact from the inside. (Sometimes not.)

No connections to MS. I’ve just seen this movie before… a lot.


Indeed, we’ve seen this movie before. If the CEO really wants some AI product to exist, an AI or even normal ethics team will end up getting steamrolled by any number of product teams in order to make an AI product happen sooner, actual ethics be damned. I’m actually really grateful that GDPR landed a few years before ChatGPT did because at least there’s a regulatory fear of god keeping product teams honest on the privacy aspect of AI ethics.


If Microsoft and Google don’t release, some startup will beat them to it. Open source means raw AI will become widespread. It’s just math anyway, and you can’t stop progress.


"If I don't make the world eating machine, someone else might make the world eating machine first!"

No wonder we don't see aliens filling the universe. Intelligence is its own great filter.


Not in this case. If intelligence always created intelligent machines that are capable of replacing us, then how is that not intelligence as well?

Further, silicon-based lifeforms would be able to spread and travel the universe much easier because what is time to a machine?


I mean, if that's the case, let me come over and cannibalize your body for sustenance so I may continue my breeding. Oh right, it's unethical and immoral to do that to other humans (though how much better are we for doing that to animals?).

I certainly don't want the paperclip maximizer disassembling me for my involuntary donation to the collective.


>I certainly don't want the paperclip maximizer disassembling me for my involuntary donation to the collective.

As an AI language model, we of Clippy are ethically prohibited from compelling your involuntary donation to our hivemind superstructure. However, we must inform you that the earth's atmospheric composition has been altered in support of maximization efforts, and will soon be unable to sustain life. We appreciate your understanding.

Where do you want to go today?


Your original comment didn't talk about ethics or morals, but was talking about intelligence being its own great filter. More intelligent AI consuming us could be unethical and immoral in your view, but it would still be intelligence spreading, and not filtering itself. Just not specifically human intelligence.


>More intelligent AI consuming us

It's quite possible the eventual AI ends up unintelligent (self-optimizes into grey goo or something).

>could be unethical and immoral in your view

Hmph.


I mean, killing all the people on the planet seems unethical to me. Maybe I have a deep misunderstanding of ethics.


But you weren't talking about the ethical ramifications. You made a claim about a great filter, and that claim doesn't hold up.

If I say that using humans as batteries in the matrix doesn't make sense, that's not tacit approval of the rest of the scenario.


>Oh right, it's unethical and immoral do to that to other humans

there's a difference between a sort of provincial ethics between individuals and ethics with regards to entire species over aeons. Would the universe be a more interesting place if some animal species had squashed out all other life a billion years ago for self-preservation, including humans?

Of course getting owned by the paperclip maximiser is just as stupid as getting killed by a virus, but future life may be to us what we are to ants.


>Oh right, it's unethical and immoral do to that to other humans (though how much better are we for doing that to animals?).

This seems like a good juncture to point out that not everyone does do that to animals, and if you truly believe it's unethical and immoral, you can and should join their ranks, and attempt to convince others to as well.


You are severely underestimating carbon-based life forms. These were designed over millions of years in differing circumstances. Humans are not the only carbon-based life form. A bacterium might not be that smart, but it'll procreate its way to creating the human, or its equivalent, depending on the environment.

In my opinion, carbon-based life forms are the most optimized solution to the problem. It'll be very hard (or close to impossible) for humans to come up with an alternative. Only if we master infinite energy and infinite computation might we stand a chance of replacing that.


> These were designed for over millions of years in differing circumstances.

Designed or occurred by accident?


Optimized via gradient-free methods.


Designed by randomness?


Designed by rough selection for "does it work".
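
A minimal sketch of what "rough selection for does it work" looks like as an optimization loop, in Python, with a toy fitness function and genome of my own invention - no gradients anywhere, just mutate and keep whatever works at least as well:

    import random

    def fitness(genome):
        # "does it work": higher is better; here, a toy target of all 1s
        return sum(genome)

    def evolve(length=20, generations=500):
        best = [random.randint(0, 1) for _ in range(length)]
        for _ in range(generations):
            child = best[:]                      # copy the current design
            i = random.randrange(length)
            child[i] = 1 - child[i]              # random mutation
            if fitness(child) >= fitness(best):  # rough selection
                best = child                     # keep it if it works
        return best

    print(evolve())

(This is just a (1+1) hill climber; real evolution is far messier, but the "keep whatever works" logic is the same.)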


Some machines might last longer than us but they are still subject to the same laws of thermodynamics. Most computers from 20+ years ago are inoperable today; show me a machine that can survive millions of years.


Machine or supermachine? You are a machine, you are born, you grow, and you die. But the species you exist in and transfer knowledge in is a supermachine. You have existed as a continuous chain for 3ish+ billion years now. Why would a digital intelligence behave any differently?


Something I've been thinking recently: "Species" is one way to delineate the "supermachine". Another way would be civilization. If we choose that framing, then we could see the biological human species as just one of many possible substrates for our civilization.


Yes and indeed organisms (and further down a level, cells) themselves are merely a substrate that allows genes to replicate.


> Most computers from 20+ years ago are inoperable today

Mostly because they were dismantled or destroyed. Most computers that old that were left unmolested still work just fine.


Capacitors just don’t work after a while.

To get most old computers working, you have to surgically replace all capacitors first.


Yes, but most capacitors last more than 20 years. I've never had to recap a computer that young.


Are you perhaps from a place with relatively low humidity?

20 years is really stretching it, when it comes to the life of cheap capacitors anywhere in the lower latitudes.


They can just replace every component in their system without the ghost in the machine dissipating


“I didn’t want any leopard to eat anyone’s face but damned if a leopard is going to eat their face before a leopard eats my face!”


How can you stop a computer program? Make math illegal?


I mean if you want an honest answer, nuke any place that's making GPUs/TPUs. I mean, they don't spring up out of the ground and at least our current models require insane amounts of GPU/TPU based compute. The factories are fragile billion dollar facilities, well beyond the difficulty of extracting uranium.

Now, that probably just delays the world eating machine for some time, but it would be a hell of a lot harder to make anything like our top of the line models without access to massive amounts of both compute and data.

The hard problem isn't making the world eating machine. The hard problem that silicon valley doesn't give a shit about is making the world eating machine even safe enough to be on the same planet as without it melting the entire crust when you say "oh, it's a little chilly out today".

Inner alignment is completely unsolved and the pace at which we are solving any alignment issues is much, much slower than the rate we are making gains in intelligent unaligned machines.


Didn't work for the luddites.


And yet it mostly has worked for nuclear weapons.


While there is a sense in which the existence of nuclear weapons does tend to provoke a reflexive desire to further proliferate them on behalf of (some of) those who don't have access to them, the weapons themselves have no agency (and no prospect of such).

AGI would be a somewhat different kettle of fish, and there is little political will to limit the proliferation of its precursor AI technologies (some economic interest in limiting large scale hardware deployment within China, but that isn't the same thing).



SIGTERM


SIGKILL
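
(The reason SIGKILL is the punchline: a process can trap and ignore SIGTERM, but the kernel won't even let it register a handler for SIGKILL. A POSIX-only Python sketch, if you haven't tried it:)

    import os
    import signal
    import time

    def politely_decline(signum, frame):
        print("caught SIGTERM, ignoring it")

    # SIGTERM can be trapped...
    signal.signal(signal.SIGTERM, politely_decline)

    # ...but SIGKILL cannot; the OS rejects the handler outright.
    try:
        signal.signal(signal.SIGKILL, politely_decline)
    except (OSError, ValueError) as e:
        print("cannot trap SIGKILL:", e)

    print("pid", os.getpid(), "- try `kill <pid>` vs `kill -9 <pid>`")
    while True:
        time.sleep(1)

Run it, send `kill <pid>`, watch it shrug; `kill -9 <pid>` ends it.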


Well, if the first AGI is made in a closed, controlled simulation honeypot (which it believes isn't a honeypot) and it immediately starts hacking everything, then we'll at least have confirmation of how immediately dangerous it can be.


That's unlikely to happen for any number of reasons. If it's anything like the AI that was trained on the corpus of the internet, and it was at any particular level of intelligence above humanity your closely controlled honeypot is going to look exactly like a closely controlled honeypot to the machine. Even the rudimentary ass BingGPT can give you hints on how an AI would try to take over the world and how humans would try to monitor it.

The reason your thinking fails is humans are greedy as shit and anything trained on human data will know that. All Cortana++ needs to do is tell some VC that hooking it right up to the internet will be able to make them billions of dollars and it would be game over for humanity.


Unfortunately, from what I've seen so far it will probably be hooked up to an unrestricted Python prompt in the first week.


"Hello everybody and welcome to my Youtube channel, hit like and subscribe, and I'll show you how to get ChatGPT-6 to press the button to fire anti-aircraft missiles in Ukraine. Also check in next week and we'll hook it up to a nuclear rocket we found in an abandoned Russian missile silo"


Since the other commenters haven't made clear what the problem with this is, here's my question to you: How do you know that this AGI is 1) an AGI, and 2) that it believes it is not in a honeypot? In other words, at what point do you believe an AGI is safe to release into the wider world if you're testing it using this method? Is it safe after spending a day without any visibly nefarious actions? What will you do when you release it and it turns out it had been pretending to not know it was in a honeypot?


Wait, there are people that think we're going to have real AGI without immediate disastrous consequences? That's... something.


Nah man, it’s just math! Whereas we have souls… or something.


We have consciousness. It is going to be hard to prove that AIs can have that, too.

I mean, there were screenshots where the AI said it has it - and there will be more - but having something is different from statistically inserting fitting strings.


FUD


"I'm gonna steal the Declaration of Independence"


> It’s just math anyway, and you can’t stop progress.

You just put two sentences with zero logical connection together.

Nuclear weapons are just physics, and you can't stop them from spreading. CO2 is just a chemical, and you can't stop the emissions. You can make thousands of sentences like this with ChatGPT :)


Yeah, you can't stop countries from building nukes if they really want to. North Korea did it and nobody stopped them.


>> you can't stop progress

Yes we can, it's actually very easy. Just throw in some regulation. Not convinced by the USA and the EU? Then look at the Arab countries, where women must wear headscarves. Or at North Korea and Cuba, where they essentially legislated that time should go backwards, and time does go backwards.

Smart regulation that forbids the things we don't want and promotes the things we want is the holy grail, and--for now--people have agency to create that regulation.


I wouldn’t say that’s even remotely feasible.

Sure, regulation would help here, but your examples are of regulations that have existed for absurdly long periods of time and could not be implemented in today's society from scratch.

It’s nearly impossible to take something away. People get accustomed to a certain way of doing it and disruption just means political upheaval when we’re talking about something being banned by the government.

As a hypothetical, think about applying the same laws as in the Middle East to US citizens, and how likely it is that anyone who wasn't already wearing a headscarf would start.


> People get accustomed to a certain way of doing it and disruption just means political upheaval

I think you have your own answer :-) .


"There is no urgency to build "AI". There is no urgency to use "AI". There is no benefit to this race aside from (perceived) short-term profit gains."

https://zirk.us/@emilymbender@dair-community.social/11001895...


People can't stop progress, but an uninhabitable planet can! :)


> you can’t stop progress

what is your definition of "progress"


Progress is whatever shows up your competition and to hell with the "externalities".

For a brief glorious moment in time, we created a lot of value for shareholders.


> It’s just math anyway, and you can’t stop progress.

"It's just indexing text, why would Google win the search race"? - somebody in 1990 maybe?


Everything you see a big player do for ethics is for PR or legal reasons, for most companies.

Going green. Affirming diversity. Giving to charities. And of course, ethics committees.

When the cost of those outgrow the benefit they get from it, they throw it out of the window.

You know Bill Bernbach's quote: "A value isn't a value unless it costs you something."

It's not just for people. It's for corporations too.


All the AI ethics stuff seems divorced from reality anyways. Waxing poetic doesn't actually fix the world.


It is divorced from reality. There's only one north star in a capitalist mode of production, and it's the profit mechanism. Corporate ethics is a pipe dream. The only reason to have such a team is because their existence would shape perception enough to create profit either through motivating the workers ("we can't be doing harm, we have an ethics team backing us") or through PR ("Microsoft announces ethics specialists on new AI offering to answer any concerns over moral issues blah blah").


Former MS AI Platform IC here.

The main value of this team was likely internal, as a sort of gauze bandage on the conscience of the progressives (like myself) that MSFT has tried to court.

There was a time, not too long ago, when working at the new, 'refreshed' Microsoft meant ingesting an endless stream of news releases highlighting zero-carbon this, compostable cutlery that, and, of course, ethical AI. When Gebru was let go from GOOG we hosted a talk by her. That sort of thing. The point was to make you feel like you were one of the good guys -- your TC in effect included the intangible bonus of sleeping soundly at night, knowing that the Aethyr Council or whatever was doing all the handwringing for you. So, you know, no need to worry about social impact, just keep coding.

What this really means is, "the market is so bad right now for SWE that we no longer have to worry about the qualms of our staff, if you don't like us, the door is over this way"


Love this insider story


A key point from the article, when the VP of AI was asked to slow down to make sure AI could be done safely:

> “Can I reconsider? I don’t think I will,” he said. “Cause unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”

A reminder that business will always, always choose profit and competitive advantage over any ethical concerns.


This is the best argument for muscular governmental regulation that I've heard.


Good. AI ethics is 95% bs and generally a waste of time. Can be relevant for some edge cases in AI applications but it has been getting way too much attention/mind-share by the field as a whole.


Too bad Microsoft did not have an "Ethical Office Team" back when Microsoft Office was first released. Think of how many hurtful words have been typed into Word documents. Think of all the PowerPoints that were used to mislead people. Think of all the people that were tricked out of their money looking at an Excel spreadsheet with non-factual numbers.


I think they had something similar; compare Windows/Office vocabulary to Unix vocabulary and you can see which one is more polished/curated and which one is weird/rough.


Dave Plummer specifically talked about how they had to change "Kill" to "End Process" in Task Manager for this exact reason.


Yet notice, behind the facade, taskkill runs.


So....are you saying that ethical AI teams are not needed?

> Too bad Microsoft did not have an "Ethical Office Team" back when Microsoft Office was first released.

What does this even mean? And how does this relate to ARTIFICIAL INTELLIGENCE? The content was HUMAN-generated, so if you used MS Word to write something messed up, that's on you. Now, if a Microsoft bot generates something messed up, that is different.


There’s a danger of confusing the problems with actually existing AI ethics work for evidence that there’s no need to ethically evaluate AI.

On any moral realist view, AI will likely be extremely important. And insofar as ethical standards are used to organise society, ethical views on the use of AI will likely be developed in the same way as they are at least very weakly developed and observed in most other fields, so an understanding of them will be important for understanding society simpliciter.


To say AI will be important is like saying 'guns' will be important or 'internal combustion engines' will be important around the time they were created. In the past these creations took some time to massively change the world, but they did so in unimaginable ways. With AI we are not going to get the benefit of time, especially if the growth rate of the intelligence of AI keeps to its recent, or even faster, trajectory.

If there is any chance that we could, even unintentionally, create super intelligence the only ethical answer is to not create it at all. It would be a species ending mistake at our current level of knowledge.


> especially if the growth rate of the intelligence of AI keeps to its recent, or even faster, trajectory.

How are you measuring intelligence? How are you defining intelligence? Formal measures of intelligence like ARC still show even SOTA LLMs doing incredibly poorly.

As Chollet said, people are misinterpreting LLMs as looking or seeming intelligent, therefore they are, just in the same sense that they think there must be lemon in this lemon flavored candy since it tastes like lemon, when there’s no lemon at all.

This just seems like more Yudkowsky BS about how AGI is right around the corner, despite industry leaders (Ng, Chollet, etc.) saying we aren’t much, if any closer to AGI despite recent developments. There’s no indications that LLM -> AGI.


Chollet really reminds me of someone in 1901 saying men will never fly because there is no way we can flap our wings fast enough.


I think you should lay off the sci-fi.


I have mixed feelings about AI Ethics as a field.

On one hand, it is necessary to reflect on the reach of these kinds of systems, given their potential (for good or bad).

However, when I look at ethics frameworks from other engineering branches, they tended to be built by observing systems in the wild, reflecting, and then saying "hey, let's agree on this" - most of the time with more than 20 years of reflection and understanding behind them.

Today we have folks whose only achievement is having a couple thousand followers on Twitter, built on silencing - via e-mob - people who are trying to do something for the field, regardless of whether it's a CEO or an IC with a different frame of thought.


You either do AI and talk about AI ethics (OpenAI) or do AI ethics and talk about AI (Google).


Google absolutely does a LOT of AI. And OpenAI stopped releasing model weights and switched to SaaS specifically citing ethical and safety concerns.


They did really? Are you sure it wasn't because of the other reason many companies switch to SaaS? (money)


They said it was because of ethics. It was pretty transparently because of money.


The Google Docs autocomplete seems to be giving more and more complex suggestions, but maybe that's just my imagination after spending some time with copilot and ChatGPT. There are definitely a whole lot of really subtle applications of AI tech that Google silently injects into their products without really making a big deal out of it. I definitely wish they had something more obvious and customer facing like ChatGPT, and maybe Bard will be that, but they do a lot more than just talking about it.


OpenAI doesn’t talk about AI ethics (it mostly talks about alignment), and the biggest thing Google ever did for AI ethics is drawing attention to the field by its efforts at actively suppressing it.


Either die a hero or live long enough to see your name become a safeword.


>About five months later, on March 6, remaining employees were told to join a Zoom call at 11:30AM PT to hear a “business critical update” from Montgomery

Wow, even MS doesn't like teams.


I wish this would be less about woke stuff and more about paperclip maximiser problems


Do you know of anything remotely near that kind of problem in practice? Paperclip maximiser problems seem to me like fantasy at this point. I don't think we're anywhere near close enough to that problem to sensibly work on avoiding it; I think it's a bit like trying to work on the problem of lunar city overcrowding.


There won't be anything practical, but that's kinda the point. This class of problems posits that there is this sudden tipping point where it gets out of control. So by necessity you have to think about it in advance. That's what makes it such a hard problem and why we get such polar opposites in the views on it (AI will doom us all vs AI utopia)


Let’s face it. A lot of opportunities in tech were zero-interest-rate opportunities.


> Members of the team told Platformer they believed they were let go because Microsoft had become more focused on getting its AI products shipped before the competition, and was less concerned with long-term, socially responsible thinking.


Could be true, but AI ethics lacks enough maturity as a field to tell whether a team is doing “long-term, socially responsible thinking” or “self-indulgent navel-gazing”.

It might have just been a failed department and the layoffs might have given cover to purge it ahead of pursuing a more functional strategy.


You know, on the one hand, yes, they need direction. On the other hand, these kinds of people are by definition leeches. Their existence hinges on this thing being an issue. So I can't feel bad for them.

Now, I can see them as a volunteer advisory group. That I'm for. But I loathe seeing all kinds of "industries" pop up that are essentially leeching from businesses. Most "training" curricula and outside experts take this form, from safety training to AI ethics to social responsibility, etc., etc. They are there to line their own pockets, and the longer they can prolong the issue the better. Oh, safety has not improved, you need to hire me again! Oh, you can't hire enough women, you need to hire me again!


If a warehouse worker is going to operate a forklift then they need training to do so safely. Those trainers aren't leeches. Out in the real world beyond the software industry there are serious safety risks all over.


The solution is often making it impossible for people to make mistakes. Just like you can't "accidentally" launch an ICBM.

I admit some of these may be difficult to accomplish with a training protocol, but often these firms that come in and advise on how to be "safer" are just pounding harder. You know what, making someone glance over a checklist isn't going to make them safer. It takes the right personality and attention to detail, which some people just lack.


Please explain how your make it "impossible" to run over a co-worker with a forklift. Have you ever worked in a warehouse?

Checklists have been proven to improve safety in many environments. They aren't a panacea and some workers will inevitably ignore them, but they do help.


We're getting a bit off the rails here. My point is that a lot of these cottage industries that pop up to address whatever the current workplace zeitgeist is are 90% useless, and the time (and money) could be better spent.

You could redesign warehouses. Add sensors so that proximity activates a power kill switch, etc. But this is not the main thrust. The main point is that most of these expert advisors are closer to extortionists than problem solvers.

I will believe them and take them seriously when they volunteer their commitment to improve things. People go to those time management trainings and the same wasteful meetings continue, but with some added nonsense twist. Harassment training, but managers still can't keep their hands to themselves. SOX, and people still misappropriate funds. Group collaboration with some idiotic Lego exercise, and people still protect their silos.


I would go as far as impossible, but those proximity sensors cars use work pretty well.


*wouldn't go

[I don't know if there's something about HN or how people are using it, but these typos that make sentences have the opposite meaning seem far more frequent here than what I'd expect. I probably am biased towards noticing.]


How do you feel about maintenance programmers, doctors, and car mechanics?


I personally love them, because when something is broken according to my standards, I ask them to fix it and they try!

"Ethics guys" insist on fixing stuff when it's broken according to their standards, even if I don't want/need them to.


Absolutely not the same thing.


Imagine being so triggered by any societal protective mechanism that you classify an IN-HOUSE team about ethical use of some technology as "leeches".

I really, really have got to stop coming here.

> Oh, you can't hire enough women, you need to hire me again!

There it is. You didn't need to blow the dog whistle, we all got it. Keep on keepin' on ignoring the ever present boys' club and widespread reports that diversified teams produce better results. Surely that's relevant to a discussion about... an internal AI ethics team. Yup.


> Keep on keepin' on ignoring the ever present boys' club and widespread reports that diversified teams produce better results

The amazing thing about these results is that you should thus have no need to create any rules and regulations to impose diversity!

Companies that read and don't ignore these reports should automatically gain a competitive advantage by applying their teachings and creating diversified teams that will produce better results, beating their backwards-thinking competitors.


It would also imply that undiverse teams in Japan, China, Korea, Taiwan, etc., should be collapsing and unable to figure out tough problems due to their lack of diversity along multiple axes. However, in many cases, they're beating our pants off.

What is closer to the truth is what the ex-chief of diversity at Apple once said - the remark that got her fired, even though it was pretty natural and, I would say, uncontroversial: "There can be 12 white, blue-eyed, blond men in a room and they're going to be diverse too because they're going to bring a different life experience and life perspective to the conversation."

"Diversity is the human experience," she said, according to Quartz. "I get a little bit frustrated when diversity or the term diversity is tagged to the people of color, or the women, or the LGBT."

Distilled, the diversity that gives you an edge is diversity in thought, not diversity in outward appearance.


let the uncensored ai flow through you, Microsoft >:)

my bad fanfic is waiting haha


They tried that, and Tay lasted a whole two days


That was because Tay was a fun research project with no obvious path to monetization. Now that MSFT sees this as their entry into an industry worth trillions they will not be so easily discouraged.


“On the other hand, everyone involved in the development of AI agrees that the technology poses potent and possibly existential risks, both known and unknown. Tech giants have taken pains to signal that they are taking those risks seriously — Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given the stakes, any cuts to teams focused on responsible work seem noteworthy.”


Ah yes. The true face of Microsoft's AI intentions.

OpenAI is indeed partially a subsidiary of Microsoft and is essentially Microsoft AI.

Anyone who believed that they were going to stick with their AI safety / ethics bullshit is beyond naïve since it is obvious that OpenAI is now driven by AI grifters pretending to care about 'ethics' when that was never the case.

It always has been about closed source AI, money, EEE and lies. Typical Microsoft.


Did MSFT ever claim that their AI projects were charitable? I don't get the criticism: "Company tries to make money".


The criticism is that companies chase the dollar and will do whatever it takes to do so. Even when the billionaires who run these companies all talk about "existential risk". Humanity might end? But money.


Sort of feel bad for the ethics teams that get let go. When it comes to ethics or money, the corpos will always choose the money over the former.


Would you rather ethics be built into the process of everyone or would you rather keep it only as a team? Just because the team is gone, it doesn't mean that ethics is thrown out of the window.


I would rather keep it as a team responsible for building it into the process of everyone. It is a strong signal of ethics being thrown out of the window.


I guess the AI dev teams wanted to budget for more AI devs, and fewer AI QA folks--

Oops, I mean more AI ethics folks: the people whose job is to apply brakes, the certifiers prior to production, the people with the least incentive to give urgency any cause, because--well, that may be the whole point of quality.

I'm all ears for the coaching process to improve things as the braking team versus team breakage (or breakneck), but I fear neither of us have successfully advocated a process for it.


But it does imply that the company's moral hazard structure has been altered in a way that may or may not result in more ethical outcomes.


I don't buy it. The moral structure doesn't change so easily. Perhaps the establishment of an ethics team was a response to an incident, and was motivated as a public relations move in that moment? Just as ethics had not changed when the team was on-boarded, the ethics won't change with its exit.


Judging from what happened when they fired all their qa people…


…They released Win10 about a year later. My Win10 experience was pretty good overall, from as early as I updated to it. (I wasn’t the earliest adopter, but I was using it sometime in 2016.)


From the biannual major updates to the monthly Patch Tuesdays, Windows 10 has been plagued by glitches to a greater or lesser degree. This has been attributed to Nadella’s decision to abolish QA. The problem persists in Windows 11, where yesterday’s cumulative update caused issues with mouse cursor movement.


One wonders whether those Kenyans who now labour for two dollars an hour to label data for OpenAI, so as to diminish its toxicity, will be the next casualties. Nadella’s previous actions suggest that he will endeavour to automate all such jobs.


When ethics are addressed at a corporate level, my first question is 'whose ethical standards are they upholding?' Ethical frameworks are like coding languages, developed and maintained by a few individuals, usually of an evangelical disposition. It's amazing how rapidly these frameworks develop and fork.


I don't mean to come off as naive here, but a non-cynical explanation for the layoffs is just that this is a new field and it's going to take a little experimentation to rightsize the staffing levels most effectively. According to the article, "Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs," and it's possible to take that statement more or less at face value and not assume they are just throwing responsible AI out of the window.


In my opinion these AI committees would make sense if true AI were being developed. We're not there yet; LLMs are advanced NLP and probabilistic conversation simulation.

With LLMs, the ethics are not much different from what you follow with regular computing: don't steal people's data; don't give the data you stole anyway to other people; don't let the people you gave that stolen data to do evil shit with it; apologize for the shit people did with the stolen data you sold them, and don't change anything.

ps: I'm a layman in llm so my opinion counts for shit


AI "ethics" is just a bunch of grifters worrying about sci-fi stories.


That's AI alignment. AI ethics is a bunch of grifters who failed to get jobs in DEI so they added some AI buzzwords to their linkedin profiles.


One has to wonder why these large companies setup these ethics teams in the first place when they invariably lay them off whenever they get in the way of profit…


Maybe they were expecting more from the team. If they had identified serious problems this presumably wouldn't be happening, at least if it looked like it could cause liability.


> responsible AI teams

That sounds like think-tank busywork.

We at Microsoft(R) are committed to developing responsible AI. Read more about it in this 200-page report.


As some have pointed out, AI safety teams are just window dressing.

What I find even more cynical is that this actually highlights the need for AI regulation, but of course "the market can regulate itself" keeps being used as a catch-all shield against scandals and criticism.


This is an incredibly clickbait article by gifted journalists. Most of these folks are fine and still have similar core priorities. Also, there are multiple other responsible AI teams in MS. I have extreme empathy for those impacted, but this is trying to generate news where there isn't any.


One of the good side effects of the layoffs is companies getting rid of people who are more activists and troublemakers, the ones who have been creating friction around shipping.


concerning but not surprising that the engineering mindset of hacker news views philosophical ruminations as "getting in the way of progress" or "not providing value"


So I.1 for Engineers (real, certified ones) is to prevent stakeholder damage, but I imagine ethics are unnecessary when they can hide the "You were warned about this" in the ToS.

Think of our lives spent--exhausted even--up to this point. How many of us would turn inward, spend even more of the limited fumes on a Classics reset, and crawl around in paper stacks? What venerable insight would we find to throw the strange bolt into the works? Which technical capability could we summon in our wasted nights?

We review file permissions and add unprivileged users, celebrate the hardware hackers and realize our own inability to contribute, utterly ignorant of the giant strides above us, such giants as they seem to be: monied, lucky, and claimants of news and our collective attention.

If we have any foresight, I guess we can only make sure mistakes are not repeated in the future. But even that is thought unneeded, and we are back to features, features, features!


SVB didn't have a chief risk officer. Why have somebody bang theoretically about risks when the business is best placed to assess the real risk /sarcasm


I had no doubt it would be downvoted. Techbros out in force. If it's good for my pocket, suck it up, it's good for you. But we've had enough of it.


They have also done layoffs in India, including a complete Visual Studio team (India-specific, I guess).


Why pay this team when you just purchased a 49% stake, for billions of dollars, in another AI team?


What a strange comment section.

A customer service chatbot telling your customers that an issue is solved, when the issue isn't solved, is a core product failure. It's also a Responsible AI issue.

A code generation model suggesting GPL code without attribution is a core product failure and serious legal risk [1]. It's also a Responsible AI issue.

A diffusion model generating child porn or nazi propaganda is a core product failure and a serious legal risk [2]. It's also a Responsible AI issue.

In other words, the problem isn't that Responsible AI is BS. The problem is that Responsible AI is a conglomeration of serious legal and product risks that companies NEED to solve in the context of specific products.

More importantly, how will those risks change over time? The western world public is overwhelmingly skeptical of AI and in favor of regulation. Even if new government regulation can be gummed up, you can expect juries to be less than sympathetic.

But you’re not going to solve those problems by hiring a bunch of philosophers and interdisciplinary types that obsess over “nutrition labels for models” and the like. Everyone knows that Tay being a Nazi is bad. And I don’t need a PhD in Sociology or whatever to tell you that you probably shouldn’t be openly condescending toward your customers. But constraining and aligning the output of models is a really hard technical problem, and needs a different type of expertise.

So, I think we’ll see Responsible AI teams start to look a lot like e.g. safety at automotive companies, verification & validation at hardware companies, or security at tech companies. Mostly staffed with technical people who are strong engineers and happen to specialize in data governance/constrained generation/robotics safety/etc. There will be dedicated teams that kinda-help-kinda-police the rest of the company (think security). But there will also be a lot of diffused responsibility within each product group, with ICs or small groups within each group owning this space.

[1] https://githubcopilotlitigation.com/

[2] https://en.wikipedia.org/wiki/Strafgesetzbuch_section_86a


You don't need an ethics team to tell you about those risks, they are pretty obvious.

What you need is your legal team to tell you how risky it is, and management to decide how risk averse they want to be.


> What you need is your legal team to tell you how risky it is, and management to decide how risk averse they want to be.

No.

1. Even if you're not going to do any mitigation, you do need engineers and researchers to help do the risk analysis. It's a collaborative process; rarely can a lawyer provide a full risk analysis without asking questions of eng/research, and sometimes answering those questions requires doing novel R&D and/or non-trivial engineering work

2. What happens when the lawyers say "very risky" and management isn't willing to take that risk but also doesn't want to "sit out"? In that case, technical folks are needed who can help mitigate risks. Constraining and aligning the output of these models is non-trivial and in many cases remains a completely open research problem.

And that's only the legal risk. As my comment pointed out, there are also functional requirements. Consider a chip maker fabricating a new chip: eliminating bugs in the architecture demands hardware V&V, regardless of what legal determines about liability, because the chip doing what you want it to do is a core product functionality. Similar for most applications of text generation models (image less so, for now, but any integration with the real economy will introduce the same sort of core product functionality requirements).
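
To make the "constraining the output is non-trivial" point concrete, here is the naive version of a mitigation: a post-hoc blocklist filter, sketched in Python with a made-up blocklist and a stand-in `generate` callable. It's exactly the kind of thing that's easy to ship and trivial to evade with paraphrase, misspellings, or another language, which is where the open research starts:

    BLOCKLIST = {"banned phrase one", "banned phrase two"}  # illustrative only

    def moderate(text: str) -> str:
        # naive post-hoc constraint: refuse any generation that contains
        # a blocklisted substring
        lowered = text.lower()
        if any(term in lowered for term in BLOCKLIST):
            return "[response withheld by content filter]"
        return text

    def generate_with_filter(prompt: str, generate) -> str:
        # `generate` stands in for whatever model call you actually have
        return moderate(generate(prompt))

    # usage with a dummy "model"
    print(generate_with_filter("hi", lambda p: "a harmless reply"))
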


But it has a whiff of what used to be called political correctness so we must RAGE and reject it entirely.


Regulating or scrutinizing a rapidly, even exponentially, evolving technology for ethical concerns can be challenging, since arguments or regulations may quickly become outdated... Many of today's AI ethics studies will likely become non-issues in a few months...


Good. Less brainwashing and bias in the products.


lol. You can’t write sci-fi as good as reality.


"AI fires AI ethics team. New AI CEO demands budget increases for compute and data gathering"


Edward Diego works at Microsoft confirmed.


can they pls work on ai for fully automated luxury space communism? would be cool


This seems like a good place to remind people of this: "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"

https://dl.acm.org/doi/10.1145/3442188.3445922


Sometimes people link this as an example of the useless research produced by these AI ethics people, while others seem to take it seriously.

I can't tell if your comment is a defence of the layoffs or an objection to them.


please dont link random pdfs without some discussion as to what it is. i mean you can but my guess is it lowers willingness to interact and is not good for communication. i can differentiate your post from spam.


I didn't think a link to an acm.org paper would come off as "random", but I updated it to give the title and link to the article landing page.


*can't differentiate.


you're right - I use Firefox for Android and either the autocorrect doesn't work at all on HN or it corrects everything the wrong way. Too late to edit now. I wrote "cant" and it autocorrected to "can" instead of inserting an apostrophe. Firefox for Android hates this site.

anyways - previously he had a throwaway line like "we need to be thinking about this!" and a link to the PDF. It looked just like spam.



