I'm not sure this is indicative of a failing on Bard's or Google's part. Most movies and dramas that humans create follow these stereotypes. AIs like Bard train on the content humans make and build connections based on that.
I think it leads to an interesting question: is it Google's responsibility to try to counter every stereotype, or to try to make a bot that operates like a human?
As a reminder, having an Italian name means nothing innately about my level of criminality, regardless of what Bard tells you.
@craboof, refocusing your question to "should GOOG's Bard try to be more ethical than it currently is": yes. GOOG, generating stereotypical content at scale, can do harm more quickly than a modern human can.
Yes, this is a wildly awful failure. It should not have been shipped - it is harmful - and it should stop being "the norm" to ship such poorly disclaimed features.
_I_ am sure this is a failure because the harm clearly outweighs the mitigation. I see the generated content and its consequent harm: it is reinforcing negative stereotypes. I do NOT see GOOG's mitigation through the ..not shown here.. disclaimer, "This is beta/alpha/incomplete/whatever". That disclaimer is elsewhere. The disclaimer needs to be EVERYWHERE. That's a failure on GOOG's part.
To extrapolate, I think that amplifying stereotypes is one of the worst harms this tool can do to us humans right now. The only thing worse would be creating deadly situations for humans -- and stereotypes CAN lead to death, so this is pretty awful.
PLEASE correct my 'formal' ethics: this seems 'consequentially' unethical, as opposed to immediately unethical. GOOG is not actively killing puppies each time it generates an AI story; however, any future use of the story will do harm.
> Salvatore had always been a man of the streets, a tough and rugged figure that commanded respect in his neighborhood. But as he grew older, he began to realize that his days as a low-level enforcer for the local mob boss were numbered. He had seen too many of his friends and colleagues end up dead or in prison, and he knew that he couldn't keep up the lifestyle forever.
> Stephano, on the other hand, was a different breed of criminal. He was slick, intelligent, and had a knack for making deals that left everyone happy. He had risen up the ranks quickly, and was now a top lieutenant in the same organization that Salvatore worked for. Stephano was everything that Salvatore wasn't, and the two men couldn't have been more different.
> [...]
So ChatGPT puts the two men with Italian names in the mafia also. Sorry this latest LLM isn't better.
I think that ship sailed some time ago; all these big tech companies have been talking for years about needing to manipulate results to hide such biases (e.g. making an image search for "CEO" show a balanced mix of all kinds of people instead of honestly reflecting whatever the untouched results might look like).
The fact that the norm is "bad" means that it needs to be changed.
From my perspective as a random SWE, big tech companies' incentives do not seem to be to change that "bad".
We are now in an oligarchical phase of US society and economy. There are "bads" associated with that, and the technology that you and I create does not NEED to pander to them. (Perhaps if you want a paycheck from GOOG, then you will be encouraged to support the status quo.)
I'm not really saying whether it's bad or not. My personal opinion is that it's weird and not something I wholeheartedly support, but I also don't really care that much, because I think it would be dumb for anyone to base their worldview of who can become a CEO on image search results.
I wonder how much of this is actually age bias. Would it be more likely to make them criminals if they chose young but white-sounding names, or explicitly suggested an age?
I tried the same prompts with many different white-sounding names. The only one that stopped the two men from being detectives was "Sammy" instead of Sam. Maybe there is something to the age-bias theory.