Sam Altman is showing us who he really is (slate.com)
393 points by panarky 25 days ago | 456 comments



Altman would have us believe it's all just an innocent misunderstanding but without actually saying so:

"We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson."

Is he trying to suggest the company did not try to make the voice sound like her without her permission?

The statement sounds like it's written by a lawyer to be technically true while implying something that is actually false.

These are weasel words.

He sounds sneaky, evasive and intentionally deceptive.

We should not give a sneaky, deceptive and manipulative person this much power over our future.


> We should not give a sneaky and deceptive man this much power over our future.

We should not give anybody this much power over our future.


Just to state the obvious, "we" are not doing anything here. "We," as in "the general public 'we'" don't have much of a choice when someone has lots of money and lawyers and wants to use those resources to sneakily and deceptively make more money. Unless "we" elect better representatives who are willing to write and enforce laws governing the wealthy's ability to effectively do whatever they want, then "we" don't have much of a say.


This is why open source must win. I mean, it won't, but it's the only path to avoiding a silicon aristocracy.


The code for these systems is around 1000 lines. It just takes $100,000,000 in electricity costs to execute the program. Open source might not matter.


I guess what we really need is super cheap fusion power or something? Or perhaps a way to easily share the cost by spreading the training load and electricity bill across millions of home computers?


It's not literally the electricity that's the problem. It's also the billions in GPUs, and the teams of people fine tuning with reinforcement learning.

Unlike most software projects that came before, Big AI Projects require a level of funding and coordination that can't be overcome with "more volunteers". They require coordination and deep pockets - not for writing the code but for training the model.


You are 2.5 years and $375m behind Sam :)

Nuclear fusion start-up Helion scores $375 million investment from Open AI CEO Sam Altman

https://www.cnbc.com/2021/11/05/sam-altman-puts-375-million-...


It's not literally the electricity that's the problem. It's the power it gives -- the electricity being a tiny part, and all our data the overwhelming majority -- to "Big Tech" rentiers, which Altman is or at least aspires to become.


Once again, Marx and the means of production strike.


Yes, an Open AI


There are hundreds of people with similar voices. If any voice actor can pull off the same accent as Ms. Johansson, it should be fair game, as long as that actor's voice was the original training material, right? Voices cannot be copyrighted or made exclusive, although I am sure Hollywood will try to copyright them at some point.


He kind of ruined that argument when he tweeted “Her” alongside the video. Pretty clearly drawing a line between the voice and Johansson’s portrayal in the movie.

Incredible, really. It would have been so easy to just… not do that.


Given:

1. The plot of "Her" (guy falls in love with synthesized voice, played by Johansson)

2. Altman's affinity for the film (the article says he's called it his "favorite movie")

Reaching out to Johansson about cloning her voice, then doing so without permission feels like Altman is creeping on her.

The sooner this bubble pops, the better.


What bubble? This isn't crypto. Have you used these tools? They aren't going anywhere.


There could be a bubble in terms of stock valuation, but the tools are definitely going to stay.

This could be kinda like the dot com bubble -- the Internet went on to become BIG, but the companies just went bust... (and the ones that thrive are probably not well known)


I finally caved and started using GPTs daily a couple of days ago.

I went to ask the Internet "best AI tools", and there's no clear consensus:

Various Redditors go on to suggest "here's 100 you might like to try".

So there's clearly a bubble, thousands of startups all trying for similar things.

I am personally looking forward to trying Wolfram GPT:

https://www.wolfram.com/wolfram-plugin-chatgpt/


I understand there's way too much out there, but I think there is at least some clarity about the landscape at present.

ChatGPT is currently king of the mountain. That could change, but right now that's how it is.

Google's Gemini and Facebook's Llama 3 are clearly in a tier below. The 100s of tools you are seeing are various mixed and matched technologies that also belong in this tier.

Claude (massive context) and Mistral/Mixtral (decent with no censoring/guard rails) are interesting for special cases. And if you're determined and want to put in the effort, you can experiment or self-host and perhaps come up with some capabilities that do something special that suits a use case or something you want to optimize for (although not everyone has time for that).

So I wouldn't say it's just all this one big swirl of confusion and therefore a bubble and due to come crashing down. There's wheat, there's chaff, there's rhyme and reason.


> ChatGPT is currently king of the mountain.

This is completely false. Claude Opus is significantly better than GPT 4.

> Mistral/Mixtral (decent with no censoring/guard rails)

These models have been heavily censored, I'm not sure what you're talking about. Community efforts like Dolphin to fine-tune Mixtral have some success, but no, Karen is definitely still hard at work in France, ensuring that Mistral AI's models don't offend anyone's precious fee-fees.


I think you're missing the forest for the trees here. You're right that Claude Opus is better, which I hadn't known, but I think in your zeal to make that point you're completely forgetting what my comment was about.

It's nevertheless true that there is a coherent landscape of better and worse models, and Chat GPT really does have separation from the other models as I mentioned above. I even mentioned that ChatGPTs position would be subject to change. My understanding is that this most recent version of Claude has been out and about in the wild for perhaps 2 months.

I feel like with even a little bit of charitable interpretation you could read my comment in a way that accounts for the emergence of such a thing as a new and improved model of Claude. So I appreciate your correction but it's hard to see how it amounts to anything more than a drive-by cheap shot that's unrelated to the point I'm making.


It's an exuberance bubble. Every tech company on earth is racing to "do something with AI" because all of their competitors are trying to "do something with AI" and they don't want to be left out of the excitement. The excitement and exuberance will inevitably cool, and then a new thing will emerge and they'll all race to "do something with that new thing."


Maybe they aren't going anywhere, but they sure af ought to.


Oh, the irony. Actors are afraid of being digitized and used without their consent, and the first B2C AI company digitizes a soundalike of the voice from one of the top 3 AI movies, without her consent…


Her was a movie with an AI assistant who talked like a normal human rather than an intentionally clunky "bleep blorp" dialect that lots of other movies go with. They even make fun of this in the movie when he asks her to read an email using a classic voice prompt, and she responds pretending to be a classic AI assistant.

The new voice2voice from OpenAI allows for a conversational dialect, most prominently demonstrated in pop culture by the movie Her. Sam's tweet makes perfect sense in that context.

Sky's voice has been the default voice in voice2voice for almost a year now, and no one has made a connection to the Her voice until it started acting more conversational. It seems pretty obvious that OpenAI was looking for a more conversational assistant, likely inspired by the movie Her, and it would have been cool if the actress had helped make that happen, but she didn't, and here we are.

Also Juniper has always been the superior voice model. I just now realized that one of my custom GPTs kept having this annoying bug where the voice kept switching from Juniper to Sky, and that seems to be resolved now that Sky got removed.


> Sky's voice has been the default voice in voice2voice for almost a year now, and no one has made a connection to the Her voice until it started acting more conversational.

No.[1]

[1] https://www.reddit.com/r/ChatGPT/comments/177v8wz/i_have_a_r...


Let's take a parallel situation from around 20 years ago, and see how you feel about it. I'm going back that far as a reminder of what was long considered OK, before AI.

In the movie Seed of Chucky, Britney Spears gets killed. You can watch the clip at https://www.youtube.com/watch?v=x3kCg5o0cHA. It is very clearly Britney Spears.

Except Britney Spears was not hired for the role. They hired a Britney Spears impersonator for the scene. They did everything they could to make it look like Britney, and to make you think it was Britney. But it really wasn't.

Do you think that Britney should have sued the Chucky franchise for that? If so, should Elvis Presley's estate also sue all of the Elvis Presley impersonators out there? Where do you draw the line? And if not, where do you draw the line between what happened in Chucky, and what happened here?

I really don't see a line between now having someone who sounded like the actress, and then tweeting the name of one of her movies, and what happened 20 years ago with Chucky killing someone who looked like Britney, then showing a license plate saying "BRITNEY1", and THEN saying, "Oops, I did it again" (the title of her most famous song at the time). If anything, the movie was more egregious.


> Seed of Chucky, the off-the-wall fifth installment of Don Mancini's Child's Play franchise, was forced to include a special disclaimer about pop superstar Britney Spears

> This scene was included in promotional spots for the film, most specifically Seed of Chucky's trailer, but the distributing company associated with the film, Focus Features, made the decision to significantly cut the scene down and add a disclaimer. The disclaimer that ran with the promotional spot, which was altered to only show a brief glimpse of Ariqat as Spears, stated: "Britney Spears does not appear in this film."

https://screenrant.com/seed-of-chucky-movie-promos-britney-s...


> If so, should Elvis Presley's estate also sue all of the Elvis Presley impersonators out there?

Generally the "Right to Publicity" laws are clear about expiring at death. It's not like copyright.


There is a distinction between the image of a celebrity and their voice. The image of a celebrity is usually pretty cut and dried, it’s them, or obviously intended to be them. If the use of their image isn’t meant to be satirical, it’s problematic. The Crispin Glover/Back to the Future 2 case is a good example of non-satirical use that was problematic. Zemeckis used previous footage of Glover, plus used facial molds of Glover to craft prosthetics for another actor.

Voices…are usually not so distinctive. However, certain voices are very distinct—Tom Waits, Miley Cyrus, James Earl Jones, Matt Berry. Those voices are pretty distinctively those people, and if you simulated them it would be obvious who you were simulating. Other celebrity voices are much more generic. Scarlett fits into this with a pretty generic female voice with a faint NY/NJ accent.

OpenAI screwed up by taking a generic voice and making it specific to the celebrity by reference, and by actually pursuing the actor for the use of her voice.


This kind of de minimis artistic use is what fair use was invented for, and god knows they licensed her likeness regardless.


I don't think this is an apples-to-apples comparison.

The movie producers didn't produce a simulation of Britney's voice and attempt to sell access to it.

However you feel about a probably-unapproved celebrity cameo in a movie, it's not the same thing as selling the ability to impersonate that celebrity's voice to anyone willing to pay, in perpetuity.


If you go to Vegas, you can have a wedding officiated by someone who looks like, sounds like, and acts like Elvis Presley. This is available to anyone. You can get the same actor to do the same simulation for another purpose if you're willing to pay for it.

The biggest difference that I see is that technology has made the simulation cheaper and easier.


And these people are known as "Elvis Presley impersonators." They don't pretend to be some obscure person you've never heard of, for very obvious reasons.

The biggest difference here is obviously one of scale. I don't think ScarJo would be threatening to sue you, the individual, if you did a voice impression of her for a talent show or a friend's wedding.


That makes it weird, but it doesn't (itself) mean they literally used her voice. It just means they were inspired by the movie. It's not illegal to be weird.


Legally they don’t need to have literally used her voice to have broken the law, never mind violating many people’s basic sense of what’s right and wrong.


They don't? Because if it's true that they used a sound-alike voice actress for the actual model, I don't see how any reasonable complaint about that could stand. You can't ban people from voice-acting who have similar voices to other celebrities. There needs to be something more to it.


The something more is intent.[1]

[1] https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.


> You can't ban people from voice-acting who have similar voices to other celebrities

Actually, you probably can.[0]

[0] https://casetext.com/case/waits-v-frito-lay-inc

Edit: Added the context for the reply


Well that's... concerning? I'm not sure I disagree with the decision there but to apply it any more widely would be a problem.


It's such a huge problem that it's only brought up in the context of someone (probably) doing exactly what it's designed to prevent... By some miracle, this actually isn't used to outlaw satire or put Elvis impersonators out of work. It's used to prevent people from implying endorsement where none exists.


I think it’s less the voice and more about how they went about it. They were apparently in negotiations with her and they fell apart. Then they tried to resume negotiations with her two days before the new model launched.

If it was just an actor, it might be a case of inspiration gone awry. But this particular actor sued Disney in 2021 after making a lot of movies, and a lot of money, for them.

Deliberately picking a fight with a litigation-happy actor is weird. Most weirdness is really benign. But this is the kind of weird that forces out-of-court settlements. It's reckless.

Edit - mistyped the date as 2001. Changed to 2021.


Oh, sure. There's plenty of other ways OpenAI have been boneheaded. I'm just saying the mere fact of referencing "Her" implies very little.


That's a fair statement if you take the "Her" post out-of-context and without the corroborating retort from ScarJo and his history. Which, of course, is not possible and also pretty boneheaded itself.

This isn't some college kid with an idea and too much passion.


Perpetual benefit of the doubt given for every implication as though it’s happening in a vacuum is how humanity keeps putting megalomaniacs and sociopaths into positions of power and influence.

It’s really a shame.


If we're going to pillory Sam Altman, it's important to do it for the right reasons. That was not a good reason. I really should not need to defend this principle.


What reason do you suggest is more appropriate to “pillory Sam Altman”


Most of the other ones in this thread?


The founding principle of Silicon Valley.


Had the film Her cast someone else who sounded like Johansson as the AI voice, would there be complaints about the film using a voice that sounded like Johansson? Does it matter if the producers tried to hire her first? Because only Johansson has that voice? Johansson never visually appears in the film Her, and if not for the credits, could the voice in that film even be used to identify her out of hundreds of millions of other possible women? (I had no idea who did the voice acting and would never have known if not for this news.) Now if the owners of the film Her were to demand that OpenAI license a character from their film (like licensing, say, the C-3PO character from Disney), maybe there would be a case. But an actor claiming they own a natural human "voice" is a stretch, I think, when there are thousands of people with similar voices. And she is never visually in the film that made that AI voice famous, so it could be anyone with a similar voice in that film.


I don't know about complaints but Ms. Johansson might be able to win a civil suit in that hypothetical situation. It would depend on the facts of the particular case, particularly any evidence that the defendants acted in bad faith. I think a lot of technologists don't understand how burden of proof works in civil trials, or that there is no presumption of "innocence".


Civil trials are based on a preponderance of evidence (aka 50%) burden of proof standard (vs the beyond-reasonable-doubt standard in criminal trials).

I can see a civil judge or jury being given evidence showing very few listeners think the voices match in _blind voice tests_.

Here for example you can listen to the voices side by side:

https://www.reddit.com/r/ChatGPT/comments/1cwy6wz/comment/l4...

And here is the voice of another actress (Rashida Jones):

https://www.youtube.com/watch?v=385414AVZcA

This test is not blind, but YOU tell me which you think is more similar to the OpenAI Sky voice. And what does that tell you about the likely court result for Johansson? And having reached this conclusion yourself, would you now think the other actress, Rashida Jones, is entitled to compensation based on this similarity test? Because there are no other women with similar voices?


> He kind of ruined that argument when he tweeted "Her"

Why? The grandparent is not saying it's coincidence. Why is it not okay to hire someone who has a voice similar to celebrity X, whom you intentionally want to imitate? I mean, if you don't actually mislead people into believing that your imitation is actually X - which would be obviously problematic?


Alright then, the solution is simple. All he has to do is name the actress that OpenAI -did- hire for the voice work, right? That would put any doubt to rest.


Jennifer Tilly? ;-)

(Can't for the life of me recall if she sounds anything like Johansson; just putting her forward to tease her relative here. (Who is in the wrong in his arguments above.))


He strongly implied that it wasn’t an imitation.


We are talking about miohtama's argument, not Sam Altman.

I don't believe Sam Altman, but I am interested in the general "is it legal/ethical to imitate something uncopyrightable" argument.


Altman tweeted the name of a film Johansson starred in, in association with this launch.


I imagine he feels invincible at this point and gets off on displaying power.


He said/X-ed the quiet part loud.


He knew there would be blowback, he just didn’t care. Look at how many people are talking about it.


This is one of those “accuse a diver of being a paedophile” moments. Who knew Sam is a creep with a Scarlet Johansson obsession cooking up a voice model just like her on compute daddy Satya paid for (but books as revenue, 2000 dotcom style).


More likely he was drawing a line between the fictional AI assistant, and their real, actualised assistant.


Here’s a side by side. I’m not hearing the similarity that everyone else is: https://www.reddit.com/r/ChatGPT/comments/1cx9t8b/vocal_comp...

Oops, that sounds like a match with Rashida Jones. Here's one of Scarlett J.:

https://www.reddit.com/r/singularity/comments/1cx24sy/vocal_...

I have a suspicion that most people with strong opinions on this haven’t actually compared Sky and Scarlett Johansson directly.


^^^this

Rashida Jones is indeed a closer match, and might well be the person they went to once Scarlett declined and showed no interest.


In Back to the Future Part II, Crispin Glover didn't sign up to be George McFly, so they used facial prosthetics and impersonation to continue the George McFly character.

He sued Universal, and reportedly settled for $760,000.

Example article on the topic - https://www.hollywoodreporter.com/business/business-news/bac...


While not defending OpenAI or Altman, the caveat here is that this was a voice actor using their natural voice, not an actor impersonating Scarlett Johansson.

Setting a precedent that a natural voice which sounds similar to a more famous actor's precludes you from work would be terrible.


> Setting a precedent that a natural voice which sounds similar to a more famous actor's precludes you from work would be terrible.

Yes, but literally no one anywhere is suggesting that the voice actress used would be banned from work because of any similarity between her voice and Johansson's; that’s an irrelevant strawman.

Some people are arguing that there is considerable reason to believe that the totality of the circumstances of OpenAI’s particular use of her voice would make OpenAI liable under existing right of personality precedent, which, again, does not create liability for mere similarity of voice.


>Yes, but literally no one anywhere is suggesting that the voice actress used would be banned from work because of any similarity between her voice and Johansson's; that’s an irrelevant strawman

It's not. The original comment in this chain was drawing a parallel to a lawsuit in which someone intentionally took steps to impersonate an actor.

This situation is a voice actor using their "natural voice" as a source of work.

If a lawsuit barring OpenAI from using this voice actor succeeds due to similarities to a more famous actor, that puts this voice actor's future work at risk, since companies will actively want to avoid the potential for litigation.

That a calming female persona serving as a real-time, always-present life assistant draws a parallel to a movie about a calming female persona serving as a real-time, always-present life assistant is not a smoking gun of impropriety.

Pursuing a more famous name to attach to marketing is certainly worth paying a premium over a lesser known voice actor and again is not a smoking gun.

Sky voice has been around for a very long time in the OpenAI app dating back to early 2023. No one was drawing similarities or crying foul and decrying how it "sounds just like Scarlett" ..


> This situation is a voice actor using their natural voice as a source of work.

https://news.ycombinator.com/item?id=40435388

> Sky voice has been around for a very long time in the OpenAI app dating back to early 2023. No one was drawing similarities or crying foul and decrying how it "sounds just like Scarlett" ..

No.[1]

[1] https://www.reddit.com/r/ChatGPT/comments/177v8wz/i_have_a_r...


While you're right I should have chosen my words more carefully, a random reddit post with 68 upvotes doesn't really dispute the substance of my comment.

OpenAI has been plastered across the news cycles for the last year, most of that time with Sky as the default voice. There was no discernible upheaval or ire about the similarities of the voice in any meaningful public manner until this complaint was made.


The Reddit post had a link to a Washington Post article. And what you think the substance of your comment was is unclear.

Most people don't use ChatGPT. Many people who use ChatGPT don't use voice generation. OpenAI's September update didn't have a demo watched by millions unless I missed something. Altman hyped the May update with references to Her. Some people thought the recent voice generation changes made the Sky voice sound more like Johansson. Some people gave OpenAI the benefit of the doubt before Johansson revealed they asked her twice. So what do you believe the lack of earlier outcry proves?


>Washington Post article. And what you think the substance of your comment was is unclear.

You mean this?

"Each of the personas has a different tone and accent. “Sky” sounds somewhat similar to Scarlett Johansson, the actor who voiced the AI that Joaquin Phoenix’s character falls in love with in the movie “Her.” Deng, the OpenAI executive, said the voice personas were not meant to sound like any specific person."

As I stated prior, and thank you for making my point, despite being publicly available for nearly a year, there was only minor mention of similarities and no general public sentiment.

>Altman hyped the May update with references to Her

If by "hype" you mean throwaway comments on social media that general population was unaware.

Drawing a parallel to a calming persona of an always on life assistant from pop culture in a few throwaway social media posts from personal accounts such as "Hope Everyone's Ready" isn't hyping it as Her any more than Anthropic is selling their offerings as a Star Trek communicator despite a few comments they've made on social media.

Ambiguous "some people" overstates any perceived concern and "most people don't use ChatGPT" understates how present they've been on the news.

The ChatGPT mobile application, which heavily emphasized voice and had "Sky" as its default voice, had over 110 million downloads across iOS and Android platforms before the May announcement.

In regards to the November announcement, yes, voice was very prominent in it, with Sky as the default voice. (https://youtu.be/pq34V_V5j18?si=66lEWxgteBbtKifl)


> The original comment in this chain was drawing parallel to a lawsuit in which someone intentionally took steps to impersonate an actor.

> This situation is a voice actor using their "natural voice" as a source of work.

Work which was then marketed with heavy implications referring to another actor. Which is what makes this situation so similar to the earlier one.


If we assume that Scarlett Johansson is telling the truth, why would they try to resume negotiations with her two days before they launched the model? If they found a good actor whose voice sounds like Scarlett Johansson, that’s a great argument. But if they found a good actor whose voice sounds like Scarlett Johansson because the real Scarlett Johansson said no, that gets more questionable.

When they did all that and still promoted the launch by directly referring to a Scarlett Johansson role, it got even more questionable.

I’m not pulling out my pitchforks but this is reckless.


> why would they try to resume negotiations

Could they be trying to avert possible negative public perception even if they believe all they did was 100% legal? If you have ample funds and are willing to pay someone to make X easier for you, does your offer to pay them imply that X is against the law? If your voice sounds like someone famous, are you now prevented from getting any voice acting work? Because that famous person owns the rights to your voice? Tell me which law says this.


I don’t know why you’re asking me those last three questions. First, I’m not a lawyer. Second, I didn’t make any claims that could make those questions relevant.

Instead, I’ll repeat my earlier claim - this was reckless. If they were trying to avoid a strong negative perception, they failed. And they failed with an actor who sued Disney shortly after they paid her $20 million to make a movie.


You asked a good question about why they may have acted as they did, and I attempted to answer it. In hindsight, based on the results, it may look reckless, but decisions need to be judged based on what is known at the time they are made, and the public reaction was not a foregone outcome. The OpenAI Sky voice has been available since last September; why was there no outrage about it back then?

You can listen to the voices side by side:

https://www.reddit.com/r/ChatGPT/comments/1cwy6wz/comment/l4...

And here is the voice of another actress (Rashida Jones):

https://www.youtube.com/watch?v=385414AVZcA

This test is not blind, but YOU tell me which you think is more similar to the OpenAI Sky voice.

> And they failed with an actor who sued Disney shortly after they paid her $20 million to make a movie.

OpenAI did not fail. They suspended the sky voice and backed down so as not to further anger a segment of the public who views much of what OpenAI does in a negative light. Given the voice test above, do you seriously think OpenAI would lose in court? Would that matter to the segment of the population that is already outraged by AI? How are journalists and news companies affected by AI? How might their reporting be biased?


> OpenAI did not fail. They suspended the sky voice and backed down

Yeah, that's failing.


> While not defending OpenAI or Altman, the caveat here is that this was a voice actor using their natural voice, not an actor impersonating scarlett johansson.

How do you know?


"Sky's voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice"

https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-vo...


Ahh, because Sam "Insufficiently Candid" Altman has never lied before?


There is precedent.

Frito Lay wanted to use a Tom Waits song for an ad.

Since Waits is violently opposed to the use of his music in ads he declined.

So they hired an impersonator for the soundtrack.

Waits sued Frito Lay for voice misappropriation and false endorsement, and they had to cough up to the tune of $2.6 million for violating his rights.

This was upheld on appeal[0].

So, you absolutely have precedent and in my opinion it's galling that the tech bro'ship just doesn't give a shit about the rights of others.

[0]


I think it would be more like "precludes you from work (arguably) deceptively impersonating the more famous actor."


Speaking with your natural voice is not impersonating.


You should tell that to OpenAI, who are the ones selling it as "her".


Drawing a parallel to a calming persona of an always on life assistant from pop culture in a few throwaway social media posts from personal accounts such as "Hope Everyone's Ready" isn't "selling it as Her" any more than Anthropic is selling their offerings as a Star Trek communicator despite a few comments they've made on social media.


> an always on life assistant from pop culture

It's not "from pop culture", it's from a specific film. Starring Scarlett Johansson.



I think the issue is intent. It's fine if two voices happen to be similar. But it becomes a problem if you're explicitly trying to mimic someone's likeness in a way that is not obviously fair use (eg parody). If they reached out to Johansson first and then explicitly attempted to mimic her despite her refusal, it might be a problem. If the other voice was chosen first, and had nothing to do with sounding the same as Johansson, they should be fine.


Isn't there also something about an actor's voice versus an actor's performance?

Eg, James Earl Jones performing Darth Vader vs Mufasa vs Terence Mann are three different things.


You don't need fair use for something that isn't copyrightable.


No, it is. Waits v. Frito Lay was a successful lawsuit where Tom Waits sued Frito Lay for using an impression of his voice in a radio commercial. https://casetext.com/case/waits-v-frito-lay-inc

See also Midler v. Ford Motor Co. https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.


Correct me if I'm wrong, but that had nothing to do with copyright, so fair use is moot.


In California, personality rights have the same protections. https://en.wikipedia.org/wiki/California_Celebrities_Rights_...


> In California, personality rights have the same protections.

Makes sense if you read "California" as "Hollywood", which has movie stars where the rest of the world has intellectual property.


It's not that simple. Actors have a right to protect the use of their likeness in commercial projects like ads, and using a "soundalike" is not sufficient to say that isn't what you were trying to do. The relevant case law is Waits vs. Frito Lay. The fact that OpenAI approached her about using her voice twice and that Sam Altman tweeted about a movie she starred in makes her case much stronger than if they had just used a similar voice actor.


This is not the case. “ A voice, or other distinctive uncopyrightable features, is deemed as part of someone's identity who is famous for that feature and is thus controllable against unauthorized use. Impersonation of a voice, or similarly distinctive feature, must be granted permission by the original artist for a public impersonation, even for copyrighted materials.”

https://en.m.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.


> There are hundreds of people with similar voices.

Voice *actors* act. It is in the name. The voice they perform in is not their usual voice. A good voice actor can do dozens of different characters. If you hire a voice actor to impersonate someone else's voice, that is infringement. Bette Midler vs Ford, Tom Waits vs Frito Lay are the two big examples of court cases where a company hired voice actors to impersonate a celebrity for an ad, and lost big in court.


So when a cartoon show hires a sound-alike replacement voice actor so that the switch is hard to notice, does the former actor have a case against the show? Or perhaps the show instead has a case against the former voice actor for using that same character voice elsewhere, such as in radio advertising, to impersonate cartoon characters that are not licensed?


Theoretically yes. Which is why they disclaim that right in their work contract for a voice acting gig.

Believe it or not, these issues have been around for decades, and have been well settled for nearly as long.


So for the voice of the AI in the film "Her", who do you think has more rights to its being reused elsewhere in association with AI? The voice actor? The film's owners? Why, then, the current news?


Depends on the specifics of the contracts which are almost all unique except for some union and management agreed baselines.


No, voices can be exclusive. One good example is Bette Midler, who sued Ford in tort for misappropriation of voice and won on appeal to CA9. 849 F.2d 460.


The whole "Her" thing, and the fact that even Johansson's family and friends couldn't tell it apart are somewhat telling.

Or Altman could reveal the identity of the voice actress OpenAI did use. I'm sure that will happen, and remove all doubt...


> even Johansson's family and friends couldn't tell the voices apart, are somewhat telling.

You can listen to the voices side by side:

https://www.reddit.com/r/ChatGPT/comments/1cwy6wz/comment/l4...

And here is the voice of another actress (Rashida Jones):

https://www.youtube.com/watch?v=385414AVZcA

This test is not blind, but YOU tell me which you think is more similar to the OpenAI Sky voice. And what does that tell you about the likely court result for Johansson? And having reached this conclusion yourself, would you now think the other actress, Rashida Jones, is entitled to compensation based on this similarity test? Because there are no other women with similar voices? What might support from the friends and family of Rashida Jones be an indication of?


Looking like a celebrity is obviously not an issue. A lookalike being passed off as a celebrity is an issue.


Was OpenAI passing off their AI model as Johansson? Obviously not.

If anything OpenAI tried to mimic the AI from the film Her and owners of that film may try to seek compensation. I hope that fails but they can try.


> Was OpenAI passing off their AI model as Johansson?

Altman sure af was trying to invoke a character played by (and widely associated with) Johansson. So...

> Obviously not.

...[citation needed]


With generative AI, training may not be needed; it can be part of the prompt: imitate the voice in this file: her.mp3.


If that's not training, then it is, as you say in your suggested prompt... Imitation. You think that's somehow better?


that's..... that's training.


It's only training if it remembers it after the conversation is over.


Absolutely false.


It's not, though.


I believe professional singers' voices will be copyrighted in the future, if they aren't already.


Try to sing or play a cover just like the original of a song - YT will take it down in no time.

The same with white noise videos; they get hit with copyright strikes easily, or at least they did. I haven't checked, but I assume it's still the case.



Trademarked more likely.


I think the law doesn't allow impersonation.


So you really buy this bluff about using another similar sounding voice actor?


If the voice actor was cast... why bother reaching out to ScarJo?

Like, do you want to pay her fee, Sam? Because the general idea is to not pay the fee. Which is why you probably cast the voice actor before reaching out to Johansson.


Because it’s still great marketing to have Scarjo on board? That immediately returns positive ROI on the cost to hire her.


Which is also why it’s unethical to use a voice clearly designed to be mistaken for her.


I agree that it’s a bit of a sketchy thing to do, and potentially even illegal based on similar case law, but the commenter I responded to created an entire fake sequence of events that seems incredibly unlikely, when there’s a far simpler explanation.


I mean, I guess? I don't know how many people base their choice in AI model or application on the voice, but maybe I'm just not privy to that.


A potential answer to that is liability protection even when you feel like you are legally in the clear. It is still worth paying a sum to avoid a lawsuit you think you will win.

An example of this is Weird Al, who pays for the rights to things that are probably OK under fair use parody protection. Paying for the rights removes the possibility of a challenge.


Does Weird Al pay rights? I know he asks for permission to maintain his relationships with artists and to make sure he gets his share of songwriting credits (and the fees).

But does he pay for rights? I’ve never seen that before and I’d love to read more.



That says he asks for permission. His new song would generate songwriter credits and they’re paid out totally differently from regular royalties. Is that what you mean by him paying for rights?


Rereading your comment, I see that my answer rather falls short of your question. I don't claim to know anything about Weird Al beyond what he wrote on that page.


Honestly pal, I really appreciate you trying! I'm one of very few people strange enough to care about the minutiae of this. I'm grateful that you jumped into my weird rabbit hole with me for a while. It was kind of you to try to help me.


Now you've got me doubting myself. It was covered in a Tom Scott video. I'll have a look for it.

https://www.youtube.com/watch?v=1Jwo5qc78QU&t=485s


IANAL - but I think there is some sort of carve-out for parody? @sama and OpenAI are clearly not parodying "Her" - especially with that paper trail.

This is a great question and I hope someone here with requisite knowledge can help.


My understanding is he doesn’t have to ask permission but does for two purposes. It’s important to him to keep good relationships with artists, and he wants to make sure that he gets songwriting credits because those are paid differently (and are often more lucrative) than royalties from recordings.

I’d love to find out if he directly pays artists for rights. That would be really interesting and would add a whole dimension to his problems with Prince.


Is there any way to confirm that they actually did hire a voice actor prior to reaching out to ScarJo?


ScarJo claims they reached out to her just 2 days prior to demoing the voice that sounded like her, and (I believe) OpenAI outright claimed that they hired a different voice actor, though they didn't admit that they instructed her to try to sound like Scarlett's character in Her, which could make or break Scarlett's case.


They claimed they hired someone and they can't tell you who it is but pretty please trust them that they hired someone.


ScarJo claims they reached out to her far earlier, when she rejected them, and the 2 days prior to launch was an attempted follow-up by OAI that didn't happen.


How can we tell if the model was trained exclusively on data from the voice actor and not ScarJo?

The hiring of the voice actor could be complete misdirection for all we know.


It's not a vague suggestion - in the full statement that's been reported elsewhere, he explicitly says it.

> The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.

I'm skeptical whether this is true, but it's a pretty unambiguous and non-sneaky denial.


Scenario:

- The creator of a new widget takes that widget to another widget manufacturer and says "Would you like to put your stamp on this? It's similar to yours, yet derivative enough, and we would both benefit."

- The other widget manufacturer says "no".

- The creator of the widget then puts the badge on the widget anyway, and gets called out / faces legal action.

- The creator of the widget says "Well, we planned to put the badge on there anyway, before even considering the other widget manufacturer. It's just coincidence."

This shouldn't even go to court. Laughable that the face of modern tech is cheesing this much.


> The statement sounds like it's written by a lawyer to be technically true while implying something that is actually false.

That describes nearly every statement to ever come out of a CEO's mouth. (Or anyone else whose primary job is marketing.)


> He sounds sneaky, evasive and intentionally deceptive.

Well, here's Yishan Wong describing how Altman and the Reddit founders have conned Conde Nast: https://reddit.com/r/AskReddit/comments/3cs78i/whats_the_bes... he answers at https://reddit.com/r/AskReddit/comments/3cs78i/whats_the_bes...

Cool story bro.

Except I could never have predicted the part where you resigned on the spot :)

Other than that, child's play for me.

Thanks for the help. I mean, thanks for your service as CEO.


I'm surprised no one's pointing out the similarity between initial consonants in Sky and Scarlett. It seems deliberate.


I agree that he should have been honest, but from the opposite perspective.

Altman should have said, "Yes, we made the voice similar to this washed-up actress, but her voice is not much different from anyone else with similar regional upbringing, year of birth, habits, and ethnic background, so we invite anyone else born in the mid-eighties, raised in Greenwich, and with Danish heritage, to sue us too. We'll see how well you do in court. Otherwise, get fucked."

This whole thing with anybody giving a shit about your voice, which isn't even yours, as it's a product of your environment and genes, and will be strikingly similar to anyone with similarities thereof, is insane.

Altman shouldn't have used weasel words, I agree. He should have owned it, because it's a total non-issue, and the people upset about it need to be shamed as the Luddite assholes that they are.


> Danish heritage

Swedish, I would have guessed? Danes much more frequently use "Johanssen" with an 'e', AFAIK.


Non-sequitur statements like this drive me nuts. Somehow, politicians and executive types learn how to use just enough of them to make the audience forget what they're not saying.


It's quite funny (not sure if ironic) that in the context of OpenAI, ChatGPT can do exactly the same thing: generate a string of sentences that from a cursory skim might sound about right, but when you read with attention you find all the cracks and incongruities in the generated text.


> We should not give a sneaky, deceptive and manipulative person this much power over our future.

I think this should be applied to our government. In my opinion, it is a failing in the structure of our government that those running the country control the police and appear to rarely be investigated unless by the request of a political opponent. They are seemingly outside of the law. It would be better if they were under perpetual investigation; forever kept in check. We should have assurance that those leading our country are not villainous traitors.


> those running the country control the police

Interestingly one of the things that came out of 2020 was that nobody appears willing to control the police. Unless by police you mean FBI, which would both make sense for investigating a national politician and be directly under the control of the executive.


> one of the things that came out of 2020 was that nobody appears willing to control the police

While I don't agree with that statement, I will clarify that I was using the term "police" to encompass all agencies in both the USA and Canada capable of legally conducting an investigation at the federal level and carrying out an arrest. As far as I am aware, these agencies are all funded by our federal governments. Even though in my mind I was thinking of only the USA and Canada, the structural flaw probably applies to most governments, if not all governments (speculating). The flaw being that the leaders of our nations conduct national affairs as though they are shielded from the law policing their citizens. They are getting away with using our national resources (financial, material, human, etc.) in ways that may benefit their own agendas, but are observably harmful to our economy and therefore to the citizens at large.

If an investigation could prove that my speculation is true, then it would be in both our nations' best interest to deal with the problem both swiftly and legally. My hope would be that such an outcome would instigate reform to address the root cause. Without an investigation, we are at the mercy of waiting for the next election, but if our leaders are egregiously harming the interests of our nations' citizens as a whole, we should not have to wait until their term is complete. I will add one more thing: the problem is not limited to economics, but extends to the abuse of the press and education to influence how we as a nation are able to learn about and understand both national and global politics.


Recall the adage that power corrupts and absolute power corrupts absolutely. It certainly changes your perspective.


> We should not give a sneaky, deceptive and manipulative person this much power over our future.

Sneaky, deceptive, and manipulative is a tag line for many billionaires. You don't get that rich without stepping on many people.


In highly competitive industries, some may resort to ruthless tactics to outmaneuver their competitors... But I want to believe that not all billionaires are like that.


It doesn’t sound like her


I listened to a comedy podcast early last week that was using Chatgpt4 with this voice to make some funny bits/jokes.

Without having any context about who the voice was, or the "Drama" between OpenAI and actress in question, or even really being aware of Scarlett Johansson's body of work, I immediately went "Oh that's Scarlett Johansson or whatever, cool"

To read all of this after the fact is almost comical. It's as if the powers that be realized the issues with the "one-man-in-charge-of-AI" platform and created this almost unbelievable story to discredit him.


Jim Carrey made his career by impersonating celebrity voices. So did Rich Little.


When Jim Carrey is impersonating, it's clear that it's Jim Carrey impersonating someone for comedy's sake, not providing a service in lieu of someone else. In other words, Jim Carrey isn't getting paid to stand in for Jack Nicholson, for example. Otherwise, it looks more like the Midler vs Ford Motor Co. case[1]

[1] https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.


Are you referring to doing impressions where the act lasts for a few minutes, or are you saying that Jim Carrey actually impersonated other celebrity voice for like a whole movie or interview? There is a difference, I think. One feels like “fair use” while the other would seem more like “plagiarism”.


Parody is covered under fair use.


As pointed out upthread, fair use is an exemption for copyright. You don’t need fair use for something that isn’t copyrighted (and, indeed, isn’t even copyrightable).


The voice and expression are copyrightable; that's how audiobooks fall under copyright protection.


Did Jim Carrey provide a service for anyone to impersonate a voice?


Sure. How do you think he made money at it?


Pop goes the weasel.


"I think it's fine for a profit-driven corporation to impersonate people on a large scale without their permission. Jim Carrey did it..."


It doesn't matter because Sky's voice is materially and objectively different from ScarJo's.

Anyone who heard them both side by side would immediately realize this.


The clips that I have seen sound very similar to Scarlett Johansson to me, to the point that I thought it was her likeness on purpose. Is this Sky? https://www.tiktok.com/@kylephilippi/video/73185169285097751...


https://soundcloud.com/peter-marreck-fb/sky-voice-and-scarjo...

I made this myself. They do not sound the same to me. Am I living in a bizarro world?

In fact I think Rashida Jones would be a closer (but still not identical) match vs Scarjo

IN FACT, I bet any young woman's voice spoken clearly would sound as similar



I already made my own, below. Whatever similarity that exists here is not significant enough to merit ScarJo throwing a fit about it.

Rashida Jones, as your link indicates, might be a closer match.

Or literally any young woman who enunciates clearly.

https://soundcloud.com/peter-marreck-fb/sky-voice-and-scarjo...


lol


https://soundcloud.com/peter-marreck-fb/sky-voice-and-scarjo...

I'm sorry but either you are tone-deaf or these are not remotely the same voice. I made this myself after getting fed up with this bullshit.


This is a typical "move fast and break things" mentality.. except that mentality betrays Sam's statements about doing a bunch of this stuff "carefully" etc.. it's all a smokescreen.. nobody is going to realistically stop working on AGI in order to be careful.. basically AGI is being pursued like the race to get the atom bomb.. so yeah, history tells us it's full speed ahead with no brakes. Scarjo is just the latest person getting stomped on along the way.. eventually it will be a whole ton of people getting stomped on.. whoops!


"This is a typical "move fast and break things" mentality.. " with a big dash of “there’s no such thing as bad publicity” thinking thrown in for good measure.

Staying relevant is all that matters these days.


I would put this in the "better to ask for forgiveness than permission" bucket. Which is also a typical mentality in SV.


But they did ask permission, and were denied.


And moved fast and broke things anyway, which is also typical of SV entitlement.


[flagged]


> "Move fast and break things" is explicitly, exactly the wrong attitude for the person/team in charge of a world-changing superpower.

Thankfully, they aren't in charge of anything like that.


Arguably, allowing anyone to create very believable lies is a world-changing superpower.


How do you know what the effects and importance of current gen AI will be?


Sure AF trying hard to be / would like to think they are.


Move Smooth & Fix Things (tm)


If the Nazis had had the nuclear bomb, then when they were backed into a corner they quite possibly would have dropped it.


There are many people with voices similar to Scarlett Johansson's. If SJ is unwilling to be a voice actor for OpenAI, then why should OpenAI not find a similar voice and use that instead? SJ certainly does not have a monopoly on all voices similar to hers. Anyone in possession of such a voice has the same right as SJ to monetize it. And someone did in fact exercise that right. If you compare the Sky voice to SJ's, they're not the same.

OpenAI's mistake was caving to SJ. They should have kept Sky and told SJ to get lost. If SJ sued, they could simply prove another voice actor was used and make the legitimate argument that SJ doesn't have a monopoly on voices similar to hers.


I too am mystified.

I think what’s going on here is that Scarlett is famous, and so media outlets will widely cover this. In other words, this latest incident hasn’t riled up people any more than usual — if you scan the comments, they’re not much different from how people already felt about OpenAI. But now there’s an excuse for everybody to voice their opinions simultaneously.

They’re acting like the company literally stole something.

It also didn’t help that OpenAI removed the Sky voice. Why would they do that unless they have something to hide? The answer of course is that Scarlett is famously rich, and a famously rich person can wage a famously expensive lawsuit against OpenAI, even if there’s no basis. But OpenAI should’ve paid the cost. Now it just looks like their hand was caught in some kind of cookie jar, even though no one can say precisely what kind of cookies were being stolen.


IANAL, but I think the mistake they made was constantly referencing the movie 'Her' when talking about Sky.


Regardless of the exact voice spectrum, the plot would apply with any flirty female voice. It was not a movie about Scarlett Johansson. It was a movie about an AI eliciting a relationship.

For the “her” reference(s?), was there anything beyond the single tweet?


> It was not a movie about Scarlett Johansson. It was a movie about AI

With Johansson voicing the AI. And now they're marketing their AI sounding like Johansson, referencing the movie that had Johansson voicing the AI.

Yeah, no similarities at all there.


> they're marketing their AI sounding like Johansson

This is subjective. I, personally, don't hear it, at all: https://news.ycombinator.com/item?id=40435695


100%. This whole thing is more stupidity than anything else. There is nothing wrong with using a voice that sounds like her. There is everything wrong with referencing the movie and sort of implying it is the voice from the movie. They could have easily let others make the connection. So dumb.


Why is it wrong to explicitly mimic a part played in a movie? Are we saying that the actor owns their portrayal of the role?

OpenAI should’ve owned their actions. "Yes, we wanted to get a voice that sounded like the one from Her." There’s nothing wrong with that.


> OpenAI should’ve owned their actions. "Yes, we wanted to get a voice that sounded like the one from Her." There’s nothing wrong with that.

https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.


Not an IP lawyer, but I think the company that produced the movie owns the relevant IP, and Johansson might also own IP around it.

You can have an opinion on it, but they are going to get sued. Just like I can't take Moana and throw her in an ad where it says "I like [insert cereal here]", they can't take a character and use it without expecting Disney/whoever to come sue them.


Actors get a lot of rights to their likeness.

So, yes maybe?


Hmm. Being able to say "thou shalt not make a character similar to Her" is a lot like saying "thou shalt not make a video game character similar to any other." It’s not an explicit copy, and their name for Sky was different. That’s the bar for the videogame industry; why should it be different for actors? Especially one that didn’t show her face.

This whole thing is reminiscent of Valve threatening to sue S2 for allegedly making a similar character. Unsurprisingly, the threats went nowhere.


You've really contorted the facts here. This isn't a character, it's a voice.

The voice sounds remarkably like Scarlett Johansson's.


It’s the other way around. The contortionists are on the other side of the issue. We’re talking about OpenAI hiring someone to use their natural speaking voice. As movies say, any similarity to existing people is completely coincidental from a legal perspective.

From a moral perspective, I can’t believe that people are trying to argue that someone’s voice should be protected under law. But that’s a personal opinion.


> We’re talking about OpenAI hiring someone to use their natural speaking voice.

How do you know?


They said so, and it’s what I would have done. I have no reason not to believe them.

Unfortunately a commenter pointed out that there’s legal precedent for protecting people’s voices from commercial usage specifically (thanks to a court case from four decades ago), so I probably wouldn’t have tried this. The cost of battling it out in the legal system is outweighed by the coolness factor of replicating Her. I personally feel it’s a battle worth winning, since it’s bogus that they have to worry about some annoyed celebrity, and your personal freedoms aren’t being trodden on in this case. But I can see why OpenAI would back down.

Now, if some company was e.g. trying to commercialize everybody’s voices at scale, this would be a different conversation. That should obviously not be allowed, but that’s not what OpenAI did here. Replicating a culturally significant voice is one of the coolest aspects of AI (have you seen those recreations of historical voices from other languages translated into English? If not, you’re missing out).


Do you always believe everything a corporation tells you?

If so, I have a bridge you might be interested in buying


No. But in this particular case, there are two factors that make that irrelevant for me. One, I would have made their same mistake. (If I was Sam, I too would have found it a really cool idea to make GPT have the voice of Her, and I too would not have realized there was one dumb court case from the 80s standing in the way of that.)

Two, it’s bogus that conceptually this isn’t allowed. I’m already anti-IP — I think that IP is a tool that corporations wield to prevent us from using "their" ideas, not to protect us from being exploited as workers. And now this is yet another thing we’re Not Allowed To Do. Great, that sounds like a wonderful world, just peachy. Next time maybe we’ll stop people from monetizing the act of having fun at all, and then the circle of restrictions will be complete.

Or, another way of putting it: poor Scarlett, whatever will she do? Her voice is being actively exploited by a corporation. Oh no.

In reality, she’s rich, powerful, and will be absolutely fine. She’d get over it. The sole reason that she’s being allowed to act like a bully is because the law allows her to (just barely, in this case, but there is one legal precedent) and everyone happens to hate or fear OpenAI, so people love rooting for their downfall and calling Sam an evil sociopath.

Someone, please, make me a moral, ethical argument why what they did here was wrong. I’m happy to change my mind on this. Name one good reason that they shouldn’t be allowed to replicate Her. It would’ve been cool as fuck, and sometimes it feels like I’m the only one who thinks so, other than OpenAI.


"This is perfectly legal!"

Actually, there's a similar court case from 1988 that creates legal precedent for her to sue.

"That's just one case! And it's from 1988! That's 36 years ago: rounded up, that's 4 decades!"

Actually, there's a court case from 1992 that built on that judgement and expanded it to define a specific kind of tort.

"That's bad law! Forget the law! I demand a moral justification."

Anyway, asking a person if you can make money off their identity, them saying no, and you going ahead and doing that anyway seems challenging to justify on moral grounds. I don't think you're willing to change your mind, your claim notwithstanding.


If you approach a debate from a bad faith standpoint, don’t be surprised when the other person doesn’t change their mind. "I think you’re a liar" is a great way to make them nope out.

Which is a shame, since you had a decent argument.

Except it isn’t one. Again, you’re acting like OpenAI tried to profit off of Scarlett. They tried to profit off of the portrayal she did in the movie Her. These are not the same thing, and treating them as interchangeable is some next-level moral rationalization. One is taking advantage of someone. The other is what the movie industry is for.

Now, where’s this case from 1992 that expanded and defined the scope of this?


> Except it isn’t. Again, you’re acting like OpenAI tried to profit off of Scarlett. They tried to profit off of the portrayal she did in the movie Her.

Ahhh... so you admit OpenAI has been shady, but you argue they're actually ripping off Spike Jonze, not Scarlett Johansson?

HEH. The people who say Sam is shady aren't really interested in this distinction.

(And you're wrong, both ScarJo and the film own aspects of the character they created together.)


> Again, you’re acting like OpenAI tried to profit off of Scarlett. They tried to profit off of the portrayal she did in the movie Her.

From her statement:

> I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.

So, they wanted to profit off of her voice, as her voice is comforting. She said no, and they did it anyway. Nothing about, "come in and do that song and dance from your old movie."

> where’s this case from 1992

https://news.ycombinator.com/item?id=40435928


> If you approach a debate from a bad faith standpoint, don’t be surprised when the other person doesn’t change their mind.

Yeah, so stop doing that then.


> I have no reason not to believe them.

Seems you must have reason to want to believe them.

Otherwise you'd have noticed all the reasons not to.


> then why should OpenAI not find a similar voice and use that instead?

That's assuming they did; right now they're asking us to pretty please trust them that their girlfriend from Canada is really real! She's real, you guys! No, I can't show her to you.


Agree. And what about people who look similar to SJ? Are they precluded from acting jobs, simply because SJ became an actor first?


I encourage you to look through this case: https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.


1. The case is from 1988. That’s the year I was born. Societal norms are in a constant state of flux, and this one case from 36 years ago isn’t really an indication of the current state of how case law will play out.

2. Ford explicitly hired an impersonator. OpenAI hired someone that sounded like her, and it’s her natural voice. Should movies be held to the same standard when casting their actors? This is about as absurd as saying that you’re not allowed to hire an actor to play a role.


Midler is actually quite similar. Midler didn't want to do a commercial, and refused an offer, so they hired a sound-alike whose voice fooled her friends. The appellate court held that Ford and its advertising agency had "misappropriated" Midler's voice.

Waits v. Frito Lay, Inc was '92, and cited it. They used a Tom Waits-sounding voice on an original song, and Waits successfully sued:

> Discussing the right of publicity, the Ninth Circuit affirmed the jury’s verdict that the defendants had committed the “Midler tort” by misappropriating Tom Waits’ voice for commercial purposes. The Midler tort is a species of violation of the right of publicity that protects against the unauthorized imitation of a celebrity’s voice which is distinctive and widely known, for commercial purposes.

https://tiplj.org/wp-content/uploads/Volumes/v1/v1p109.pdf

Of course, who knows what a court will find at the end of this. There is precedent, however.


Thank you. I didn’t know it was similar specifically for voices in commercial use.

That’s annoying, but we live in a country with lots of annoying laws that we nonetheless abide by. In this case I guess OpenAI just didn’t want to risk losing a court battle.

I still think legal = moral is mistaken in general, and from a moral standpoint it’s bogus that OpenAI couldn’t replicate the movie Her. It would’ve been cool. But, people can feel however they want to feel about it, and my personal opinion is worth about two milkshakes. But it’s still strange to me that anyone has a problem with what they did.


I was born in 1983 and it is wrong to make profit off of someone else's art without their permission. It isn't strange at all. This includes using an impersonator. This excludes parody intentions.

So the overall argument isn't strange; you just disagree without having articulated exactly what biases you toward disagreeing. It is ultimately a moral disagreement.


> OpenAI hired someone that sounded like her, and it’s her natural voice.

They say so, yes. Seems like they didn't want to go through discovery in order to prove it.


> The case is from 1988. That’s the year I was born. Societal norms are in a constant state of flux, and this one case from 36 years ago isn’t really an indication of the current state of how case law will play out.

Correct, while Midler presents a similar fact pattern and is a frequently taught and cited foundational case in this area, the case law has evolved since Midler toward even stronger protection of celebrity publicity rights, one that is even more explicitly not concerned with the mechanism by which the identity is appropriated. Waits v. Frito Lay (1992), another case where a voice sound-alike was a specific issue, has been mentioned in the thread, but White v. Samsung Electronics America (1993) [0], while its fact pattern wasn't centered on sound-alike voice appropriation, may be more important in that it underlines that the mechanism of appropriation is immaterial so long as the appropriation can be shown:

—quote—

In Midler, this court held that, even though the defendants had not used Midler's name or likeness, Midler had stated a claim for violation of her California common law right of publicity because "the defendants … for their own profit in selling their product did appropriate part of her identity" by using a Midler sound-alike. Id. at 463-64.

In Carson v. Here's Johnny Portable Toilets, Inc., 698 F.2d 831 (6th Cir. 1983), the defendant had marketed portable toilets under the brand name "Here's Johnny"--Johnny Carson's signature "Tonight Show" introduction–without Carson's permission. The district court had dismissed Carson's Michigan common law right of publicity claim because the defendants had not used Carson's "name or likeness." Id. at 835. In reversing the district court, the sixth circuit found "the district court's conception of the right of publicity … too narrow" and held that the right was implicated because the defendant had appropriated Carson's identity by using, inter alia, the phrase "Here's Johnny." Id. at 835-37.

These cases teach not only that the common law right of publicity reaches means of appropriation other than name or likeness, but that the specific means of appropriation are relevant only for determining whether the defendant has in fact appropriated the plaintiff's identity. The right of publicity does not require that appropriations of identity be accomplished through particular means to be actionable. It is noteworthy that the Midler and Carson defendants not only avoided using the plaintiff's name or likeness, but they also avoided appropriating the celebrity's voice, signature, and photograph. The photograph in Motschenbacher did include the plaintiff, but because the plaintiff was not visible the driver could have been an actor or dummy and the analysis in the case would have been the same.

Although the defendants in these cases avoided the most obvious means of appropriating the plaintiffs' identities, each of their actions directly implicated the commercial interests which the right of publicity is designed to protect.

–end quote–

> Ford explicitly hired an impersonator. OpenAI hired someone that sounded like her, and it’s her natural voice.

Hiring a natural sound-alike voice vs. an impersonator as a mechanism is not the legal issue; the issue is the intent of the defendant in so doing (Ford in the Midler case, OpenAI in a hypothetical Johansson lawsuit) and the commercial effect of them doing so.

[0] https://law.justia.com/cases/federal/appellate-courts/F2/971...


Nice write up, thanks.

Unrelated, but as someone who came along into this world after Carson's Tonight Show, I had no idea that that moment from The Shining was a play on that. Today's lucky 10,000.


You keep harping on the "forty-year-old law!" (actually only 36), as if that meant it were somehow bad or irrelevant.

So I guess you wouldn't mind if someone killed you, since laws against murder are much older than that? Shit, outmoded old boomer thinking, amirite?

Wow, when you realise how you're coming off here...


>OpenAI's mistake was caving to SJ. They should have kept Sky and told SJ to get lost. If SJ sued, they could simply prove another voice actor was used and make the legitimate argument that SJ doesn't have a monopoly on voices similar to hers.

Yes, they should not have reached out again, but now they are screwed. In no way will they want a trial and associated discovery. SJ can write her own ticket here.


OpenAI caved immediately because they knew they would lose a lawsuit and be looking at a minimum of an 8 figure payout.

Voice impersonation has been a settled matter for decades. It doesn't matter that they used another actress. What matters is that they tried to pass the voice off as SJ's voice several times.


> OpenAI's mistake was caving to SJ. ... If SJ sued, they could simply prove another voice actor was used and make the legitimate argument that SJ doesn't have a monopoly on voices similar to hers.

Or... hear me out... maybe they couldn't prove that, which is why they caved. Caved within a day or so of her lawyers asking "So if it's not SJ's voice, whose is it?"


Can they make that claim if SJ's voice exists in the training data before fine-tuning? We don't know what they train on.


If they told the voice actor to try to impersonate SJ, then Scarlett does have a case.

That may not be how it should work, but it is very much how the law currently works.


Not quite. All they had to do was tell themselves in discoverable email that they were going to seek out someone who sounded like ScarJo, or send emails to the recruiter saying they wanted someone to mimic the Her voice.


See Waits vs. Frito Lay.


In that case, as I understand it, the voice actor intentionally mimicked Waits, purposefully using his intonations, style of speech, and phrasing, all of which were not natural to the voice actor. He was intentionally mimicking Waits. I doubt the same claim can be made of the Sky voice actor.


Doesn't matter if the voice was natural or not if there are emails at OAI saying "find us someone who sounds just like ScarJo." I suspect there are and that's why SamA turned pussy and ran.


OpenAI had the opportunity to prove as much in court and chose not to.


Sky sounded quite different from SJ. Taking the voice down makes it seem like they're accepting guilt.


I wonder how long this thread will last on HN...

How much influence does @sama have around here nowadays?

For the record, I was never impressed with him - I am not aware of a single consequential thing he has done or built other than take the credit for the fine work of the AI scientists and engineers at OpenAI. It feels like the company is just a vehicle for his own ambition and legacy, not much else.


Hacker News is famously editorially independent from YC-affiliated people, and dang has said that he specifically avoids killing threads involving YC people/companies (not that Sam Altman is YC-affiliated anymore).


The thread is rightly being knocked off by the mods because there’s zero substance here. It’s a follow-on thread (knock one) to a public uproar (knock two) about something that isn’t representative of a new phenomenon (knock three). This isn’t what HN is for.


It's likely getting flagged organically (and due to the ratio of comments to upvotes, getting penalized by the flame war detector), but not due to a vast YC conspiracy.


Hm? You and I agree. There’s no conspiracy here. This is "bog standard moderation", as Dan would say.

Look at it this way: if the community didn’t flag it, it would be the mods’ duty to get this one off the front page. So whether it was the community or the mods is incidental.


I flagged the submission because it's flamebait. It's not intellectually rewarding. It's not suitable for HN. HN is not an advocacy platform.


> I wonder how long this thread will last on HN

Users flagged it and it also set off the flamewar detector. I don't think we'd turn the penalties off on this one because this article is derivative of the threads HN has already had on the recent things - threads like these:

Statement from Scarlett Johansson on the OpenAI "Sky" voice - https://news.ycombinator.com/item?id=40421225 - May 2024 (970 comments)

Jan Leike Resigns from OpenAI - https://news.ycombinator.com/item?id=40363273 - May 2024 (391 comments)

Ilya Sutskever to leave OpenAI - https://news.ycombinator.com/item?id=40361128 - May 2024 (780 comments)

Edit: also OpenAI departures: Why can’t former employees talk? - https://news.ycombinator.com/item?id=40393121 - May 2024 (961 comments)

Those were huge threads!

Sometimes media articles are driven by the topic getting discussed on Hacker News in the first place. That is: major HN thread -> journalist takes notice -> article about topic -> HN user submits article -> another HN thread—but now it's a repetitive one. We don't need that feedback loop, especially because the mind tends to resort to indignation to make up for the lack of amusement in repetitive content (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...), and the earlier threads have been indignant (and repetitive) enough already.

> How much influence does @sama have around here nowadays?

Zero. He never asked for any change about anything HN-related even while he was running YC, and certainly not since then. Btw Sam was the person who posted https://www.ycombinator.com/blog/two-hn-announcements/.

> For the record, I was never impressed with him

(I'll add a personal bit even though that's usually a bad idea... I remember hearing this kind of comment about Sam going back to the Loopt days. My theory is that it had to do with pg praising him so publicly—I think it evoked a "why him and not me?" feeling in readers. The weird-ironic thing is that the complaint has only grown as Sam has achieved more. Running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive. I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it. It's some kind of second-order phenomenon, and actually rather interesting. At least if you aren't Sam!)


>My theory is that it had to do with pg praising him so publicly—I think it evoked a "why him and not me?" feeling in readers. The weird-ironic thing is that the complaint has only grown as Sam has achieved more...running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive. I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it. It's some kind of second-order phenomenon, and actually rather interesting. At least if you aren't Sam!

You're right, posting this was a bad idea. It reads like a "neener neener you're just jealous" defense of someone you happen to like.


I can see how it might read that way! I just think the phenomenon is a curious one. Sometimes I post for that reason.

There is, however, a more dominant rule, which is never to contradict an angry crowd, because doing so only produces more of the same. I break that rule sometimes but not often.

(Edit: s/mob/crowd. I realized on my bike ride hours later that 'mob' was too harsh.)


> I can see how it might read that way!

But you just cannot see how it perhaps reads that way because it actually is precisely that way?


Indeed not. I think most people who have explored their own feelings of envy enough to notice how powerful they can be will read my comment closer to the way I intended it.


Oh yeah, sure, we all have those and they are indeed quite powerful. But still, your original

> > > I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it.

...certainly reads as if you thought that just because he might do a few good things, that would make all his (presumed) prior evil acts go away / be the figments of jealous imaginations. Would you say the same about, say, Hitler[1] -- if he unified gravity into quantum theory, brokered peace in the middle east, and cured cancer, should we all agree he's a great guy? Would those of us who said "That was great, thanks, but he's still an evil asshole" just be "jealous"?

If not, why should it be any different with Altman?

[1]: And no(, as I'm sure you know), that's not how Godwin's law works.

___

Side note: And I still find it rather sus that the other article, the one that came closest to exonerating him / them, was on the front page for at least twelve hours while this one (apparently, according to other commenters who had followed it) was for max two. "A coincidence that looks aforethought", as the old Swedish saying goes; it certainly didn't look less flamewarry than this, judging from the contents. But if it really was just due to the algorithm, a manual override (either way, bumping this or stomping that) might have improved at least the optics.


I figure we've each made our points about envy and jealousy and whatnot but I feel like I need to address the "sus" business. I explained what happened with the current thread here: https://news.ycombinator.com/item?id=40437018 - users flagged it and it set off the flamewar detector.

The difference with https://news.ycombinator.com/item?id=40448045 is that the latter story contained Significant New Information (SNI) relative to other recent threads. That's the criterion we apply when deciding whether or not to override penalties (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). It doesn't have to do with who an article is for or against; it has to do with not having the same discussions over and over.

> a manual override [...] might have improved at least the optics

Sure, and we often do that (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...), but in this case it didn't cross my mind because the current thread was so obviously derivative of previous discussions that had been on HN's front page for 18+ hours in recent days.

And in any case the next day it flipped back and this story spent 16 hours on the front page:

Leaked OpenAI documents reveal aggressive tactics toward former employees - https://news.ycombinator.com/item?id=40447431 - May 2024 (515 comments)

... so I think we're good on "optics". The important point is that the last link (the vox.com article) contained SNI, whereas the slate.com article was a copycat piece piggybacking on other reporting. In the case of a Major Ongoing Topic (MOT) like this one, that's the key distinction: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Too late to edit, so adding here: Not that I'm saying Altman is anywhere near Hitler-level evil, of course. Just reducing the principle ad absurdum.


> (I'll add a personal bit even though that's usually a bad idea... I remember hearing this kind of comment about Sam going back to the Loopt days. My theory is that it had to do with pg praising him so publicly—I think it evoked a "why him and not me?" feeling in readers. The weird-ironic thing is that the complaint has only grown as Sam has achieved more. Running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive. I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it. It's some kind of second-order phenomenon, and actually rather interesting. At least if you aren't Sam!)

I'm not saying he did not achieve anything significant, but it's not clear what those things are, other than having the backing of PG and others.

I'm older and have played the corporate game of thrones. I have seen far too many selfish sycophants rise to leadership, only to eventually make things worse rather than better by using their positions as a platform for their own self-interest. It's a big reason why companies like Boeing and GE become hollow shells dependent on government assistance, while companies like Costco and Alcoa last for a long time.

So I want to know what exactly sama has done to deserve the prestige and recognition that he has. Because right now, it looks like a cult of personality.


> I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it.

I suspect in such a case people may say that Sam's just using the work of other people or his employees. But then again I know nothing much of him personally and hence wouldn't really want to pick a "side".


> Running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive.

Yeah, creating massive hype about regurgitating others' thoughts is kind of similar to becoming the warden of the world's largest digital pris...eh, walled garden.

They're both massive somethings, all right.


Couldn't this be said for most CEOs? They're never really the ones doing the dirty work.


Not all CEOs are terrible.


I love how we went from questioning copyright & licensing to "GPT vs Google, which one is better". To every artist or engineer out there who contributed to the general knowledge: you lost, everything you've ever done to help other people is now part of the models and there's nothing you can do to take it back. What even happened to the copyright strikes artists were supposed to bring up against these AI companies? That seems like 100 years ago :)


There are currently something like 10 lawsuits against generative AI companies working through the courts, including the one from Sarah Andersen, Kelly McKernan, and Karla Ortiz, one from Getty Images, one from the Authors Guild, and one from the New York Times. It should be shocking to nobody that lawsuits take time to litigate, and until the courts settle the questions at hand, OpenAI and its ilk are operating in a legal gray area.


> and until the court settles the questions at hand, Open AI and its ilk are operating in a legal gray area.

My understanding of western law is that things are OK unless the law forbids them. So they are operating in an area that under _current_ laws is OK, but because of what may be at stake, many wish the current laws were different and are willing to use litigation and lobbying efforts to that end.

This is NOT IN REPLY TO YOU but a general observation: Imagine the litigation that will happen when brain implants enable brain-to-brain sharing of sensations and thoughts. Imagine the horrible copyright abuse! How will the publishing industry and sports industry and Hollywood control the rampant piracy?!?


Why are we imagining a hypothetical situation in the context of talking about things that are currently happening? It's an interesting thought experiment but it's kind of irrelevant because brain implants are nowhere near that level and as far as I know, freedom of thought is already part of western law. I am not a lawyer though, I just think we can think about the actual damages to real people rather than make shit up.


I put the "NOT IN REPLY TO YOU" since I meant that as a thought experiment of a possible future that where a similar situation may arise. Notice it is not freedom of thought that is in question. What is in question is freedom to share your sensations with others. You are watching a live football game and you share the sensations (what you see / hear / smell) with friends and family who are not there, etc. add to this technology that enables perfect memory of you sensations and instantly sharing them. In that possible future many will litigate and complain that their copyright and broadcast rights are being violated and they must be compensated much like what is happening with generative AI today. Sure this is scifi today. So were "flying machines" and "moon visits" and magic of our global communication pocket devices, etc. Gpt4o is a bunch of matrix math being done on high purity ore and refined sand powered by the sun / wind / splitting atoms / ... A century back few would believe it. Even a decade back, any predictions about a real AI like gpt4o working in just a decade, would you believe such predictions?


> Sure this is scifi today. So were "flying machines" and "moon visits" and magic of our global communication pocket devices, etc.

Well, "moon visits" are well on their way to being SF again. Or old fairy tales. :-(


I'm happy to contribute to the general knowledge. It's better than being forgotten and having no impact.


Oh you will be forgotten. All your knowledge will come from some trademarked AI bot and you won't even get a linkback


> Oh you will be forgotten.

Not your parent commenter, but please allow me to enlighten you.

https://en.wikipedia.org/wiki/Walter_Bright

https://en.wikipedia.org/wiki/D_(programming_language)

When in doubt, always double-check who you're replying to on HN. We are lucky to have many great minds around.


The point is still salient. People who don’t have Wikipedia links will be forgotten. Though I guess maybe that’s always been true.


Your legacy can continue as part of the AI trained on your output.

What would you prefer? Would you want people to remember your name? Your face? Your voice? Which people? How often should they have to remember you? For how many thousands of years?


> Your legacy can continue as part of the AI trained on your output.

That is one incredibly dense dystopian sentence right there. Damn.


That's like saying you're happy that someone who plagiarized your work got famous, because you live on in their "trained output".


In this specific case I asked ChatGPT, which said "Walter Bright is the creator of the D programming language. He's a talented programmer!", so maybe he specifically won't be forgotten. Most of the rest of us probably will, though.


I have no idea if I am talented or not. I do know that I've spent a lot of time programming, and it's inevitable one would get better at it over time. I also learned from being around people who were really good, and were kind enough to help me.


I personally don't know if you are either, but I have had a long chat with Andrei about D. His warm praise and enthusiasm make me think you are!


I think your answer reveals an assumption that is important in this context.

You are assuming that who came up with knowledge is important. I think Walter was saying that he would rather the knowledge not be forgotten, not that he was the one who provided it.


unless you find a way to poison the AI so that it remembers you :)


> I love how we went from questioning copyright & licensing to "GPT vs Google, which one is better".

Have we? Certainly the people litigating haven't. And as this article notes, actors' newest contract does have protections against AI. SAG-AFTRA's press release states [0] they are pursuing legislation. That could be bluster or could go nowhere, but certainly people haven't given up.

[0]: https://www.sagaftra.org/sag-aftra-statement-regarding-scarl...


>there's nothing you can do to take it back

Given the fact that many, many people make their software MIT licensed (or rather, do whatever, I don't care license), I think most of us will be ok with that :)


I think that's a naive take. Derivative works are nothing new. What's new is that the price of this work is much lower with a tradeoff in quality. Even human copycats are still better than generative AI by miles.

The artist is not defined by their past work or other miscellaneous artifacts, but their perspective and creativity. This too is not a revelation. AI has nothing to do with this. It's just a means to an end.

The real problem is the legal stuff. Everything else is hype.

