> How much does maintaining the servers cost?
> It depends on the amount of traffic, but the minimum baseline is around several thousand US dollars a month. That's expected: inference is very GPU-intensive, and enough instances need to be spun up to handle the thousands of requests coming in every minute. Everything is paid out of pocket.
Wow, impressive commitment for something that's free.
- Spot instances
- Aggressive autoscaling
- Micro batching
These can reduce inference compute spend by huge amounts (90% reductions are not uncommon). ML, especially anything involving realtime inference, is an area where effective platform engineering makes a ridiculous difference even in the earliest days.
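None of these is exotic, either. Micro-batching, for example, just means briefly buffering incoming requests so the GPU runs them as one batch instead of many tiny ones. A minimal sketch of the idea (all names and timings here are illustrative, not from any particular serving framework):

```python
import queue
import threading
import time

def run_model(batch):
    # Stand-in for a real GPU forward pass; batching amortizes the
    # per-call overhead (kernel launches, memory transfers) over many requests.
    return [f"result:{x}" for x in batch]

class MicroBatcher:
    """Buffer requests until `max_batch` items arrive or `max_wait`
    seconds pass, then run them through the model together."""

    def __init__(self, max_batch=8, max_wait=0.01):
        self.max_batch = max_batch
        self.max_wait = max_wait
        self.requests = queue.Queue()

    def submit(self, payload):
        # Each caller gets an event and a slot to receive its result.
        done = threading.Event()
        slot = {}
        self.requests.put((payload, done, slot))
        done.wait()
        return slot["result"]

    def serve_forever(self):
        while True:
            batch = [self.requests.get()]  # block until the first request
            deadline = time.monotonic() + self.max_wait
            # Collect more requests until the batch is full or time is up.
            while len(batch) < self.max_batch:
                timeout = deadline - time.monotonic()
                if timeout <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=timeout))
                except queue.Empty:
                    break
            payloads = [p for p, _, _ in batch]
            for (_, done, slot), result in zip(batch, run_model(payloads)):
                slot["result"] = result
                done.set()
```

The trade-off is a small, bounded latency hit (`max_wait`) in exchange for much higher GPU utilization, which is usually an easy sell for inference workloads.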
Source: I help maintain open source ML infra for GPU inference and think about compute spend way too much https://github.com/cortexlabs/cortex
This is not true. A _lot_ of AI applications use algorithms such as logistic regression or random forests and don’t need GPUs - partly, of course, because GPUs are so expensive and these approaches are good enough (or more than good enough) for many applications.
There are a few key reasons why most realtime inference is done in the cloud:
- Scale. Deep learning models especially tend to have poor latency, especially as they grow in size. As a result, you need to scale up replicas to meet demand at a way lower level of traffic than you do for a normal web app. At one point, AI Dungeon needed over 700 servers to support just thousands of concurrent players.
- Cost. Related to the above, GPUs are really expensive to buy. A g4dn.xlarge instance (the most popular AWS EC2 instance for GPU inference) is $0.526/hour on demand. To hit $3,000 per month in spend, you'd need to be running ~8 of them 24/7. Prices vary if you buy GPUs outright, but you could expect 8 NVIDIA T4s to run around $20,000 at minimum, plus the cost of other components and maintenance. To be clear, that's very conservative--it's unlikely you'll get consistent traffic. What's more likely is that you'll have some periods of very little traffic where you need one or two GPUs, and other high-load periods where you'll need 10+.
- Flexibility. A less universal issue, but the cloud gives you much better access to chips at lower switching costs. If NVIDIA releases a new GPU that's even better for inference, switching to it (once it's available on your cloud) will be a tweak in your YAML. And if you ever switch to ASICs like AWS's Inferentia or GCP's TPUs, which in many cases offer far better performance and economics than GPUs, you'll naturally have to be on their cloud.
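The arithmetic in the cost point is easy to check against the on-demand price quoted above:

```python
# On-demand price for a g4dn.xlarge, as quoted in the comment above.
hourly_rate = 0.526
hours_per_month = 24 * 30  # ~720 hours in a month

monthly_per_instance = hourly_rate * hours_per_month  # dollars per instance
instances_for_3000 = 3000 / monthly_per_instance      # instances running 24/7

print(f"${monthly_per_instance:.2f}/month per instance")
print(f"{instances_for_3000:.1f} instances to hit $3,000/month")
```

One g4dn.xlarge comes to roughly $379/month, so about 8 of them running around the clock lands at the $3,000/month figure.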
However, there is a lot that can be done to lower the cost of inference even in the cloud. I listed some things in a comment higher up, but basically, there are some assumptions you can make with inference that allow you to optimize pretty hard on instance price and autoscaling behavior.
I have no reason to disbelieve it.
Being able to generate voices for games would enable a lot of interesting indie projects. IMO people should be paying more attention to the market implications of products like this than to the social implications. There are a lot of projects that just aren't feasible right now that could be if this kind of technology were more polished and generally available for commercial/self-hosted use. And in those cases, you don't even need to do the inference for them; makers will likely be willing to mark up their scripts themselves.
Anyway I digress. Congrats, this is really cool!
People will absolutely suffer harm from this tech, but hey, think about the dollars that could be made! No, we should absolutely be paying more attention to the social implications.
I'm not primarily interested about the dollars, I'm interested in allowing communities to do creative things. I think people are looking at this tech like it's only going to be used for deepfakes, and they're underestimating the extent it's going to be used to create voice-acted game mods, animations, anonymization tools, and other creative/helpful projects.
If you're really worried about this stuff though, you can take some comfort in the fact that by far the worst examples on the site are of real-world voices. This is currently technology that as far as I can see is far more suited for generating new voices or voicing cartoon characters with well-defined patterns/inflections than it is for imitating the president.
We already have stories like https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice...
That said, as far as harms go, I don't think this is so bad that it should preclude creative uses of this technology.
One, this tech absolutely could be used to fool someone. Not everyone will be listening with a critical ear. Played back over a phone, or with a phrase or two injected into otherwise genuine speech, it will fool many people.
I guarantee you someone will be using this to make their own MLP episodes on YouTube specifically designed to scare children or get them to do awful things.
Models presumably get better over time. It really won't be too much longer before people are able to fake celebrities, politicians, exes, authority figures, etc. As a fairly benign example, if I'd had this in high school, you'd better believe I would have called to excuse some of my absences.
I agree, I love the idea of generating some decent voice lines for my own games projects, but this also introduces issues of the rights of the original voice actors.
If you train a model to mimic a performance given by an actor, then use that model and fire the actor, isn't that potentially really problematic? (Also, it draws parallels to the Luddites who were not anti technology, but wanted to ensure that technology wasn't used in a way that reduced worker quality of life.)
And yes, I think there are helpful ways this could be deployed. I'm gender fluid, and I'd love to be able to adjust my voice digitally, but we need to be thinking about how this could cause harm first.
The problem I have here is that it's already not hard to fool people. I don't think it's feasible for us to put something that could be highly beneficial on hold just because we don't want to deal with social education efforts that we kind of already need to tackle anyway. Per your example, if we get rid of deepfakes, it's not clear to me that YouTube is going to be any safer. I already would not allow a child to browse YouTube unattended; people already generate the videos you're talking about.
And I know that people are putting this in a different category than general CGI, voice modulation, or consumer-grade apps like Photoshop. I'm not going to argue that it's necessarily wrong for people to be worried, but no matter how many times people tell me that this is fundamentally different, I still have not seen any serious evidence that this technology is going to be more dangerous than Photoshop, and I think it's going to be way easier to detect than a decent Photoshop job is. Photoshop's content-aware paste/fill tools are better than this example, and they arguably require less work to use.
And again... I'm sympathetic to concerns about moving too fast, but I just don't think there's any world, even if you could get rid of deepfakes entirely, where we don't need to be worried about media literacy and general skepticism. If people today don't realize that voices can already be convincingly faked, then that's a really serious problem, and if democratizing that ability causes society in general to become more aware of the potential of disinformation, then honestly that might even be a good thing that we should be encouraging.
So sure, concerns, but in my mind people are focusing on one particular implication that I don't think is particularly likely, and ignoring that responding to that concern is probably going to look the same no matter what our position on deepfakes is.
> If you train a model to mimic a performance given by an actor, then use that model and fire the actor, isn't that potentially really problematic?
I think that's a very complicated question. I would not assume that the loss of work for voice actors, who can shift into voice generation roles, is going to be a big enough downside that it overrules the upside of allowing ordinary people to start generating their own vtube avatars or commenting on and building on top of existing culture.
I've wondered about that angle as well. You can't put the genie back in the bottle, so maybe the best way to combat the threat of deepfaked misinformation is actually to take the opposite approach and make it as easy as possible for normal people to generate their own deepfakes; that way it becomes common knowledge that such things are possible (similar to how photoshop is common knowledge today).
And if you have to keep paying a person for something that a machine could do with (assuming, as per your post) 100% equal performance, isn't that also problematic? When the voices become as good as real actors, then yes, of course some of them will be out of a job. Just like the progress that has been going on for thousands of years.
It seems to struggle more and more as the voices get less cartoony/exaggerated.
Oh, I somehow forgot all of the TF2 characters. Some of them do struggle (Medic the most, I think) but everyone else seems incredibly good.
And the Daria characters, too. Honestly, the vast majority of characters are already near-perfect.
I think some of the best voices they have are characters like Twilight; she shows a ton of promise. But as it stands right now, I would still at least hesitate to use Twilight's voice in a project unless I didn't have other options. Chrysalis's voice is good, but again, she is an exaggerated cartoon character with a large amount of inflection. I would not use her voice in its current state without a lot of post-processing. Someone like the Spy I would consider unusable; it sounds to me like the character needs to clear their throat or something, and it's got a lot of strange artifacts. I definitely would consider the 10th Doctor unusable, even for just a hobby project or a voice assistant.
But... I don't know, maybe this is subjective. I can't just tell you that what you're hearing is wrong, if you like the results then you like the results :)
And again, I don't want to detract from how impressive they are. They are incredibly impressive, particularly because of how characters like Chrysalis emote. Extremely promising. But I still think there's a difference between 'impressive' and 'believable deepfake'.
I've been seeing quite a few skits being posted on /r/tf2 (https://www.reddit.com/r/tf2/comments/kr374q/honestly_idk_i_...) and all of the voices sound pretty much perfect to me. But as you said, it's subjective.
If a voice could be copyrighted, or if this was a trademark issue or something, I strongly suspect that this site would not fall under fair use regardless of whether or not it was commercial. But again, IANAL, so I don't feel confident making any kind of strong claim about that either.
The audio content (which includes voices) of the source work is copyrighted, and a mechanical transform of that work (which deep learning to mimic the voices clearly is) would seem to be a derivative in at least the literal sense.
Bob: “Hello, John.”
John: “Oh, hello there, Bob.”
Bob: “Yes, hello. It's what I said. Why do you keep repeating what I say, John?”
John: “I didn't repeat you! I merely said hello, you dimwit!”
Bob: “There you go, being condescending again. Fuck you!”
John: “What? You're the one who started it!”
Try it yourself, or write something different. Either way, good fun!
- We have had perfect image manipulation capabilities for quite some time now. We have had written text manipulation capabilities for hundreds of years.
- People will continue to believe what they believe, whether there is deep fake video and audio or not.
A Voice Deepfake Was Used To Scam A CEO Out Of $243,000:
I just found a video on YT with an example of recreating this in Melodyne: https://youtu.be/1oQn66gvwKA
I wonder if this will lead to a resurgence of "moon man" style videos with well-known characters rapping extremely offensive lyrics.
There is definitely a sense of ‘who is that’ coming from their little minds that they are sometimes quite perplexed about. ‘It’s a computer’ is starting to feel like a cop-out answer as these things improve...
The obvious way to get around this is to keep this as the showcase and to pay some people to add their voices to the paid version. I imagine this would sell just based on being decent TTS with a wide range of voices, even when people don't know the voices offered.
You can find a couple of minutes of talking from almost anyone, so the security implications are huge!
Amazing toy! Thanks for the "download" link, I'm creating a collection of GLaDOS phrases now.
Besides that, amazing results. Congratulations.
Not only that, but the creator seems cool and down to earth. Thanks for sharing, this is incredible work.
I can get about 90% of the quality of 15.ai currently. I think I could surpass 15.ai but not without some help.
Here's a sample from a TTS model + vocoder I released for it. I've no wish to deter the motivated, but it'd take a bit of figuring out how to set things up, and you'd need to read the docs and code to get oriented :)
Links to the models are here:
It was originally trained on two novels read by the same narrator on LibriVox (i.e., in the public domain).
Some of the audio is read in accents other than the narrator's main one; ideally, that audio would have been removed. Doing so would be expected to improve voice quality while reducing the total amount of data used and, as a bonus, cutting training time too.
There's also a version in docker: https://github.com/synesthesiam/docker-mozillatts
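If you go the Docker route, the container exposes a small HTTP API you can hit from any language. A sketch of querying it from Python (the port and endpoint path follow the docker-mozillatts README at the time of writing; check your version before relying on them):

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # used only when the container is running

def tts_request_url(text, host="localhost", port=5002):
    """Build the synthesis URL for the dockerized Mozilla TTS server.

    The server is assumed to expose GET /api/tts?text=... and respond
    with WAV audio bytes.
    """
    return f"http://{host}:{port}/api/tts?" + urlencode({"text": text})

url = tts_request_url("Hello from Mozilla TTS")
# With the container running (docker run -p 5002:5002 ...), this fetches audio:
# audio = urlopen(url).read()
# open("hello.wav", "wb").write(audio)
```

The network call is commented out so the snippet stands alone; the point is just that synthesis is a single URL-encoded GET away once the container is up.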
And various Colabs too, which are fairly easy to get going with: https://github.com/mozilla/TTS/wiki/TTS-Notebooks-and-Tutori...
Found an answer:
"There's no point in releasing a poorly done model, and to do so for the sake of popularity would be despicable. My goal is to achieve indistinguishability, which I certainly know is possible. Anything short of near-perfection is unacceptable.
I do plan to compile and publish my findings in the future, but nothing is set in stone yet. I know that the model can be improved even further, and I'd prefer to be as comprehensive as possible."
AI and ML users are massively benefiting from open source but too often refuse to release their data. It's like we're back in the Middle Ages and alchemy is back in style.
I was talking about ML in general, not just this project. See OpenAI and their latest release for example: no public product, no trained model. Just alchemy.