
Hey, one of the Suno founders/creators of Bark here. Thanks for all the comments, we love seeing how we can improve things in the future. At Suno we work on audio foundation models, creating speech, music, sound effects, etc.

Text to speech was a natural playground for us to share with the community and get some feedback. Given that this model is a full GPT model, the text input is merely guidance, and the model can technically create any audio from scratch even without input text, aka hallucinations or audio continuation.

When used as a TTS model, it’s very different from the awesome high quality TTS models already available. It produces a much wider range of audio: the same text could come out as a high quality studio recording of an actor, or as two people shouting in an argument at a noisy bar. Excited to see what the community can build and what we can learn for future products.

Please let us know if you have any feedback, or reach out if you’re interested in working on this: bark@suno.ai




This tech will be used by crooks to automate attacks. Generate the language using GPT-4 and the audio using Bark, and then start making phone calls. Because it’s open source, all you need is GPUs. This is not a criticism. I’m impressed and grateful for the openness. Everyone needs to wake up and recognize that these attacks are coming at us essentially right now.


yawn who cares? If it’s an issue, let law enforcement handle it. Everything has nefarious uses. Humanity marches on.


As technology gets more powerful more quickly, and the rule of law becomes more and more unable to prevent societal damage, this response becomes woefully inadequate.

For reference, look at how the societal damage of social networks has been handled: too little, too late. Same goes for RentTech.

But, I don't know the solution. The common computer has become so powerful that we cannot simply rely on inaccessible materials to prevent the danger of overly-powerful tech spreading too fast, as we do with bioweapons or traditional WMDs.

Fight tech with more tech, I suppose.


You'll care when it affects you.

We want to go back to New York in the 90s? Petty theft everywhere?


The technology already exists; shutting down a company and pretending that it doesn't won't solve the problem.


Like kitchen knives, which are sometimes used to end unhappy relationships. Is this really an argument worth having? Wouldn't you feel silly making it to a household knife maker?


That comparison holds up better if you imagine a world without knives or sharp objects of any kind. Now you can suddenly do tremendous harm by wielding a pointy stick. I don't think it's reaching to point out the dangers you just introduced.

With great power comes .... ? Profit?


Except that world used to exist, back in the stone age, and we're all far better off now because we didn't choose to live in fear of the misuse of powerful tools.


I wouldn’t want to be around the first few guys with pointy weapons, but I am sure you would be fine.


Recommendation: encode sub-audible tracking markers in the synthesized speech that include GPS coordinates, IP address, timestamp, and country of origin. We already do something similar to trace bootleg movies, so the same methods could apply to synthesized speech.


It's open source, and even if it wasn't, it would likely be pretty easy to remove those (not that you would get GPS) from an audio file.


Hmm, I don't know about that. The data rate used by civilian GPS (L1 C/A) is only 50 bps. The symbols are normally spread over a couple MHz of bandwidth to make it possible to recover them at levels below the thermal noise floor. I see no reason why the same thing couldn't be done at baseband, adding an imperceptible bit of extra noise to an audio signal.

Of course, you wouldn't encode real-time navigation data, but a small block of identifying text. Either way, though, someone without a copy of the spreading code isn't going to notice it or decode it. Given enough redundancy in both the time and frequency domains, removing it wouldn't be easy either.
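For what it's worth, here's a minimal sketch of that idea as direct-sequence spread spectrum at baseband (numpy only; the chip rate, amplitude, and payload framing are made up for illustration). Without the key that generates the spreading code, the watermark just looks like low-level noise:

    import numpy as np

    CHIPS_PER_BIT = 4096   # spreading factor: chips of redundancy per payload bit
    AMPLITUDE = 0.002      # watermark level, well below audibility for most content

    def spreading_code(key, n_chips):
        # +/-1 chip sequence derived from a secret key
        rng = np.random.default_rng(key)
        return rng.choice([-1.0, 1.0], size=n_chips)

    def embed(audio, bits, key):
        chips = spreading_code(key, CHIPS_PER_BIT * len(bits))
        # BPSK: each payload bit flips the sign of its block of chips
        symbols = np.repeat([1.0 if b else -1.0 for b in bits], CHIPS_PER_BIT)
        out = audio.copy()
        out[:len(chips)] += AMPLITUDE * chips * symbols  # assumes audio is long enough
        return out

    def extract(audio, n_bits, key):
        chips = spreading_code(key, CHIPS_PER_BIT * n_bits)
        # despread: multiply by the code and sum; the host audio averages toward zero
        sums = (audio[:len(chips)] * chips).reshape(n_bits, CHIPS_PER_BIT).sum(axis=1)
        return [1 if s > 0 else 0 for s in sums]

At 24 kHz with 4096 chips per bit that's roughly 6 bits per second, which is plenty for a short identifier over a few seconds of speech.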


The real problem is that bad actors would simply encode some other person's coordinates/metadata into the recordings they produce, and we'll have been trained by then to blindly accept the presence of these markers as strong evidence of guilt.


Nothing stops you from using closed source solutions for that.


How are the voices determined? Is there an option or is it just random/based on the prompts like "WOMAN"?


Amazing work so far! Do you have any sense about how difficult it would be to enable M1/M2 or CoreML support?


thanks, the model itself is a pretty vanilla gpt model based heavily on karpathy's nanogpt, so should not need too many bells and whistles to get it running on specific architectures. that said i have very little experience with platform specific development, so would looove some help from the community :)
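As a rough sketch of the device selection involved (standard PyTorch; whether every op the model needs is actually implemented on MPS is exactly the open question):

    import torch

    # Pick the best available backend; torch.backends.mps is the Apple-silicon
    # (M1/M2) GPU backend added in PyTorch 1.12.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    model = model.to(device)  # stand-in; Bark's own loading code would do this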


Could you stick a torch.compile in the inference and training code, maybe gated behind a flag? This should help AMD/Nvidia performance (and probably other vendors soon) significantly.

PyTorch themselves used nanoGPT training as demo for this: https://pytorch.org/blog/accelerating-large-language-models/
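A rough sketch of what that gating could look like (the --compile flag name and load_model are illustrative, not Bark's actual code):

    import argparse
    import torch

    parser = argparse.ArgumentParser()
    parser.add_argument("--compile", action="store_true",
                        help="wrap the model in torch.compile (PyTorch 2.0+)")
    args = parser.parse_args()

    model = load_model()  # hypothetical loader; stands in for Bark's model setup

    if args.compile:
        # TorchDynamo/Inductor compiles the forward pass; the first call pays
        # kernel-generation overhead, subsequent calls run the optimized graph
        model = torch.compile(model)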


A serious nod to Karpathy here. They could have chosen any other Transformer architecture, but chose perhaps the most reachable one - in the literal sense.


Would the same apply to a GGML port or are the architectures too different?


I like the emphasis tags; it's something you don't see with a lot of these transformer models. Things like [laughs] make a lot of sense. I could see hundreds or possibly thousands of emphasis-style tags being added to support a vast array of intonations in human speech, e.g. [yells], [shouts], [cries], [crying], [whispers], [sarcasm], etc.
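For reference, a short sketch of using those tags through Bark's Python API as shown in the repo's README (which tags the model responds to beyond the documented ones isn't guaranteed, it's whatever the model learned):

    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()  # downloads and caches the model weights on first use

    # Tags are interpreted by the model, not matched against a fixed vocabulary
    text = "Well... [laughs] I really wasn't expecting that. [sighs]"
    audio = generate_audio(text)
    write_wav("emphasis_demo.wav", SAMPLE_RATE, audio)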



