Dadamachines Doppler: FPGA open music hardware (cdm.link)
79 points by shams93 on March 23, 2019 | 47 comments

This is cool, but I question whether the addition of an FPGA will really make enough of a difference to stand out against the other products on the market that can do realtime DSP, except maybe for battery-operated devices (where maybe the FPGA can be more power-efficient than a general-purpose CPU?).

I would have expected the raspberry pi 3 to meet most needs for this market. Quad cores clocked at > 1GHz, USB ports for connecting a controller, analog audio out (or SPI/I2C to a better DAC if you're not happy with the quality), and even HDMI if you want to drive a display, or MIPI for embedded displays.

Is it not easy to drive headless, and is that why developers need a different device for this? Or does the FPGA really unlock that many extra possibilities beyond what a quad-core, 1.2 GHz ARM SoC with a NEON FPU can achieve? Or is there demand for lower-power devices than the RPi for audio processing? Or am I missing something else entirely?

A month back I took a raspberry pi 3 B+, wired a midi keyboard to it, and then launched an open source virtual synth, Helm. I was extremely disappointed by the perf: it was actually unplayable. So clearly there's still a lot that can be done in this domain, and there's probably a reason why already-commodified hardware like the rpi doesn't cut it.

I'm going to start with this. A CPU can offer you way more in a practical sense, because of the ease with which you can handle so many things, and the fact that there is so much more development and expertise that you can tap. FPGAs aren't super hard, but I don't think it's a leap to say they're still pretty arcane.

One thing I would say: FPGAs are real time by design. That always seems to be one complaint when you are talking about dedicated hardware vs running things on a computer. People can argue until the sun goes down about what constitutes 'real time' on a CPU: how your RTOS works, how you guarantee latency and response time. You're rarely if ever going to have that issue with FPGAs; you get control down to the clock-cycle level. The downside is that, like assembly, you have to specify it down to that level.

If your task consists of a bunch of independent workloads, FPGAs are great at that.

I would say, given the same end result, the FPGA can probably do it better, more efficiently, probably beat it in most important metrics. But the development getting to that point? If the CPU can physically do it, it's going to be a fraction of the development time, near guaranteed. Add on top of that, data handling, content, usability, things that are an afterthought on a computer are very non-trivial on an FPGA.

Check out the hype on the Waldorf Kyra (in prototype this was known as the Valkyrie). The Novation Peak is another recent FPGA-based device. Take a look at https://novationmusic.com/peak-explained for a bit more of an explanation of why FPGA is necessary.

As someone who is pretty hyped up for the Kyra, this stuff is really just hype.

Hype, in the sense that it's presented as anything more than a way to hit a higher level of performance at a smaller budget of dollars and/or watts. Which is a fine goal. But you can have a gazillion more oscillators and way better resolution with a desktop-class processor. None of these new FPGA synths will sound better than Omnisphere, but they will sound way better than the previous generation of hardware synths.

There's kind of a backlash against laptops in music right now - ex: modular and the DAW-less trend - so tons of people want hardware, and plenty of them are yelling "FPGA! FPGA!" like it makes a qualitative difference. Seriously, people on synth message boards are like, "what would this sound like if we did it with FPGA?"

I admit that sometimes there is something intangibly great about using hardware versus a laptop - but it's mostly about the interface, not the sound engine.

> Seriously, people on synth message boards are like, "what would this sound like if we did it with FPGA?"

Ah, the blockchain of digital music.

Looking at the Novation Peak, that’s just a bunch of marketing materials; from an engineering standpoint it’s utter garbage unless they know something everyone else doesn’t.

Oversampling is a standard technique for modeling analogue synthesizers, but you don’t need to go crazy with it. You don’t need to run your DACs at 24 MHz to see the benefit (you can run them at 48 kHz just fine; Novation’s marketing materials mentioning that DACs “often” have aliasing issues is just weaseling, since aliasing issues are not hard to solve), and nothing about the specs tells me that they would be pushing the performance of existing DSPs.

So no, FPGA is absolutely not necessary. Audio, even a high-end virtual analogue synthesizer, doesn’t require a lot of computational power. It doesn’t matter whether you have an oversampled DAC at 24 MHz or a regular old 48 kHz DAC: if you send them the same signals you won’t be able to tell them apart.

Keep in mind you can oversample in the digital domain.
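To make that concrete, here's a quick numpy sketch (all numbers invented; this has nothing to do with what's inside the Peak): generate a naive sawtooth once directly at 48 kHz, and once at 16x followed by a windowed-sinc low-pass and 16:1 decimation. Measured against an ideally band-limited sawtooth, the oversampled digital path has far less alias energy.

```python
import numpy as np

FS = 48_000     # target sample rate (assumption for illustration)
OS = 16         # oversampling factor
F0 = 1234.5     # oscillator frequency, deliberately not a divisor of FS
N = 4_800       # 0.1 s of output

def naive_saw(f, rate, n):
    """Trivial, non-band-limited sawtooth: just a wrapping phase ramp."""
    t = np.arange(n) / rate
    return 2.0 * ((f * t) % 1.0) - 1.0

def ideal_saw(f, rate, n):
    """Band-limited reference: additive synthesis from the saw's Fourier series."""
    t = np.arange(n) / rate
    out = np.zeros(n)
    for k in range(1, int((rate / 2) // f) + 1):
        out -= (2.0 / np.pi) * np.sin(2.0 * np.pi * k * f * t) / k
    return out

# 1) Naive generation directly at 48 kHz: every harmonic above Nyquist folds back.
direct = naive_saw(F0, FS, N)

# 2) Naive generation at 16x, then windowed-sinc low-pass and 16:1 decimation.
TAPS = 513                               # group delay 256 samples = 16 output samples
m = np.arange(TAPS) - (TAPS - 1) / 2
fc = 0.5 / OS                            # cutoff at the target Nyquist, in the 16x domain
h = 2.0 * fc * np.sinc(2.0 * fc * m) * np.hamming(TAPS)
h /= h.sum()                             # normalize DC gain
hi = naive_saw(F0, FS * OS, N * OS + TAPS)
decimated = np.convolve(hi, h)[(TAPS - 1) // 2 :: OS][:N]

# Compare both against the band-limited reference (skip the filter's warm-up).
ref = ideal_saw(F0, FS, N)
rms = lambda x: np.sqrt(np.mean(x ** 2))
err_direct = rms(direct[64:] - ref[64:])
err_oversampled = rms(decimated[64:] - ref[64:])
```

The cost of the 16x path is a decent FIR and a decimator, which is exactly the kind of fixed-function plumbing an FPGA is comfortable with, but DSPs and CPUs do it routinely too.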

Running the oscillators at a ridiculously high rate allows the use of trivial, non-band-limited algorithms. Band-limiting is not a solved problem; all the algorithms get messy in different dynamic scenarios. As for why you'd design a custom FPGA bitstream for this: it could actually be a reasonable fit if you make the waveform generators directly in the LUTs and latches and run the whole thing at 24 MHz.

Can you give an example of one of these algorithms you’re talking about? I don’t understand why these would work at 512x but not at, say, 4x or 16x.

At 4x you would need something like poly-BLEP with 24/32-bit fixed or floating point support to generate a nice bandlimited sawtooth, and features like oscillator sync can still cause problems. I recall hearing at Clavia HQ that the original Nord Lead used 16x oversampling with a naive generator (based on hearing it once and my memory over 15 years). Upping to 512x can be reasonable on an FPGA, where you can aggressively minimize wordlength and match the DAC. Is there any advantage over, say, 32x? Maybe not, but feeding the DAC 1:1 can save some complexity.

Edit: So I envision the ultra-high-rate sawtooth as essentially an n.m-bit fixed-point accumulator wrapping around by itself, with the n-bit (4? 8?) top part going over to the DAC as a kind of directly generated DXD-format signal.
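A tiny Python simulation of that accumulator, with made-up widths and clock: a fixed-width integer phase register that wraps on overflow, with the top bits going straight out as the ramp.

```python
ACC_BITS = 24          # total accumulator width, i.e. the "n.m" bits (made up)
DAC_BITS = 8           # top n bits that would go straight to the DAC
F_CLK = 24_000_000     # hypothetical 24 MHz sample clock
F0 = 440.0             # target oscillator frequency

# Phase increment per clock tick, rounded to the accumulator's resolution.
inc = round(F0 / F_CLK * (1 << ACC_BITS))

acc = 0
samples = []
for _ in range(60_000):
    acc = (acc + inc) & ((1 << ACC_BITS) - 1)      # wraps around by itself
    samples.append(acc >> (ACC_BITS - DAC_BITS))   # top bits: a rising 8-bit ramp
```

With these numbers the actual output frequency is inc / 2^24 * 24 MHz, about 440.6 Hz rather than 440; the tuning error shrinks as you widen the accumulator, which is why the m fractional bits matter even though they never reach the DAC.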

So the only advantage is that you can feed the DAC 1:1?

FPGAs are difficult to program, that’s a pretty steep cost to save you the trouble of downsampling your signal, something which is already pretty damn easy to do.

16x oversampling is fine, DSPs are fine. DXD is snake oil.

Sure you can make an equally good synth without fpgas or dxd. I'm not trying to sell you this design. However it's educational to think about when and how it makes technical sense. I do agree that marketing turns it into brain damaged bullet points pretty often.

You've got to think outside the box. With FPGAs, we can get into new algorithms like variable-sized lattice gas synthesis.

Here are two audio-focused, Raspberry Pi 3 based, no-FPGA projects.

Zynthian: http://zynthian.com/

Patchbox OS: https://blokas.io/patchbox-os/

Thanks! I was hoping someone would share resources like these. I'll try out Zynthian w/o the extra hardware this weekend :-)

My takeaway from the associated links in this thread is that the key benefit to an FPGA is that it runs at a much higher rate than DSP-based technology, and this has a direct impact on the clarity of sound.

I saw some links in this thread, but they look like marketing materials to me. Aliasing is a more or less solved problem, and the claim that running a DAC at 24 MHz makes it “better” in any measurable way seems ridiculous at face value. I don’t know what is actually going on, but as far as I know they’re just taking a delta-sigma DAC and talking big talk about the oversampling. In truth, oversampling here is usually just a boring engineering choice where you trade off bandwidth and SNR.

If the Novation Peak is running an 8-bit DAC at 24 MHz, they might be making it perform equivalently to a 16-bit DAC at 48 kHz—note the ratios. This is not anything new, it’s just something old that happens to have big numbers in it.
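Back-of-the-envelope with the standard textbook oversampling formulas (this says nothing about what Novation actually ships):

```python
import math

osr = 24_000_000 / 48_000     # oversampling ratio: 500x

# Plain oversampling spreads the quantization noise over a wider band;
# after decimation, SNR gain = 10*log10(OSR), i.e. ~0.5 bit per doubling.
gain_plain_db = 10 * math.log10(osr)
bits_plain = gain_plain_db / 6.02     # ~4.5 extra bits: 8-bit DAC acts like ~12.5 bits

# A first-order delta-sigma modulator additionally shapes the noise out of band:
# SNR gain = 30*log10(OSR) - 5.17 dB.
gain_ds_db = 30 * math.log10(osr) - 5.17
bits_ds = gain_ds_db / 6.02           # ~12.6 extra bits: ~20 effective bits
```

So plain 500x oversampling only buys an 8-bit converter about 4.5 bits, but with even first-order noise shaping it can beat 16-bit at 48 kHz on paper, which I assume is the "note the ratios" point. Modern delta-sigma DACs do exactly this internally anyway.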

It's the extremely low latency. Yes, you can do a lot with just a Pi, but we're still talking around 20 ms of latency. With this setup we're talking about digital hardware synths running at sub-2 ms latency.

The way I understand it is that you can offload a lot of DSP-related work to the FPGA, so that a microcontroller board can do more sophisticated stuff than it otherwise could.

For makers and hobbyists, the power efficiency is less of an issue, in general. But FPGAs may enable a battery powered application where it wouldn't be possible otherwise.

Also, for learning advanced FPGA programming, audio and DSP seem to be a fruitful playground.

I was wondering the opposite direction.

If there's an FPGA, why have a general purpose CPU?

From the article:

"That frees up the FPGA to do audio only."

The engineering challenges here have already been solved with DACs, and FPGAs really bring nothing new to the table. Anything you want to do with audio signal processing is going to involve a lot of multiplication and table lookups. Sure, you can do that with an FPGA, but what you’ve basically got are pieces of DSP stuck in your FPGA that you can wire together with Verilog instead of C. Not exactly something to write home about, since it’s not like you look at your off the shelf DSPs these days and say, “Gosh, these definitely have enough computational power for my needs but I really wish that they were more expensive and more difficult to program.”
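For a sense of what I mean by multiplication and table lookups, the inner loop of a bog-standard wavetable oscillator looks like this (a Python sketch with an invented 1024-entry sine table):

```python
import math

TABLE_SIZE = 1024
TABLE = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(freq, sample_rate, n):
    """Two table lookups and one multiply-add per sample: most of audio DSP."""
    out = []
    phase = 0.0
    step = freq * TABLE_SIZE / sample_rate
    for _ in range(n):
        i = int(phase)
        frac = phase - i
        a = TABLE[i]
        b = TABLE[(i + 1) % TABLE_SIZE]
        out.append(a + frac * (b - a))        # linear interpolation
        phase = (phase + step) % TABLE_SIZE
    return out

samples = wavetable_osc(440.0, 48_000, 4_800)  # 0.1 s of a 440 Hz sine
```

Every DSP ever made has a single-cycle multiply-accumulate for exactly this loop; there's nothing here that needs reconfigurable logic.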

The problem is that for most synthesizers and music applications, any problem you have is simply not going to require that much computational power by today’s standards.

A good way to make digital music toys is to put all your signal processing on a DSP, and then run your UI and random other code on a standard microcontroller. Microcontrollers are cheap, and this way your real-time DSP code isn’t fighting for memory or time slices with anything else. Some people are asking whether you can just use a Raspberry Pi. I’m sure the hardware is capable of cool things with audio, but I’m not sure that you guys would enjoy wrangling Linux into giving you reliable real-time performance.

> Some people are asking whether you can just use a Raspberry Pi

Can you point us to what hardware (DSPs or DACs) we should use instead?

Can I use them as digital synths?

Can I use them for digital FX?

If I want to make a digital synth with lower latency than a RPi, what hardware should I use that won't be as hard as an FPGA to program?

The comment was mostly aimed at the poor state of audio on Linux. I think it’s important to understand what your goals are when you’re doing this. Are you building something? Will it be mass-manufactured? Does it need to be reliable? Does it need to be low-latency?

If you’re building something one-off or playing around, the RPi is great; it’s just that real-time audio on Linux is a pain.

If you want to play with real-time, low-latency effects I would get a Mac or Windows PC and hook it up to a $100 USB audio interface.

If you want to mass manufacture a box that makes noise, at that point it starts to make more sense to research DSPs. They often have weird architectures and the toolchains might not be great. But I suspect that more and more audio stuff is moving to general purpose CPUs and microcontrollers.

Can this do recording, too? I'm wondering how to record and process rodent vocalizations in ultrasound (around 20 kHz) in the cheapest way possible, akin to the "Deep Squeak" project. There are some interesting applications in lab animal welfare, psychology, and even training for land mine or TBC detection.

High-frequency measurement microphones are very expensive. If there were a way to churn a high-sampling-rate ADC through an FFT inside an FPGA, that would help a lot, and it might be simpler or cheaper than more sophisticated measurement gear. But beyond basic things like the Nyquist frequency I don't really know what I'd need in a microphone and the analog/digital backend...
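For what it's worth, the FFT side of this is cheap even without an FPGA. Here's a numpy short-time-FFT sketch over synthetic stand-in data (an assumed 192 kHz sample rate and a fake 22 kHz "call"; real recordings and rates will differ). The genuinely hard and expensive part is the microphone and analog front end:

```python
import numpy as np

FS = 192_000    # assumed ADC rate, comfortably above 2x a ~20 kHz call
FRAME = 4096    # ~21 ms frames, ~47 Hz per FFT bin

# Stand-in for a recording: 0.5 s of noise with a 22 kHz tone burst as a fake call.
rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(FS // 2)
t = np.arange(signal.size) / FS
signal[20_000:60_000] += np.sin(2.0 * np.pi * 22_000.0 * t[20_000:60_000])

# Short-time FFT: Hann-windowed, half-overlapping frames, magnitude per frame.
window = np.hanning(FRAME)
freqs = np.fft.rfftfreq(FRAME, 1.0 / FS)
frames = [signal[i:i + FRAME] * window
          for i in range(0, signal.size - FRAME, FRAME // 2)]
spec = np.array([np.abs(np.fft.rfft(f)) for f in frames])

# Frequency of the strongest bin in the loudest frame:
strongest = spec.max(axis=1).argmax()
peak_hz = freqs[spec[strongest].argmax()]
```

A real pipeline would add a proper call detector on top, but nothing here needs exotic compute at these sample rates; a Pi-class CPU keeps up easily.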

Ask HN: For someone who has done a lot of microcontroller development but never worked with FPGAs, does this look like a good platform for me to get my feet wet? I'm not sure I need an FPGA for anything, but it would be interesting to at least learn the basics.

At the moment, if I were to start with FPGAs I would probably get a TinyFPGA. It has an open source toolchain, and at that price and form factor it could fit into a lot of projects, even as a slightly expensive customized port expander.

But alas so many things out there to learn and dive into...

If you are not sure you need it, you don’t. Development in low-level hardware description languages is a different paradigm than microcontroller programming. You can download the Xilinx IDE and just simulate some code; that would be enough to know if you want to continue. Digilent is a good provider of hardware: https://store.digilentinc.com/fpga-development-boards-kits-f... If you like Intel (Altera), grab some board from Terasic.com. The two are popular, and every possible question has been asked or answered in their forums and blogs. There might be cheaper boards or some non-vendor IDEs, but good support and available information are crucial in your learning phase.

If you're interested in a general purpose FPGA I really like the Alchitry boards[1]. They're pretty straightforward and they have an IDE that simplifies the whole FPGA development process significantly. It runs on top of the FPGA manufacturer's toolchain, so when you feel like you need to move on to more complicated projects you can transition to tools like Vivado.

[1] https://alchitry.com/

The fact that it doesn't have an audio output on board by default seems like a really odd omission.

What kind of audio output exactly? How many? A hardwired headphone jack (of which there are various types) would take up space and be useless to many users.

Microcontroller-based audio projects often use I2S for audio. Some don't output anything and just listen.

Come again?

> 2 DAC pins, 10 ADC pins

Pins 35 and 39, 2 channels of audio out.

I expect GP meant a 3.5mm audio jack (or similar).

Yes, I did.

I work with industrial controllers with analog outs on a daily basis, but when I wanted to build an audio board I had to do a bunch of research to find out what voltage range I needed to output, and what current I'd need to be able to source/sink.

An audio jack is a hardware API contract, basically: plug a wire in here and you will get audio out of it.

I don't understand why people would go so far as to be annoyed by the presence of a jack. Are you annoyed by the LED present on most project boards, if you're planning to seal up the board in an enclosure that means you'll never see it?

To be fair it would add to the BOM and probably impose additional limits on the board design (large footprint, through hole pins) but I'd argue that the beginner accessibility, and the classification of this board as for audio rather than just another assorted MCU/FPGA board, would be worth it.

Of course, the board designers seem to have felt otherwise.

I would be rather annoyed if a project board like this had a hard-wired audio jack.

The specs for this specific Cortex-M4 incarnation:


The DAC is only 12-bit.

Only 12-bit is fine because it's 1 MSPS. With DACs you can trade sample rate for precision.

Ah, that finally makes sense. Thanks for that.

To have an impression of the parallel signal processing power of this board, how many 16 bit multipliers fit on the FPGA?

Didn’t see it mentioned, but bela.io is also a good hardware platform that focuses on low latency. Definitely recommend it.

Anyone know an affordable microcontroller with a few good DAC channels?

The STM32F4 Discovery board -- among others from ST -- has a decent audio DAC on board.


Check here for a Teensy that will fit your bill. https://www.pjrc.com/teensy/td_libs_Audio.html

An STM32 usually has a few DAC channels, but they often only have 8 or so bits. No idea how fast they are either.

Does "6 DSP Cores" imply only 6-note polyphony if developing tone generators?

no. does having one cpu mean 1-note polyphony when using a laptop to generate tones?

you can always mix in software.

well... you never know with these hobby DSPs. Once bitten, twice shy.

well, oscillators are pretty darned lightweight ...
