I would have expected the raspberry pi 3 to meet most needs for this market. Quad cores clocked at > 1GHz, USB ports for connecting a controller, analog audio out (or SPI/I2C to a better DAC if you're not happy with the quality), and even HDMI if you want to drive a display, or MIPI for embedded displays.
Is it not easy to drive headless, and that's why developers need a different device for this? Or does the FPGA really unlock that many more possibilities beyond what a quad-core, 1.2 GHz ARM SoC with a NEON FPU can achieve? Or is there demand for lower-power devices than the RPi for audio processing? Or am I missing something else entirely?
A month back I took a Raspberry Pi 3 B+, wired a MIDI keyboard to it, and then launched an open-source virtual synth, Helm. I was extremely disappointed by the performance: it was actually unplayable. So clearly there's still a lot that can be done in this domain, and there's probably a reason why already-commodified hardware like the RPi doesn't cut it.
One thing I would say is that FPGAs are real time by design. That always seems to be one complaint when people talk about dedicated hardware vs. running things on a computer. People can argue until the sun goes down about what constitutes 'real time' on a CPU: how your RTOS works, how you guarantee latency and response time. You're rarely if ever going to have that issue on FPGAs; you get control down to the clock-cycle level. The downside is that it's like assembly: you have to specify things down to that level.
If your task consists of a bunch of independent workloads, FPGAs are great at that.
I would say that, given the same end result, the FPGA can probably do it better and more efficiently, and beat the CPU on most important metrics. But the development needed to get to that point? If the CPU can physically do it, it's going to take a fraction of the development time, near guaranteed. Add on top of that data handling, content, and usability: things that are an afterthought on a computer are very non-trivial on an FPGA.
Hype, in the sense that it's anything more than a way to hit a higher level of performance at a smaller budget of dollars and/or watts. Which is a fine goal. But you can have a gazillion more oscillators and way better resolution with a desktop-class processor. None of these new FPGA synths will sound better than Omnisphere, but they will sound way better than the previous generation of hardware synths.
There's kind of a backlash against laptops in music right now - ex: modular and the DAW-less trend - so tons of people want hardware, and plenty of them are yelling "FPGA! FPGA!" like it makes a qualitative difference. Seriously, people on synth message boards are like, "what would this sound like if we did it with FPGA?"
I admit that sometimes there is something intangibly great about using hardware versus a laptop - but it's mostly about the interface, not the sound engine.
Ah, the blockchain of digital music.
Oversampling is a standard technique for modeling analogue synthesizers, but you don’t need to go crazy with it. You don’t need to run your DACs at 24 MHz to see the benefit (you can run them at 48 kHz just fine; Novation’s marketing materials mentioning that DACs “often” have aliasing issues is just weaseling, since aliasing issues are not hard to solve), and nothing about the specs tells me that they would be pushing the performance of existing DSPs.
So no, an FPGA is absolutely not necessary. Audio, even for high-end virtual analogue synthesizers, doesn’t require a lot of computational power. It doesn’t matter whether you have an oversampled DAC at 24 MHz or a regular old 48 kHz DAC; if you send them the same signal you won’t be able to tell them apart.
Keep in mind you can oversample in the digital domain.
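As a minimal sketch of that point (all names and the 16x ratio are illustrative; a real synth would use a proper polyphase or windowed-sinc filter rather than a moving average): render at a high internal rate, low-pass in the digital domain, then decimate back down to an ordinary DAC rate.

```python
# Sketch: render a naive sawtooth at 16x the target rate, low-pass it
# with a crude moving-average filter, then decimate back to 48 kHz.
# The oversampling happens entirely in the digital domain; the DAC
# only ever sees 48 kHz samples.

OSR = 16                       # oversampling ratio (illustrative)
TARGET_RATE = 48_000
RATE = TARGET_RATE * OSR       # 768 kHz internal rate

def naive_saw(freq, n_samples, rate):
    """Naive (aliasing-prone) sawtooth in [-1, 1)."""
    return [2.0 * ((i * freq / rate) % 1.0) - 1.0 for i in range(n_samples)]

def moving_average(x, width):
    """Crude low-pass: average `width` consecutive samples."""
    return [sum(x[i:i + width]) / width for i in range(len(x) - width + 1)]

def decimate(x, factor):
    """Keep every `factor`-th sample."""
    return x[::factor]

hi_rate = naive_saw(440.0, RATE // 100, RATE)   # 10 ms of audio at 768 kHz
filtered = moving_average(hi_rate, OSR)         # attenuate energy above 24 kHz
out = decimate(filtered, OSR)                   # back at 48 kHz for the DAC
```

The filter-then-decimate step is the whole trick: the aliases land in the part of the spectrum the filter removed, so the 48 kHz output is cleaner than a saw generated natively at 48 kHz.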
Edit: So I envision the ultra-high-rate sawtooth as essentially an n.m-bit fixed-point accumulator wrapping around by itself, with the n-bit (4? 8?) top part going over to the DAC as a kind of directly generated DXD-format signal.
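That accumulator idea can be sketched in a few lines. The bit widths here (4 integer bits to the DAC, 24 fractional) and the clock are illustrative assumptions, not anything from the thread's actual hardware:

```python
# Sketch of the n.m fixed-point phase accumulator: the accumulator
# wraps on overflow (that wrap IS the sawtooth reset), and only the
# top n bits are handed to the DAC. Widths chosen for illustration.

N_BITS = 4                     # top bits sent to the DAC
M_BITS = 24                    # fractional bits
WIDTH = N_BITS + M_BITS
MASK = (1 << WIDTH) - 1

def saw_samples(freq_hz, clock_hz, n):
    """Yield n DAC codes from a free-running wrapping phase accumulator."""
    # phase increment per clock tick, as an n.m fixed-point number
    step = round(freq_hz / clock_hz * (1 << WIDTH))
    acc = 0
    out = []
    for _ in range(n):
        acc = (acc + step) & MASK        # wrap-around resets the saw
        out.append(acc >> M_BITS)        # top n bits drive the DAC
    return out

codes = saw_samples(1_000_000, 24_000_000, 48)  # 1 MHz saw, 24 MHz clock
```

In hardware this is just an adder and a register; the DAC codes ramp 0..15 and wrap, which is exactly the "directly generated" staircase sawtooth described above.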
FPGAs are difficult to program, that’s a pretty steep cost to save you the trouble of downsampling your signal, something which is already pretty damn easy to do.
16x oversampling is fine, DSPs are fine. DXD is snake oil.
Patchbox OS: https://blokas.io/patchbox-os/
If the Novation Peak is running an 8-bit DAC at 24 MHz, they might be making it perform equivalently to a 16-bit DAC at 48 kHz—note the ratios. This is not anything new, it’s just something old that happens to have big numbers in it.
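A back-of-the-envelope check on those ratios, using textbook figures (so take this as a sanity check, not a statement about the Peak's actual converter): plain oversampling buys roughly half an effective bit per doubling of the rate, so going from 48 kHz to 24 MHz alone gains only about 4.5 bits; the rest of the gap has to come from noise shaping.

```python
import math

OSR = 24_000_000 / 48_000      # 500x oversampling ratio

# Plain oversampling: SNR improves ~3 dB (0.5 bit) per doubling of rate.
extra_bits_plain = 0.5 * math.log2(OSR)    # ~4.5 extra effective bits

# First-order noise shaping: asymptotically ~9 dB (1.5 bits) per doubling.
extra_bits_shaped = 1.5 * math.log2(OSR)   # ~13.4 extra effective bits
```

So an 8-bit DAC with oversampling alone lands around 12.5 effective bits; with even first-order noise shaping it can exceed 16, which is consistent with the "equivalently to a 16-bit DAC at 48 kHz" claim.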
For makers and hobbyists, the power efficiency is less of an issue, in general. But FPGAs may enable a battery powered application where it wouldn't be possible otherwise.
Also, for learning advanced FPGA programming, audio and DSP seem to be a fruitful playground.
If there's an FPGA, why have a general purpose CPU?
"That frees up the FPGA to do audio only."
The problem is that for most synthesizers and music applications, any problem you have is simply not going to require that much computational power by today’s standards.
A good way to make digital music toys is to put all your signal processing on a DSP, and then run your UI and random other code on a standard microcontroller. Microcontrollers are cheap, and this way your real-time DSP code isn’t fighting for memory or time slices with anything else. Some people are asking whether you can just use a Raspberry Pi. I’m sure the hardware is capable of cool things with audio, but I’m not sure that you guys would enjoy wrangling Linux into giving you reliable real-time performance.
Can you point us to what hardware (DSPs or DACs) we should use instead?
Can I use them as digital synths?
Can I use them for digital FX?
If I want to make a digital synth with lower latency than a RPi, what hardware should I use that won't be as hard as an FPGA to program?
If you’re building something one-shot or playing around, the RPi is great; it’s just that real-time audio on Linux is a pain.
If you want to play with real-time, low-latency effects I would get a Mac or Windows PC and hook it up to a $100 USB audio interface.
If you want to mass manufacture a box that makes noise, at that point it starts to make more sense to research DSPs. They often have weird architectures and the toolchains might not be great. But I suspect that more and more audio stuff is moving to general purpose CPUs and microcontrollers.
High-frequency measurement microphones are very expensive. If there were a way to churn a high-sampling-rate ADC through an FFT inside an FPGA, that would help a lot, and it might be simpler or cheaper than more sophisticated measurement setups. But beyond basic things like the Nyquist frequency, I don't really know what I'd need in a microphone and the analog/digital backend...
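The pipeline described (ADC samples fed through an FFT to get a spectrum) can be prototyped in software before committing to an FPGA. Here is an illustrative pure-Python radix-2 FFT applied to a toy "capture"; the FPGA version would be a pipelined butterfly structure, but the math is the same:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Toy "ADC capture": a pure tone sitting exactly on bin 5 of a
# 64-point window, standing in for real microphone samples.
N = 64
samples = [cmath.cos(2 * cmath.pi * 5 * i / N).real for i in range(N)]
spectrum = fft(samples)
peak_bin = max(range(N // 2), key=lambda k: abs(spectrum[k]))
```

For a measurement rig you would window the samples and average many frames; an FPGA buys you the throughput to do this continuously at MHz sample rates.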
But alas so many things out there to learn and dive into...
Microcontroller-based audio projects often use I2S for audio. Some don't output anything and just listen.
>2 DAC pins, 10 ADC pins
Pins 35 and 39, 2 channels of audio out.
I work with industrial controllers with analog outs on a daily basis, but when I wanted to build an audio board I had to do a bunch of research to find out what voltage range I needed to output, and what current I'd need to be able to source/sink.
An audio jack is a hardware API contract, basically: plug a wire in here and you will get audio out of it.
I don't understand why people would go so far as to be annoyed by the presence of a jack. Are you annoyed by the LED present on most project boards if you're planning to seal the board up in an enclosure, meaning you'll never see it?
To be fair it would add to the BOM and probably impose additional limits on the board design (large footprint, through hole pins) but I'd argue that the beginner accessibility, and the classification of this board as for audio rather than just another assorted MCU/FPGA board, would be worth it.
Of course, the board designers seem to have felt otherwise.
The DAC is only 12-bit.
You can always mix in software.
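A minimal sketch of what software mixing means here (names and the signed 16-bit sample format are illustrative assumptions): sum the voices, scale so the sum stays in range, and clamp defensively before handing the result to the two DAC channels.

```python
def mix(voices, gain=None):
    """Mix equal-length lists of signed 16-bit samples into one stream.

    Each voice is scaled by 1/len(voices) by default so the sum cannot
    overflow; the result is clamped to int16 range regardless.
    """
    if not voices:
        return []
    g = gain if gain is not None else 1.0 / len(voices)
    out = []
    for frame in zip(*voices):
        s = int(sum(frame) * g)
        out.append(max(-32768, min(32767, s)))  # clamp to int16 range
    return out

a = [10000, -10000, 32767]
b = [10000, 10000, 32767]
mixed = mix([a, b])   # -> [10000, 0, 32767]
```

So two stereo DAC pins are enough: however many oscillators or voices you run, they collapse to one sample stream per channel before reaching the hardware.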