
Simulating the TS808 Tube Screamer in LTSpice - cushychicken
http://cushychicken.github.io/ltspice-tube-screamer/
======
j4mie
Sorry if this is a stupid question (I don't know anything about LTSpice) but
would it be possible to actually feed a guitar signal through this simulation
and run it out into an amp, in real time?

~~~
jandrewrogers
No, these kinds of tools are not designed for real-time signal processing. It
is possible to build real-time signal processing software that accurately
models physical circuit behavior, but you need to optimize for computational
throughput and latency to make it practical.

For audio processing in particular, accurately simulating physical circuit
designs has resurrected a lot of classic hardware in virtual form. CPUs now
have enough computational power to run very convincing circuit emulations
in real time.

~~~
BMarkmann
By "convincing", what sort of delay would you be looking at on reasonably
powerful hardware? Enough to fool the ear into thinking it really was real-
time?

As a side question, are the sorts of modeling / math that would be done in a
scenario like this amenable to offloading to a GPU (if the transformations
would potentially be better handled by something like
[http://www.bealto.com/gpu-fft.html](http://www.bealto.com/gpu-fft.html), for
instance)? Is that a dumb question? :-)

~~~
dsharlet
Unfortunately, the math required for circuit simulation does not lend itself
to parallel implementations, whether it be on a GPU or otherwise. I built
LiveSPICE (linked elsewhere in the thread), so I fought a lot with this
problem :)

A small disclaimer on the following: I only attempted simulation of audio
amp/effect pedal circuits, which are generally small. There might be ways to
effectively parallelize (much) larger circuits. Another disclaimer: I worked
on this over a year ago, so some of it is fuzzy.

The basic issue is that circuit simulation is fundamentally a serial
operation. The circuit state at time step n is a function of the circuit state
at time step n-1.

The relationship between timesteps is basically a non-linear system of
equations, where the number of variables is the number of nodes in the
circuit. For a typical guitar effect pedal, this might be ~50 variables.

There's basically no opportunity for parallelism here. You can't parallelize
across the non-linear systems, because you don't know what to solve at
timestep n until you've solved for timestep n-1. Within each step, a 50
variable system of equations is probably too small to parallelize, even on the
CPU. Note that the technique generally used here is Newton's method, which is
also difficult to parallelize for the same reason.
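To make the serial structure concrete, here is a minimal Python sketch of a per-timestep Newton solve for a hypothetical one-node RC diode clipper, discretized with backward Euler. The component values and the circuit itself are invented for the example; this is not LiveSPICE's code.

```python
import math

# Hypothetical one-node circuit: a series resistor into a capacitor
# shunted by a diode (component values invented for the example).
R, C = 2.2e3, 10e-9          # ohms, farads
Is, Vt = 1e-12, 25.85e-3     # diode saturation current, thermal voltage
fs = 48_000.0
h = 1.0 / fs                 # backward-Euler timestep

def step(v_prev, u):
    """Advance one timestep: Newton's method on the KCL residual."""
    v = v_prev                           # warm-start from the last state
    for _ in range(50):
        # resistor + capacitor + diode currents must sum to zero
        f = (v - u) / R + C * (v - v_prev) / h + Is * (math.exp(v / Vt) - 1.0)
        df = 1.0 / R + C / h + (Is / Vt) * math.exp(v / Vt)
        dv = f / df
        v -= dv
        if abs(dv) < 1e-9:
            break
    return v

# The time loop is inherently serial: step n needs the result of step n-1.
v, out = 0.0, []
for n in range(480):
    u = 2.0 * math.sin(2.0 * math.pi * 1000.0 * n / fs)
    v = step(v, u)
    out.append(v)
# The diode clamps the positive half-wave near its forward voltage.
```

Note that the warm start from `v_prev` is what makes the per-sample Newton solve cheap, and it is exactly the thing that forces the time loop to run serially.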

The bottom line is that as far as I know, circuit simulation (at least for
audio) is entirely limited by single thread performance.

As a side note, the fun part of LiveSPICE is that even a 50 variable non-
linear system is far too big to solve at real time sample rates (48 kHz, plus
oversampling to avoid aliasing artifacts). The trick is to observe that only a
fraction of these variables are non-linear, most of them are related to the
other variables linearly. LiveSPICE solves for these linear relationships once
during initialization of the simulation, and eliminates them prior to solving
the non-linear system. This turns a typical ~50 variable system into a ~5
variable non-linear system (plus ~45 linear relationships). This is a massive
reduction in computation required (Newton's method involves repeatedly solving
a linear system, which is O(n^3) for simple methods).
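A rough numerical sketch of that elimination (my own illustration, not LiveSPICE's actual implementation): partition the unknowns into a large linear block and a small nonlinear block, fold the linear block into a Schur complement once at setup, and let Newton iterate only on the reduced system.

```python
import numpy as np

# Toy system: n1 unknowns appear only linearly, n2 appear nonlinearly.
#   A11 @ x1 + A12 @ x2 = b1      (many linear equations)
#   A21 @ x1 + g(x2)    = b2      (few nonlinear equations)
rng = np.random.default_rng(0)
n1, n2 = 8, 1

A11 = 3.0 * np.eye(n1) + 0.1 * rng.normal(size=(n1, n1))  # well-conditioned
A12 = 0.1 * rng.normal(size=(n1, n2))
A21 = 0.1 * rng.normal(size=(n2, n1))
b1 = rng.normal(size=n1)
b2 = np.array([2.0])

g = lambda x: 10.0 * x + x**3        # stand-in for a diode/tube law
dg = lambda x: 10.0 + 3.0 * x**2

# --- once, at simulation setup: eliminate x1 ------------------------
A11_inv = np.linalg.inv(A11)         # a real solver would keep an LU factor
S = A21 @ A11_inv @ A12              # Schur complement of the linear block
c = A21 @ (A11_inv @ b1)

# --- per Newton iteration: only an n2-dimensional solve -------------
x2 = np.zeros(n2)
for _ in range(100):
    f = c - S @ x2 + g(x2) - b2      # reduced residual, in x2 only
    J = -S + np.diag(dg(x2))
    delta = np.linalg.solve(J, f)
    x2 -= delta
    if np.linalg.norm(delta) < 1e-12:
        break

x1 = A11_inv @ (b1 - A12 @ x2)       # recover the linear unknowns once
```

The per-iteration linear solve shrinks from (n1 + n2)³ work to n2³, which is where the big speedup comes from.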

~~~
ZenoArrow
How do DSPs get around this problem? What gives DSPs the edge over CPUs for
real-time audio?

~~~
dsharlet
DSPs generally aren't that much more powerful than a CPU; their advantages are
usually one or more of the following:

\- Special instructions to help with common DSP tasks (e.g. circular buffers,
nice fixed point/rounding instructions, etc.).

\- Much lower power/better performance per watt.

\- Directly connected to audio in/out, to minimize latency.

\- Real time OS, or no OS at all.

However, the main takeaway from my comment should probably be that simulating
audio circuits by modeling them at such a low level (a circuit) is probably
not the right thing to do. There are higher level models of the behavior of
these kinds of effects/amps that are easier to implement and faster to
evaluate. Lots of CPU cycles are wasted simulating inaudible behavior in the
circuit. It's a really fun toy/project, but probably not something you'd want
to base a widely deployed simulation on.

------
vibrolax
For a circuit meant to be used with a guitar or similar instrument, it is
practical for many purposes to use a much lower sampling rate, e.g. 22 kHz.
The frequency content beyond 10 kHz is quite limited. I have used conventional
LTspice to simulate several vacuum tube guitar/bass/microphone preamps I have
built, for example [1]. I am tempted to try a real-time A-B comparison of the
actual device with a livespice simulation. [1]
[http://www.frontiernet.net/~jff/SonOfSVPCL/DIYSVTBassPreampl...](http://www.frontiernet.net/~jff/SonOfSVPCL/DIYSVTBassPreamplifier.html)

~~~
TheOtherHobbes
As soon as you create non-linearities in DSP you have to oversample to avoid
aliasing. In fact it's usual in DSP distortion circuits to oversample by a
factor of at least 4 or 8. (I've seen some devs use 256X oversampling...)

I can't think of a reason why this wouldn't also be needed in an LTspice
model. You can do distortion without it, but if you sample at 22k you're
guaranteed to get that harsh halo around the midrange and high end that
digital is famous for.
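The effect is easy to demonstrate numerically. This sketch (my own, with a brickwall FFT lowpass standing in for a real decimation filter) hard-clips a 5 kHz sine once at 48 kHz and once at 8x oversampling; in the naive version the 25 kHz harmonic folds back below Nyquist to 23 kHz.

```python
import numpy as np

fs, N = 48_000, 4800                 # base rate, 0.1 s of signal
f0 = 5_000                           # lands exactly on an FFT bin
clip = lambda x: np.clip(x, -0.5, 0.5)   # the nonlinearity

# Naive: clip at the base rate.  Harmonics above fs/2 (25 kHz, 35 kHz, ...)
# fold back into the band below Nyquist.
naive = clip(np.sin(2 * np.pi * f0 * np.arange(N) / fs))

# Oversampled: clip at 8x the rate, brickwall-lowpass below fs/2, decimate.
M = 8
hi = clip(np.sin(2 * np.pi * f0 * np.arange(N * M) / (fs * M)))
H = np.fft.rfft(hi)
H[N // 2:] = 0                       # remove everything above 24 kHz
decimated = np.fft.irfft(H)[::M]     # band-limited, so plain decimation is safe

# The 25 kHz harmonic aliases to 48 - 25 = 23 kHz in the naive version.
alias_bin = 23_000 * N // fs
F_naive = np.abs(np.fft.rfft(naive))
F_os = np.abs(np.fft.rfft(decimated))
```

Comparing `F_naive[alias_bin]` against `F_os[alias_bin]` shows the aliased component is far smaller in the oversampled path.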

Analog circuits don't have the same problem, because anything out-of-band
simply gets rolled off. Unlike a digital system, an analog circuit has no
Nyquist limit beyond which frequency components reflect back, so a physical
circuit should always sound smoother.

~~~
vibrolax
Good point, that nonlinear processes generate frequencies not present in the
input. One of the biggest challenges in making analog circuits with good-
sounding clipping distortion is designing proper lowpass and bandpass
characteristics into the coupling between stages to roll off undesired
artifacts.

The advantage in being able to actually listen to a real-time simulation
modeled at the level of resistors, capacitors, inductors, and tubes or
transistors is that one would be able to immediately hear the effect of
component value changes, rather than just see them on the transient or
frequency response plots.

Modeling the system at the level of DSP blocks would be more computationally
efficient, but then one is left to derive the equivalent analog circuit. It's
not rocket science, but it does add a level of indirection.

In any case, being able to simulate 5 triode sections and the linear filters
between them in real time at a 48 kHz sampling rate on a modern CPU might very
well be good enough for rock and roll. My use case is a design tool, not a
substitute for the physical device.

