
Breakthrough in inverse Laplace transform procedures - pabo
http://inverselaplace.org/
======
jiggawatts
I tried to get the Mathematica version to do something useful, but in typical
"mathematician minimalist style" it squeezed everything into one enormous,
terse, but useless blob. The code on GitHub works as a demo only. Even after
trying a few different syntax variations for about twenty minutes, I couldn't
figure out how to feed the "ILT" function something that would spit out what I
expect.

In case the original authors ever come across this article: There is a
standard for writing Mathematica modules! Please take a look at some other
modules available online, and see how they split the code into small, reusable
functions.

Even the name is too short. In the Mathematica naming convention it should be
called "InverseLaplaceTransformCME[...]" or something like that. Ideally, use
the same calling convention as the built-in function, documented here:
[https://reference.wolfram.com/language/ref/InverseLaplaceTra...](https://reference.wolfram.com/language/ref/InverseLaplaceTransform.html)

This would allow your function to be a drop-in replacement, allowing users to
switch between the symbolic and approximate versions trivially.

You may even want to contact Wolfram Research! They just implemented a new
"Asymptotics" module that includes approximate inverse Laplace transforms as a
feature. See:
[https://reference.wolfram.com/language/guide/Asymptotics.htm...](https://reference.wolfram.com/language/guide/Asymptotics.html)

They might add your approach into the 12.2 release, which would mean that many
thousands of people could automatically benefit from your hard work!

~~~
james412
There must be some name for the effect seen here so regularly, which might
best be described as the contrast in magnitude between the thing being posted
and the comments made about it.

In this case we have a fundamental contribution to mathematics that is
succinctly captured in 252 words, producing a 209 word complaint essentially
about whitespace.

~~~
jiggawatts
I can reformat whitespace if I care to read through a piece of misaligned
code, but that's not at all the criticism here. You missed my point entirely.

This is like putting wooden wagon wheels on an F1 race car, and then
complaining that people should appreciate the sleek lines of the car and stop
commenting about the wheels.

The code posted on GitHub is wasting the effort and the talent that went into
this algorithm. It's optimising for brevity, something utterly useless, over
utility, which is essential.

~~~
james412
The race car in this instance would be the paper: it is well presented and,
assuming it survives scrutiny, eternal. One particular manifestation of it for
one contemporary system might better be thought of as the garnish on the free
salad bowl placed next to the car.

~~~
rudedogg
> The race car in this instance would be the paper, it is well presented and
> assuming it survives scrutiny, eternal.

The parent is pointing out that the code is the only approachable version of
the paper for many people. Making it read like mathematics renders it useless,
since it then only speaks to the audience that can already understand the
paper.

I fall into this category. If the code had reasonable variable names and
comments, I could probably figure out how it works. But since it reads like
the wall of LaTeX on the linked page, I can't pierce it without learning a lot
of mathematics. I think that says a lot about mathematical notation.

------
rollulus
A little ELI5 for those who haven't had Laplace transforms at school, from
someone who only had a Laplace 101 course, so for what it's worth: Laplace
transforms allow you to convert differential equations into easier equations,
and back: the differentials and integrals become multiplications and
divisions. So you can take a differential equation, transform it into the
Laplace domain, manipulate it, and convert it back. And that's cool because
differential equations tend to appear everywhere, for instance to model
springs, electrical circuits with caps and coils, the surface of a soap
bubble, heat flow in a metal rod, etc. A sibling is the z-transform, which is
like the digital version; it is used, for instance, to design digital audio
filters. I'm sure some math wizards here can elaborate and correct me.
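
A minimal sketch of that workflow in SymPy (assuming SymPy is available; the
ODE and its initial condition are made up for illustration):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Differentiation in time becomes multiplication by s: for y'(t) + y(t) = 0
# with y(0) = 1, transforming term by term gives s*Y(s) - y(0) + Y(s) = 0,
# which is plain algebra in Y(s).
Y = sp.symbols("Y")
Ys = sp.solve(sp.Eq(s * Y - 1 + Y, 0), Y)[0]   # Y(s) = 1/(s + 1)

# Transforming back recovers the time-domain solution.
y = sp.inverse_laplace_transform(Ys, s, t)
print(y)   # exp(-t)  (SymPy may include a Heaviside(t) step factor)
```
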

~~~
kpmah
I've had some problems understanding the Laplace transform. Maybe somebody
here can point me towards some material.

I have an interest in understanding how IIR filters are designed, and I
always get stuck at this part in DSP books. The Laplace transform is used, but
as well as finding the mathematics difficult, I don't really understand why it
is being used at all. I think it is trying to replicate the effect of an
analog circuit?

~~~
Gibbon1
You can describe a circuit by its time domain behavior. Or you can describe
the circuit by its frequency domain behavior. Both are valid and congruent.

The thing is a lot of questions are easy to answer in the frequency domain.

For instance, you want to know if a circuit with feedback will oscillate.
That's hard to answer using time domain equations. But in the frequency domain
there is a simple constraint: if, at every frequency where the gain is greater
than one, the phase shift is less than 180 degrees, the circuit won't
oscillate. This is obviously rather useful.
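
A tiny numerical sketch of that check, using a hypothetical loop gain
L(s) = K/(s+1)^3 (the transfer function and the gain K are made-up
illustration values, not from the comment):

```python
import math
import numpy as np

# Hypothetical loop gain L(s) = K / (s + 1)^3, evaluated on the frequency
# axis (s = j*omega).
K = 4.0
w = np.linspace(1e-3, 100, 200_001)
L = K / (1j * w + 1) ** 3

# Find the gain-crossover frequency, where |L| = 1, and read off the phase.
mag = np.abs(L)
i = np.argmin(np.abs(mag - 1.0))
phase_deg = math.degrees(np.angle(L[i]))

# Less than 180 degrees of phase lag at crossover: by the criterion above,
# this feedback loop won't oscillate.
print(w[i], phase_deg)
```
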

Also, a problem with a lot of books is that the authors get so caught up in
describing how something is done that they never explain why it is done. I've
often found the answer is simple, yet opaque and frustratingly never talked
about.

~~~
pantulis
This. I remember having adequate cursory knowledge of the Fourier Transform,
to the point of understanding the value of FFT algorithms, but the Laplace
Transform was explained so badly that I failed my robotics classes.

~~~
TheOtherHobbes
If you have an electronic circuit, you can model each element with a
differential equation. E.g. voltage across a capacitor is modelled as the
integral of current, voltage across an inductor is dI/dt.

This is a useful fact for a simple circuit in a classroom, but the
differential equations for any circuit with more than a few components soon
become insanely complex.

With the Laplace transform you (more or less) replace an integral with 1/s and
a differential with s, plus some constants derived from the component values.

Then you can simplify for s, and use the Inverse Laplace Transform to convert
the final expression in s into an expression in t.

You have now solved an insanely complex differential equation with some basic
algebra, and your final expression in t - with component constants, and some
exponentials that appear after the inverse transform - accurately models how
the circuit responds over time.

There's also a related fairly simple trick for converting the s-domain
representation into a frequency/phase plot which tells you how the circuit
operates in the frequency domain.

And another related fairly simple trick for converting the continuous s-domain
into the z-domain for DSP calculations over a sampled time series.

Because the same theory also applies in other domains - spring/mass systems,
and so on - you can use the same technique there too.
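
The recipe above can be sketched in SymPy for the classic RC low-pass driven
by a unit step (R = C = 1 are hypothetical values chosen to keep the algebra
readable):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
R, C = 1, 1   # hypothetical component values for the sketch

# Step input: V_in(t) = 1 for t > 0, so V_in(s) = 1/s. The capacitor's
# integral of current becomes 1/s, so the voltage divider gives
# V_out(s) = V_in(s) * 1/(1 + s*R*C): pure algebra in s, no calculus.
Vout = (1 / s) * 1 / (1 + s * R * C)

# Inverse transform back to the time domain.
v = sp.inverse_laplace_transform(Vout, s, t)
print(sp.simplify(v))   # the familiar RC charging curve, 1 - exp(-t)
```
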

~~~
Gibbon1
Yes, this is very good. As is the point that restating the problem in a
different domain is a very common way to make a problem tractable.

Examples:

Converting numbers to logs allows you to multiply and divide by mere addition
and subtraction. If you wonder why RF engineers represent power in dB, this is
why.

Mapping an equation in terms of forces integrated over a path to one using
vectors and energy.
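
The log example is easy to make concrete (the gains are made-up numbers):

```python
import math

# Two amplifier stages with power gains of 100x and 0.5x.
g1, g2 = 100.0, 0.5

# Multiplicative domain: the total gain is a product.
total = g1 * g2                      # 50.0

# Log (dB) domain: the same computation is mere addition.
db = lambda g: 10 * math.log10(g)
total_db = db(g1) + db(g2)           # 20 dB + (-3.01 dB)

assert abs(total_db - db(total)) < 1e-12
```
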

------
cesarosum
rollulus gave a good summary of Laplace transforms and what they do. For some
more context, they appear regularly in applied probability (e.g. finance,
insurance, physical models including dams). A typical problem is dealing with
sums of non-negative random variables. Let's say you want the distribution of
n independent copies of a non-negative random variable with distribution
function F. The hard way is the n-fold convolution or essentially evaluating
an n-dimensional integral. The easy way is using the Laplace transform of F
and simply raising it to the power of n.

The result isn't always invertible analytically, but you can almost always
invert it numerically and this is why techniques like the one outlined in the
paper are so important.
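
A quick numerical sanity check of the "hard way vs. easy way" claim, using
rate-1 exponential random variables (my choice of distribution, for
illustration): the n-fold convolution should match the closed form the
transform shortcut gives, namely the Erlang density.

```python
import math
import numpy as np

# Sum of n i.i.d. Exp(1) variables.
n, dt = 3, 0.001
t = np.arange(0, 20, dt)
f = np.exp(-t)                       # Exp(1) density; its LT is 1/(1+s)

# Hard way: n-fold convolution of the density with itself.
g = f.copy()
for _ in range(n - 1):
    g = np.convolve(g, f)[: len(t)] * dt

# Easy way: raise the transform to the n-th power, 1/(1+s)^n, whose known
# inverse is the Erlang density t^(n-1) * exp(-t) / (n-1)!.
erlang = t ** (n - 1) * np.exp(-t) / math.factorial(n - 1)

print(np.max(np.abs(g - erlang)))    # only a small discretization error
```
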

This is a fantastic post and I thoroughly recommend reading it and the 2019
paper that summarises all their work for several reasons:

1. Very clear exposition of previous work and their own.

2. Clear evaluation metrics.

3. They've even made it easy for you to replicate their work and results.

------
dls2016
I've never understood the use of the Laplace transform. Perhaps that's due to
my mathematical exposure (theoretical qualitative analysis of pdes). Since the
Laplace transform lacks the duality of the Fourier transform, it doesn't seem
to have a place in research mathematics. But I can probably think of a dozen
fundamental uses of the Fourier transform, from Bourgain spaces to evaluating
oscillatory integrals. And if you're working on some manifold with curvature
then you generally need to be familiar with the eigenfunctions of the
Laplacian on that manifold... not the basis of the Laplace transform.

I also know a bit of signal processing/numerical analysis, and I'm not
familiar with any practical uses of the Laplace transform there. I don't
believe it's used in the numerical solution of pdes or odes, whereas spectral
methods are a huge area of study and (until recently, I think) were used in
the GFS weather model. And most time series analysis tools either apply the
Fourier transform or bail out of this approach and use statistical tools.

My version of Greenspun's 10th rule goes: any sufficiently complex program
includes an FFT.

Can anyone help me out here? Is there a problem/theorem the Laplace transform
solves/proves which the Fourier transform doesn't?

~~~
teleforce
For a good explanation of the Laplace Transform, please check the video
presentation link that I've provided in my other comments.

As for the Laplace Transform, it is mainly used in control system
applications, where the input/output includes transient/damping/forcing signal
waveforms (on and off the unit circle), not only clean steady-state signal
waveforms (on the unit circle). This paper provides a good overview of sample
usages of the Laplace Transform in Electrical and Electronics Engineering [1].

If by the duality of the Fourier Transform you mean FFT/IFFT, the Laplace
Transform has an equivalent in the form of the Chirp-Z Transform (CZT) and the
recently discovered inverse CZT (ICZT); the original HN discussion of that
discovery is also linked in my other comments. For potential useful
applications of CZT/ICZT, please check the comments on the other/older HN
topic in [2].

Perhaps we should just wait and watch for the torrent of patent filings on
this CZT/ICZT topic if the claim of ICZT is really true and feasible.

[1][http://sces.phys.utk.edu/~moreo/mm08/sarina.pdf](http://sces.phys.utk.edu/~moreo/mm08/sarina.pdf)

[2][https://news.ycombinator.com/item?id=21232296](https://news.ycombinator.com/item?id=21232296)

~~~
dls2016
I see your point about stability analysis.

I will take a look into CZT, I recall the HN post at the time but didn't look
into it much.

------
dzdt
This is an interesting promotion of an applied math result. From the
promotional material it looks promising, though the unusual promotional
approach makes me a little wary.

The actual paper is at
[https://www.sciencedirect.com/science/article/pii/S016653161...](https://www.sciencedirect.com/science/article/pii/S0166531619302457/pdfft?md5=7de20355de7ca3e047e645006fa812f2&pid=1-s2.0-S0166531619302457-main.pdf).
This is a pretty obscure journal. The paper is pretty "soft" -- lots of
numerical testing of their approach vs. other well-known approaches and not
very much theoretical analysis of convergence rates or such.

The main claim seems to be that their approach has better numerical properties
for discontinuous functions and that it can be effectively implemented to high
order using double precision arithmetic.

~~~
JorgeGT
Why do you say they are not associated? They seem part of a research group of
the Technical University of Budapest, looking at the papers affiliations.

As a researcher I really appreciate the promotion effort; some years ago I
came across a similar "landing page" for a numerical technique that helped me
a lot:
[http://people.ece.umn.edu/users/mihailo/software/dmdsp/](http://people.ece.umn.edu/users/mihailo/software/dmdsp/).

Trying to piece together how a new numerical method works by scouring papers
with different nomenclatures, different sets of authors, different
implementations, etc. is often a huge pain. I wish these "landing pages"
became a standard, or that a standard repository for them became available.
Something like: this is our technique, these are the relevant papers, and here
is some demo code.

~~~
pabo
The link you provided does not work. Could you fix it?

~~~
JorgeGT
Uhm, works for me, maybe search «DMDSP – Sparsity-Promoting Dynamic Mode
Decomposition»?

~~~
pabo
Ok, I guess there was some temporary issue, it's working now.

------
hilles
Hi! One of the authors here.

Thank you for the exposure and feedback! We really appreciate it.

About the code: we have added comments and simple running examples to the
code on GitHub. Hopefully that helps make the code more accessible to
everyone.

About the contribution. Classic numerical inverse Laplace transformation
methods work in some cases but fail in others, while the CME method always
gives a good approximation at low computational cost. We recommend it for
general use when you just want to invert a function numerically without
spending effort to figure out what methods might be applicable.

~~~
no_identd
This story is six days old, so don't expect a lot of replies to your comment,
but know that your comment, and more importantly the improvements (and of
course the result!), are appreciated.

------
nisuni
Inverting the Laplace transform is a central problem in computational physics,
since it connects imaginary-time results (easier to obtain numerically) to
real-time response.

Over the years a number of approaches have been developed for the inverse
Laplace transform, such as MaxEnt, GIFT and many others.

I would love to see how this new approach fares against those.

------
turbinerneiter
This is how papers should be, link to GitHub, interactive demo. Awesome stuff.

~~~
peterburkimsher
I feel like this is what Tim Berners-Lee imagined the World Wide Web to be:
sharing knowledge and research with interactive media and hypertext, instead
of printed papers. It found new applications outside academia, but this site
is probably close to the original idea.

------
signa11
for an introduction to the whole topic of laplace-transforms: the venerable
3b1b
[https://www.youtube.com/watch?v=6MXMDrs6ZmA](https://www.youtube.com/watch?v=6MXMDrs6ZmA)

i found this ([https://johnflux.com/2019/02/12/laplace-transform-
visualized...](https://johnflux.com/2019/02/12/laplace-transform-visualized/))
to be pretty cool as well.

------
MaxBarraclough
Excellent presentation of the material.

Not really my, well, domain (sorry), so my only contribution is that there's a
spelling error in the dropdown: it refers to the Heaviside step function as
the 'Heavyside' function.

------
GolDDranks
Just spitballing here about an application of the Laplace transform. We have
a product that allows users to use machine learning in a semi-automated way,
without deeply understanding hyperparameter optimization, model testing,
selection, evaluation, and such.

There was some talk about supporting the prediction of time-series data. I
have absolutely no knowledge of how time-series data should be pre-processed
or what kinds of algorithms are common or applicable in general. (I'm not in
charge of the R&D of the data-science-y features.) However, it seems like the
Laplace transform as a pre-processing step ticks a lot of the checkboxes. As a
superset of Fourier, it supports periodic changes in time series, and, being
about exponentials, it also allows for growth (or decay) over time, allowing
you to transform a time series into data that is more amenable to classical
ML algorithms.

Is the Laplace transform actually used for such use cases?

~~~
cogman10
IDK, but Fourier, and specifically the more specialized DCT, certainly is.

Part of the reason is that the algorithms to go from discrete data points to
a waveform representation are fairly well known and fast.

The DCT is the foundation of most lossy encoding formats. Using it for time
series data makes a lot of sense, especially if you are optimizing for
storage space.
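
A sketch of the energy-compaction property that makes the DCT attractive for
lossy storage of smooth time series (a naive O(N^2) transform for clarity,
not a production FFT-based one; the test signal is made up):

```python
import numpy as np

def dct2(x):
    """Naive DCT-II: X[k] = sum_n x[n] * cos(pi * (n + 0.5) * k / N)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N))
                     for k in range(N)])

def idct2(X):
    """Inverse (a scaled DCT-III): exact reconstruction from all N coeffs."""
    N = len(X)
    k = np.arange(1, N)
    return np.array([X[0] / N
                     + (2 / N) * np.sum(X[1:] * np.cos(np.pi * (n + 0.5) * k / N))
                     for n in range(N)])

# A smooth "time series": a sinusoid with a growing envelope. Most of its
# energy lands in the first few DCT coefficients, which is exactly what
# lossy compression exploits.
t = np.linspace(0, 1, 64)
x = np.exp(0.5 * t) * np.sin(2 * np.pi * t)
X = dct2(x)

energy_low = np.sum(X[:8] ** 2) / np.sum(X ** 2)
print(energy_low)                    # close to 1: energy is compacted
assert np.allclose(idct2(X), x)      # the round trip is lossless
```
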

------
nabla9
If you are familiar with Fourier transform, then

Fourier transform: sinusoidals

Laplace transform: sinusoidals + exponentials

Here is a nice video explaining it:
[https://www.youtube.com/watch?v=n2y7n6jw5d0](https://www.youtube.com/watch?v=n2y7n6jw5d0)

------
nebukadnezar
How does this new algorithm compare to
[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1355451](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1355451)
?

------
credit_guy
Here's a little ELI5 about the Laplace and inverse Laplace transform, why the
inverse transform is fiendishly difficult, and therefore why this result is
extraordinarily important.

Imagine you win the Megabucks lottery. The win is one hundred million dollars.
You go to claim your money, but you are told you can choose between the full
amount given in monthly payments over 20 years, or a lump sum. But the lump
sum is not the full $100MM, it is the present value of the monthly payments
discounted at a rate of 5%. To discount an amount received 10 years from now
at the 5% rate, you simply divide by 1.05 ^ 10, which is very close to
exp(0.05 x 10). If you actually calculate this present value using the
exponential function, you say that you use "continuously compounded rates".

So, for any stream of future cashflows one can calculate the present value by
multiplying the cashflows with appropriate discount factors (of the type
exp(-r t)) and adding them up. For different discounting rates r you obtain
different present values. This present value as a function of r is the Laplace
transform of the cashflow stream as a function of t.

The inverse Laplace transform is solving the riddle: if I tell you the present
value (PV) of some cashflows for any (positive) discount rate you want, can
you calculate the cashflows?

Why is this a difficult problem? Because it is "ill-conditioned". Imagine the
following two cashflow streams: in the first you get $1MM every year for the
next 10 years and another $1MM one hundred years from now. In the second you
also get $1MM annually for the first ten years but the last $1MM is 101 years
from now. For a zero discount rate the value of both cash streams is $11MM.
At a 5% continuously compounded rate they are both around $7.7MM, and they
differ by about $330, which is about 0.004%. For any discount rate the PVs
will be very, very close.

In some cases "in real life" this closeness could be below the machine
precision level. If someone gives you two sets of inputs whose Laplace
transforms differ by less than machine precision for all values of the
discount rate, then there is no hope of telling them apart from their
transforms alone, at least not without a multiple-precision library.

That should give you an intuition why the inverse Laplace transform is nasty.
All hope is not lost though. First of all, in a typical application the
Laplace transform of a function is known in closed (analytical) form, so you
can actually use multiple precision libraries if you so wish. I have seen
cases where people were using precision of 2000 digits in Mathematica for
this. It's slow as hell, but it gets the job done.

Separately, you are free to calculate the Laplace transform at any "discount
rate", including complex values. If you are smart about how you choose these
values, you can come up with good recipes for the inverse Laplace transform.

For decades now, the general wisdom has been that the various numerical
inverse Laplace transform algorithms have strengths and weaknesses, but no
single one is universally good.

Maybe this one will be, and if so it will be indeed revolutionary.
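
For the curious, here is a sketch of one such classic recipe, the
Gaver-Stehfest method (my example, not the CME method from the article). It
evaluates the transform only at real points and works well for smooth
functions, but degrades for oscillatory or discontinuous ones:

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def ilt_stehfest(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), sampling real s."""
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t)
                           for k in range(1, N + 1))

# Sanity check on a transform with a known inverse: F(s) = 1/(s+1) <-> exp(-t).
approx = ilt_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
print(approx, math.exp(-1.0))
```
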

------
punnerud
One of the big breakthroughs in Machine Learning / Neural Networks (NN) is
using the derivative of the error to update the weights of the network
(backpropagation). I wonder whether CME could be used to avoid local
minima/maxima in some way, to speed up the training process.

------
vmchale
this is a lot of energy for an inverse laplace transform

------
The_rationalist
What could be the use cases? Also, visually it seems so simple, like the Mish
activation function:
[https://github.com/digantamisra98/Mish/blob/master/README.md](https://github.com/digantamisra98/Mish/blob/master/README.md)

It seems to be advanced maths, but I wonder: since the designer already knows
in advance the desired form of the function (in order to give it a desirable
property, in both cases being smoother / more continuous / centered), why not
draw the desired function graphically and let software automatically find the
best approximation of it?

EDIT: well, it seems to be a general function approximator, so my point
doesn't apply (but it still applies to the new activation functions in
machine learning)

