
Using of Discrete Orthogonal Transforms for Convolution [pdf] - alecdibble
http://iris.elf.stuba.sk/JEEEC/data/pdf/09-10_102-10.pdf
======
thearn4
I had a strong interest in discrete convolution operations a few years ago,
particularly in applications to digital image processing. Two papers I
published may be of interest to those here:

[A paper that as a lemma demonstrates the influence of boundary conditions on
the use of discrete orthogonal transformations for fast convolution
operations]
[https://scholar.google.com/citations?view_op=view_citation&h...](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=ZQzlOasAAAAJ&citation_for_view=ZQzlOasAAAAJ:IjCSPb-OGe4C)

[A paper on accelerating convolution operations when a matrix factorization
(SVD, DCT for JPEG, etc.) is already available]

[https://scholar.google.com/citations?view_op=view_citation&h...](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=ZQzlOasAAAAJ&citation_for_view=ZQzlOasAAAAJ:zYLM7Y9cAGgC)

~~~
tacos
The second paper in particular is pretty fascinating. I'm sure you see the
parallels to convolutional neural networks. Thanks for posting.

------
tacos
I'm pretty sure this paper was generated by a bot. Aside from phrases like
"The evaluation of the WHTs product needs only real multiplications by +1 or
1", I see phrases culled from expired patents. Googling pulls up a more
interesting paper from the 70s. This is weird.

~~~
loxias
With the disclaimer that I have not worked through _proofs_ of what is
asserted, I read through the paper and it makes sense to me. Though it _does_
have plenty of editing mistakes, which are somewhat distracting.

The paper might very well have been produced by cribbing from other sources, I
couldn't say -- but the central idea, that a convolution can in some cases be
computed by methods with lower computational complexity than the pointwise
product of DFTs, is interesting; I look forward to implementing it. :)

(And yeah, the product of two WHT transformed sequences does only need
multiplication by +1 or -1...)
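
To make that concrete, here's a minimal sketch of my own (not code from the
paper) of the fast WHT in Python. The butterfly stage is just `(x + y, x - y)`,
so the only "multiplications" are by +1 or -1, and the same transform
diagonalizes dyadic (XOR-indexed) convolution:

```python
def fwht(a):
    # Fast Walsh-Hadamard transform (length must be a power of two).
    # Each butterfly computes (x + y, x - y): additions and subtractions
    # only, i.e. multiplications by +1 or -1.
    a = list(a)
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def dyadic_convolve(x, y):
    # WHT convolution theorem: the WHT diagonalizes dyadic (XOR)
    # convolution, so multiply the spectra pointwise and invert.
    # The inverse WHT is the forward WHT divided by n.
    n = len(x)
    spectrum = [a * b for a, b in zip(fwht(x), fwht(y))]
    return [v // n for v in fwht(spectrum)]  # exact for integer inputs
```

Note this gives XOR-indexed convolution, not ordinary cyclic convolution --
for the latter the WHT alone doesn't suffice, which is where methods like the
paper's come in.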

~~~
tacos
Mostly I'm confused why a nearly 15-year-old, poorly written journal article
appeared on HN today with no comments. And I wasn't kidding: I'm familiar with
the state of the art, and reading it from the title on down, it really felt
like something a bot glued together.

There was some buzz around convolution methods in the late 90s. Lake DSP had a
notable patent which actually inspired some research. Most, if not all, of
these methods now exist in Matlab and scipy.signal.convolve with far better
documentation.
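
For anyone who just wants the transform-multiply-invert pattern those library
routines implement, here's a toy illustration using a naive O(n^2) DFT (purely
illustrative; the real routines use FFTs):

```python
import cmath

def dft(x, inverse=False):
    # Naive O(n^2) DFT -- only to show the structure of the method.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for j in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(x, y):
    # Convolution theorem: the DFT turns circular convolution into
    # pointwise multiplication of the spectra.
    spectrum = [a * b for a, b in zip(dft(x), dft(y))]
    return [round(v.real) for v in dft(spectrum, inverse=True)]
```

The interesting question the paper gestures at is when you can beat this
pattern's cost, not whether it works.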

But I grow weary of random, crappy HN links like this one that really feel
like some enthusiast did a Google search but didn't even read the paper.

~~~
loxias
> Mostly I'm confused why a nearly 15-year-old, poorly written journal article
> appeared on HN today with no comments.

Agreed, some context would be nice, and also agreed that it is poorly written.
I appreciate it much more when a paper I see on HN has some context from the
submitter.

