I ran into this issue once while learning Haskell, trying to render the Mandelbrot set. The numeric type defaulted to a rational backed by two unlimited-precision bignum integers.
With every Mandelbrot iteration the integers doubled in size, producing exponential complexity. At 100 iterations this essentially means the program never completes, and at some point it runs out of memory.
The newbie-friendly feature turned out to be not so friendly.
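A minimal sketch of the effect (illustrative only, not the original code), assuming the usual escape-time step z ← z² + c is run over Haskell's exact Rational type; the sample point 1%4 + (1%3)i is arbitrary:

    import Data.Ratio ((%), denominator)

    -- One escape-time step on exact rationals: z <- z^2 + c.
    -- Squaring a fraction roughly squares its denominator, so the number
    -- of digits in the representation doubles on every iteration.
    step :: (Rational, Rational) -> (Rational, Rational) -> (Rational, Rational)
    step (cr, ci) (zr, zi) = (zr * zr - zi * zi + cr, 2 * zr * zi + ci)

    main :: IO ()
    main = mapM_ report (zip [0 :: Int ..] (take 16 zs))
      where
        c  = (1 % 4, 1 % 3)                  -- arbitrary sample point
        zs = iterate (step c) (0, 0)
        report (n, (zr, _)) =
          putStrLn (show n ++ ": denominator of re(z) has "
                    ++ show (length (show (denominator zr))) ++ " digits")

Pinning the type to Double instead keeps every iteration constant-size, which is the usual fix.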
What would you use it for? And that's a serious question: what would you use it for, as opposed to just using Wolfram Alpha or some other service that can already get you all the answers, analysis, and more, without having to mine MathML from random pages on the internet?
In the past, I used a math search engine[0] to find solutions for Olympiad problems, especially inequality ones. I imagine it would be useful when you want to find the name of a formula or expression you came across, though probably not much more.
Semantic MathML is an absurdity, like marking up the parse tree of all your English sentences and linking every word to a URL with its dictionary definition. In short, the sort of thing only the semweb wonks could have dreamed up.
> However, imagine you have a set of keyframes (maybe even multiple fragments per frame) and you need to interpolate between them. Not that hard of a task, is it?
Interestingly, the video artifacts of this model look somewhat similar to those from simple motion interpolation algorithms such as ffmpeg's minterpolate, especially during fast camera motion.
https://ffmpeg.org/ffmpeg-filters.html#minterpolate
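For comparison, here is a toy sketch (not taken from either project) of the naive per-pixel cross-fade between two keyframes, roughly what minterpolate's mi_mode=blend does; this kind of straight blending, and its motion-compensated variants, is what tends to smear fast-moving content:

    -- Frames as flat lists of grayscale pixel values in [0,1].
    type Frame = [Double]

    -- Naive temporal interpolation: per-pixel cross-fade between two
    -- keyframes; t = 0 gives frame a, t = 1 gives frame b.
    blendFrames :: Double -> Frame -> Frame -> Frame
    blendFrames t = zipWith (\pa pb -> (1 - t) * pa + t * pb)

    -- Insert n evenly spaced in-between frames between consecutive keyframes.
    interpolate :: Int -> [Frame] -> [Frame]
    interpolate _ []   = []
    interpolate n keys =
      concat (zipWith tween keys (tail keys)) ++ [last keys]
      where
        tween a b = [ blendFrames (fromIntegral i / fromIntegral (n + 1)) a b
                    | i <- [0 .. n] ]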
I think fallacy 2 is the big one, and not just in the AI field.
Sometimes a simple lookup table (or some other 'dumb' model) works well on tasks that humans generally consider 'hard'.
How do these ToF systems deal with multiple sensors pointing at the same scene? I've seen it work with two ToF sensors, but haven't been able to find a good explanation for how it works.
There are three different methods.
1) TDMA (time-division multiple access). Some sync system so only one camera is emitting light at a given time. The Microsoft Azure Kinect does this with its 3.5mm sync cable, which lets up to 3 cameras illuminate at different times.
2) Frequency domain. With iToF cameras you can set a different modulation frequency on each camera and the signals won't interfere (see the sketch after this list); the other cameras' photons only contribute to photon shot noise. Some sensors also support randomly shifting the modulation frequency by a small amount during the integration time.
3) Randomly change timing during the integration time; this is more common with pulsed ToF cameras. Analog Devices had an example of this at their booth at CES in 2020.
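A rough sketch of why option 2 works (toy numbers, not from any datasheet): correlating the received light against the camera's own modulation frequency, lock-in style, keeps the camera's own return, while a second emitter at a slightly different frequency averages toward zero over the integration window.

    import Data.List (foldl')

    -- Toy lock-in correlation over one integration window (1 ns samples).
    correlate :: Double -> Double -> Double -> Double
    correlate fRef fSig phase =
      (2 / fromIntegral nSamples) *
      foldl' (+) 0
        [ cos (2 * pi * fRef * t) * cos (2 * pi * fSig * t + phase)
        | k <- [0 .. nSamples - 1]
        , let t = fromIntegral k * dt ]
      where
        nSamples = 200000 :: Int     -- 0.2 ms window
        dt       = 1e-9

    main :: IO ()
    main = do
      -- Own return at the same 20 MHz modulation: survives (~ cos 0.7).
      print (correlate 20e6 20e6 0.7)
      -- Interfering camera at 20.5 MHz: integrates toward ~0.
      print (correlate 20e6 20.5e6 0.3)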
If the measurement time is short enough (<1 µs of actual capture per frame at 60 fps), the probability of two cameras' exposures overlapping is low even with a large number of cameras. Even then, some temporal filtering and intelligent time offsetting to separate the signals can usually fix the problem.
If the other emitter/sensor pairs are uncorrelated with our own, and if we integrate over enough transmit/receive cycles, then the other pairs will contribute approximately equally to both of our own sensor phase detectors. The method described here uses the difference between the phase detectors, so any competing interference should cancel out. It will (just) raise the background noise floor a bit.
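A toy version of that cancellation argument (a simplified two-bucket model with made-up numbers, not any particular sensor): an interferer that is uncorrelated with our modulation adds almost equally to both buckets, so the A − B difference the depth estimate is built from barely moves; it only raises the noise floor.

    import Data.List (foldl')

    -- Two-bucket differential demodulation: bucket A integrates while the
    -- reference gate is high, bucket B while it is low.
    buckets :: Double -> (Double -> Double) -> (Double, Double)
    buckets fMod rx = (integrate gateHigh, integrate (not . gateHigh))
      where
        nSamples   = 200000 :: Int              -- 0.2 ms window, 1 ns steps
        dt         = 1e-9
        times      = [ fromIntegral k * dt | k <- [0 .. nSamples - 1] ]
        gateHigh t = sin (2 * pi * fMod * t) >= 0
        integrate keep = foldl' (+) 0 [ rx t | t <- times, keep t ]

    -- Our own return: 20 MHz modulation delayed by the round trip.
    ownReturn :: Double -> Double
    ownReturn t = 0.5 * sin (2 * pi * 20e6 * (t - 3e-9))

    -- An uncorrelated emitter from another camera at 23.3 MHz.
    interferer :: Double -> Double
    interferer t = 0.5 * sin (2 * pi * 23.3e6 * t + 1.2)

    main :: IO ()
    main = do
      let diff (a, b) = a - b
      putStrLn ("A - B, own return only: " ++ show (diff (buckets 20e6 ownReturn)))
      putStrLn ("A - B, with interferer: " ++
                show (diff (buckets 20e6 (\t -> ownReturn t + interferer t))))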