Some POV-Ray art I created years ago: https://mscharrer.net/povray/scenes/ Source code is always included.


Wolfram works with the full mathematical expression tree. That works very well in practice, but testing two expressions for equality is undecidable in general (Richardson's theorem).
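To make that concrete, here is a toy sketch in Haskell (my own illustration, nothing to do with Wolfram's internals): expressions as trees, where structural equality is trivial to decide but semantic equality is not.

    -- Structural equality on expression trees is decidable and cheap;
    -- deciding whether two trees denote the same function is
    -- undecidable in general (Richardson's theorem).
    data Expr = Const Integer
              | Var
              | Add Expr Expr
              | Mul Expr Expr
              | Sin Expr
              | Cos Expr
              deriving (Eq, Show)

    -- sin(x)^2 + cos(x)^2 and 1 denote the same function,
    -- but structural (==) cannot see that.
    lhs, rhs :: Expr
    lhs = Add (Mul (Sin Var) (Sin Var)) (Mul (Cos Var) (Cos Var))
    rhs = Const 1

    main :: IO ()
    main = print (lhs == rhs)  -- False, though semantically equal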


I ran into this issue once while trying to render the Mandelbrot set as I was learning Haskell. The numbers defaulted to a rational type built on two unlimited-precision bignum integers.

With every Mandelbrot iteration* the integers roughly doubled in size, producing exponential complexity. At 100 iterations this means the program essentially never completes, and it runs out of memory at some point.

The newbie-friendly feature turned out to be not so friendly.

* complex z_{n+1} = z_n^2 + c
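A minimal sketch of the failure mode, reconstructed from memory (not my original code):

    import Data.Ratio ((%), numerator)

    -- One Mandelbrot step z <- z^2 + c, with the complex number kept
    -- as a pair of exact Rationals. Each squaring roughly doubles the
    -- bit-length of numerator and denominator, so iteration n costs
    -- on the order of 2^n bits.
    step :: (Rational, Rational) -> (Rational, Rational) -> (Rational, Rational)
    step (cr, ci) (zr, zi) = (zr * zr - zi * zi + cr, 2 * zr * zi + ci)

    main :: IO ()
    main = do
      let c  = ((-3) % 4, 1 % 10)
          zs = iterate (step c) (0, 0)
      -- Watch the numerator's digit count explode with the iteration
      -- number; 100 iterations is hopeless.
      mapM_ (\n -> print (n, length (show (numerator (fst (zs !! n)))))) [0, 4 .. 20]

Forcing the type to Double instead makes the same loop run in constant memory.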


> Nothing is mining the web for maths, and semantic markup for maths buys you nothing.

I find that rather unfortunate. A math search engine that finds sites with equivalent formulae (or segments) would be quite useful to me.

Of course, that could probably be made to work with image alt tags containing LaTeX code.


What would you use it for? And that's a serious question: what would you use it for, as opposed to just using Wolfram Alpha or some other service that can already get you all the answers, analysis, and more, without having to mine MathML from random pages on the internet?


In the past, I used a math search engine[0] to find solutions for Olympiad problems, especially inequality ones. I imagine it would be useful when you want to find the name of some formulas or expressions that you came across, though probably not much more.

[0] My typical query: https://approach0.xyz/search/?q=OR%20content%3A%24a_%7Bn%2B1...


Semantic MathML is an absurdity, like marking up the tree diagram of every English sentence you write and linking all words to a URL with their dictionary definition. In short, the sort of thing only the semweb wonks could have dreamed up.


> However imagine you have a set of keyframes (maybe even multiple fragments per frame) and you need to interpolate between them? Not that hard of a task, isn't it.

Interestingly, the video artifacts of this model look somewhat similar to those of simple motion-interpolation algorithms such as ffmpeg's minterpolate, especially during fast camera motion. https://ffmpeg.org/ffmpeg-filters.html#minterpolate

Edit: I generated an example with strong artifacts. Input: https://mscharrer.net/tmp/lowfps.webm Output: https://mscharrer.net/tmp/minterpolate.webm
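For reference, the output above came from an invocation along these lines (I don't recall my exact parameters; mci is minterpolate's motion-compensated interpolation mode):

    ffmpeg -i lowfps.webm -vf "minterpolate=fps=60:mi_mode=mci" minterpolate.webm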


Unlike the former, I could also imagine the latter being produced by a Markov chain.


> iv. Never use the passive where you can use the active.

One example I see quite often is: "X has been widely criticized for Y", with no mention of either the critics or their argument.


I think fallacy 2 is the big one, and not just in the AI field. Sometimes a simple lookup table (or some other 'dumb' model) works well on tasks that humans generally consider 'hard'.


Wikipedia has a pretty good article on the banana issue: https://en.wikipedia.org/wiki/Commission_Regulation_(EC)_No....


How do these ToF systems deal with multiple sensors pointing at the same scene? I've seen it work with two ToF sensors, but haven't been able to find a good explanation for how it works.


There are three different methods. 1) TDMA (time-division multiple access): a sync system ensures that only one camera is emitting light at a given time. The Microsoft Azure Kinect uses this with its 3.5mm sync cable, which enables up to 3 cameras to illuminate at different times.

2) Frequency domain: iToF cameras can be set to different modulation frequencies, and the signals won't interfere; the other cameras' photons only contribute to photon shot noise (see the sketch after this list). Some sensors also support randomly shifting the modulation frequency by a small amount during the integration time.

3) Randomly change the timing during the integration time; this is more common with pulsed ToF cameras. Analog Devices had an example of this in their booth at CES in 2020.
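A quick numeric sketch of why method 2 works (toy numbers of my own, not any specific sensor's math): the pixel correlates the received light with its own modulation reference, and an interferer at a different modulation frequency integrates toward zero.

    -- Cross term between our demodulation reference at f1 and an
    -- interfering camera's illumination at f2, integrated over n
    -- samples at 1 ns spacing.
    cross :: Double -> Double -> Int -> Double
    cross f1 f2 n = (dt *) . sum $
        [ cos (2 * pi * f1 * t) * cos (2 * pi * f2 * t)
        | k <- [0 .. n - 1], let t = fromIntegral k * dt ]
      where dt = 1e-9

    main :: IO ()
    main = do
      print (cross 20e6 20e6 100000)  -- same frequency: ~5e-5, grows with time
      print (cross 20e6 21e6 100000)  -- 1 MHz apart: ~0 over the 100 us window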


Time division multiplex the ToF signal.

Azure Kinect uses a 3.5mm jack to sync this between sensors.


If the measurement time is short enough (<1 µs, capturing at 60 fps), the probability of interference is low even with a large number of cameras. Even then, some temporal filtering and intelligent time offsetting to separate the signals can usually fix the problem.
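Rough numbers (my own back-of-envelope, using the 1 µs / 60 fps figures above):

    -- A 1 us exposure repeated at 60 fps gives a duty cycle
    -- d = 1e-6 * 60 = 6e-5. With n cameras at independent random
    -- offsets, the chance that a given exposure overlaps any other
    -- emitter is roughly 1 - (1 - 2d)^(n-1) (factor 2 because either
    -- window may straddle the other).
    collision :: Double -> Int -> Double
    collision d n = 1 - (1 - 2 * d) ** fromIntegral (n - 1)

    main :: IO ()
    main = mapM_ (\n -> print (n, collision 6e-5 n)) [2, 10, 100]

Even with 100 cameras that comes out to roughly a 1% chance per exposure.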


If the other emitter/sensor pairs are uncorrelated with our own, and if we integrate over enough transmit/receive cycles, then the other pairs will contribute approximately equally to both of our sensor's phase detectors. The method described here uses the difference between the phase detectors, so any competing interference should cancel out; it will just raise the background noise floor a bit.
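A toy simulation of that cancellation (my own sketch, not a real sensor's pipeline):

    -- A ToF pixel dumps photocharge into tap A while its reference
    -- square wave is high and into tap B while it is low. Our own
    -- return is locked to the reference; an uncorrelated emitter
    -- spreads its light evenly over both taps, so tapA - tapB cancels
    -- it and it only adds to the shot noise.
    dt, fMod :: Double
    dt   = 1e-9   -- 1 ns steps
    fMod = 20e6   -- our modulation frequency

    intensity :: Double -> Double  -- received optical power, non-negative
    intensity t = (1 + cos (2 * pi * fMod * t - 0.6))           -- our return
                + 0.8 * (1 + cos (2 * pi * 13.37e6 * t + 1.0))  -- interferer

    gate :: Double -> Bool  -- reference square wave, high half the cycle
    gate t = cos (2 * pi * fMod * t) >= 0

    main :: IO ()
    main = do
      let ts   = [ fromIntegral k * dt | k <- [0 .. 99999 :: Int] ]
          tapA = sum [ intensity t | t <- ts, gate t ] * dt
          tapB = sum [ intensity t | t <- ts, not (gate t) ] * dt
      print (tapA - tapB)  -- ~ our signal only: the interferer cancels
      print (tapA + tapB)  -- total charge: the interferer only raises this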


You can solve this using polarized light. Of course, the sensors have to be calibrated relative to each other.

