
    > Embedded systems don't have the resources!
    > ...
    > I'd still use a binary log storage, because
    > I find that more efficient to write and parse,
    > but the indexing part is useless in this case.
This is yet again a case of a programmer completely misjudging how an actual implementation will perform in the real world.

When I wrote the logging system for this thing http://optores.com/index.php/products/1-1310nm-mhz-fdml-lase... I first fell for the very same misjudgement: "This is running on a small, embedded processor: Binary will probably be much more efficient and simpler."

So I actually did implement a binary logging system first. Not only the logging itself, but also the code to retrieve and display the logs via the front panel user interface. And the performance was absolutely terrible. The code to manage the binary structure in the round-robin staging area, working in concert with the storage dump, also became an absolute mess; mind you, the whole thing is thread safe, so logging can cause inter-thread synchronization on a device that puts hard realtime demands on some threads.

Eventually I concluded I should go back and try a simple, text-only log dumper with some text pattern matching for log retrieval. Result: the text-based logging code is only about 35% of the size of the binary logging code, and it's about 10 times faster because it doesn't spend all those CPU cycles structuring the binary. Even the text pattern matching is faster than walking the binary structure.
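
A minimal sketch of the idea (this is not the actual device code; the buffer sizes, names, and strstr() as the "pattern matcher" are illustrative assumptions, and a real embedded version would add locking around the ring for thread safety):

    /* Text-only ring-buffer logger with substring-based retrieval. */
    #include <stdio.h>
    #include <string.h>
    #include <stdarg.h>

    #define LOG_LINES    64     /* ring capacity (assumed) */
    #define LOG_LINE_LEN 128    /* max length of one log line (assumed) */

    static char ring[LOG_LINES][LOG_LINE_LEN];
    static unsigned head;       /* next slot to overwrite */
    static unsigned count;      /* number of valid lines in the ring */

    /* Append one formatted line; the oldest line is silently overwritten. */
    static void log_printf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(ring[head], LOG_LINE_LEN, fmt, ap);
        va_end(ap);
        head = (head + 1) % LOG_LINES;
        if (count < LOG_LINES)
            count++;
    }

    /* Print every stored line containing `pattern`, oldest first. */
    static void log_grep(const char *pattern)
    {
        unsigned start = (head + LOG_LINES - count) % LOG_LINES;
        for (unsigned i = 0; i < count; i++) {
            const char *line = ring[(start + i) % LOG_LINES];
            if (strstr(line, pattern))
                puts(line);
        }
    }

    int main(void)
    {
        log_printf("motor: temp=%d", 71);
        log_printf("laser: sweep locked");
        log_printf("motor: temp=%d", 74);
        log_grep("motor:");     /* prints the two motor lines */
        return 0;
    }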

Like so often... premature optimization.




I've worked with a number of implementations, both embedded and otherwise (ranging from a PC under my desk, through dozen-node clusters, to ~hundred-node setups). In most cases, binary storage triumphed. Most often, we kept text-based transport.

Again, transport and storage are different. While I prefer binary storage, most of my transports are text (at least in large part, some binary wrapping may be present here and there).


Cool tech you have there, but I only understood it once I saw the video. You basically have a very fast laser that can do volumetric scans at a high framerate, did I get this right? What do people typically use it for?


    > You basically have a very fast laser that
    > can do volumetric scans at a high framerate,
    > did I get this right?
Sort of. The laser itself is constantly sweeping its wavelength (over a bandwidth of >100nm). Using it as a light source in an interferometer where one leg is reflected by a fixed mirror and the other leg goes into the sample, something interesting happens: The interferometric fringes produced for a certain wavelength correspond to the spatial frequency of scattering in the sample. The fringe distribution over wavelength is thus the Fourier transform of the scattering distribution, so by applying an inverse Fourier transform to the wavelength spectrum of the light coming out of the interferometer you get a depth profile.
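
In symbols (my paraphrase of the above; k is the wavenumber, a(z) the scattering amplitude at depth z, and the factor 2 accounts for the round trip through the sample arm):

    I(k) \propto \int a(z) \cos(2 k z) \, dz
    \quad\Longrightarrow\quad
    a(z) \propto \left| \mathcal{F}^{-1}\{ I(k) \} \right|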

Now the challenge is to acquire the wavelength spectrum. You can use a broadband CW light source and a spectrometer, but spectrometers are slow, so you can't generate depth scans at more than about 30kHz (too slow for 3D, but sufficient for 2D imaging). Or you can encode the wavelength in time and use a very fast photodetector (those go up to well over 4GHz bandwidth).

This is what we do: Have a laser that sweeps over 100nm at a rate >1.5MHz and use a very fast digitizer (1.8GS/s) to obtain a interference spectrum with over 1k sampling points. Then apply a little bit of DSP (mapping time to wavelength, resampling, windowing, iFFT, dynamic range compression) and you get a volume dataset.
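
To make the per-sweep core of that chain concrete (window → inverse DFT → magnitude → dB compression; the time→wavelength resampling is sketched further down the thread), here's a toy version under my own assumptions. This is not the actual GPU code, and a real implementation would use an FFT library (cuFFT, FFTW, ...) rather than this O(N²) DFT:

    /* Toy A-scan: one k-linear spectrum in, one depth profile (in dB) out. */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N 1024              /* sampling points per sweep (assumed) */

    static void a_scan(const double spectrum[N], double out_db[N])
    {
        double windowed[N];

        /* Hann window: suppresses sidelobes in the depth profile. */
        for (int n = 0; n < N; n++) {
            double w = 0.5 * (1.0 - cos(2.0 * M_PI * n / (N - 1)));
            windowed[n] = w * spectrum[n];
        }

        /* Naive inverse DFT; |X[z]| is the scattering strength at depth bin z. */
        for (int z = 0; z < N; z++) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                double ph = 2.0 * M_PI * (double)z * n / N;
                re += windowed[n] * cos(ph);
                im += windowed[n] * sin(ph);
            }
            /* Dynamic range compression: magnitude in dB. */
            out_db[z] = 20.0 * log10(sqrt(re * re + im * im) + 1e-12);
        }
    }

    int main(void)
    {
        double spec[N], img[N];
        /* Synthetic fringe from a single reflector at depth bin 100. */
        for (int n = 0; n < N; n++)
            spec[n] = cos(2.0 * M_PI * 100.0 * n / N);
        a_scan(spec, img);
        printf("bin 100: %.1f dB, bin 300: %.1f dB\n", img[100], img[300]);
        return 0;
    }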

BTW, I also wrote all the GPU OCT processing and visualization code.

    > What do people typically use it for?
Mostly for OCT, but you can also use it for fiber sensing (using fiber optics as sensors in harsh environments), Raman spectroscopic imaging, short pulse generation and a few other applications. But OCT is the bread and butter application for these things.


Alright, gotta say, that's cool.

Frequency-sweeping... How are you doing that? Is the laser itself able to frequency sweep? Or are you chirping pulses?


    > Frequency-sweeping... How are you doing that?
The basic principle is called FDML; there's a short description of how it works on our company website:

http://optores.com/index.php/technology/5-fourier-domain-mod...

A much more thorough description is found in the paper that introduced FDML for the first time:

https://www.osapublishing.org/oe/abstract.cfm?URI=oe-14-8-32...

    > Is the laser itself able to frequency sweep?
The laser itself is doing the sweeps.

    > Or are you chirping pulses?
No. In fact, one of the PhD projects that came out of our group generated pulses by compressing the sweeps:

http://www.nature.com/ncomms/journal/v4/n5/full/ncomms2870.h...


Interesting.

What you're doing sounds a lot like time-domain spectroscopy in an odd sort of way.

What are the advantages of this versus just chirping a pulsed supercontinuum source?


    > What you're doing sounds a lot like time-domain
    > spectroscopy in an odd sort of way.
The measurement principle is definitely related to TDS.

    > What are the advantages of this versus just
    > chirping a pulsed supercontinuum source?
Output power: Our system can emit >100mW.

Sweep uniformity: The phase evolution of the sweeps is very stable; the mean deviation in phase differences between sweeps is on the order of milliradians. This means that for the time→k-space mapping the phase evolution has to be determined only once and can then be reused for hours of operation; in fact the system operates so repeatably that even after being powered off overnight, the next morning you can often reuse the phase calibration of the previous day. Without that, you'd have to use a second interferometer, sample a k-space reference signal for each and every sweep in parallel, and use that for the k-space remapping. (A rough sketch of reusing such a stored calibration is at the end of this comment.)

Ease of synchronization: Trigger signals have very little jitter. Also, the jitter between electrical and optical synchronization is on the order of a few ps, which is important for things like Doppler-OCT.

Coherence: Supercontinuum sources have issues with coherence stability, which degrades the imaging range.

Sensitivity issues: Chirping pulsed supercontinuum sources (which are actually used for OCT) is challenging. It requires a lot of dispersion. High dispersion means a lot of loss, which in turn requires another output amplification stage, which in turn also produces significant optical noise. And optical noise is the bane of OCT, since it reduces the sensitivity. In contrast, a properly dispersion-compensated FDML laser exhibits very little noise.

Price: Pulsed supercontinuum sources suitable for chirping and OCT applications are quite expensive. Our laser is not cheap either, but it's still more cost effective.
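
To illustrate the "calibrate once, reuse for hours" point about sweep uniformity: under my own assumptions (hypothetical names, not our actual code), a calibration pass produces, for each k-space output bin, the fractional raw-sample index at which the sweep passes through that wavenumber; every subsequent sweep is then just interpolated at those precomputed positions:

    /* Reuse of a stored time->k calibration via linear interpolation. */
    #include <stdio.h>

    #define N 1024   /* samples per sweep (assumed) */

    /* Resample one raw sweep onto a k-linear grid using the stored map.
     * map[i] is the (fractional) raw-sample index for output bin i. */
    static void remap_to_k(const double raw[N], const double map[N],
                           double out[N])
    {
        for (int i = 0; i < N; i++) {
            int    j = (int)map[i];
            double f = map[i] - j;
            if (j < 0)           out[i] = raw[0];
            else if (j >= N - 1) out[i] = raw[N - 1];
            else                 out[i] = (1.0 - f) * raw[j] + f * raw[j + 1];
        }
    }

    int main(void)
    {
        double raw[N], map[N], lin[N];
        for (int n = 0; n < N; n++) {
            raw[n] = (double)n;                    /* dummy sweep */
            /* mildly nonlinear map: stand-in for the measured phase */
            map[n] = (double)n + 5.0 * (double)n / N;
        }
        remap_to_k(raw, map, lin);
        printf("bin 512 resampled from raw index %.1f -> %.1f\n",
               map[512], lin[512]);
        return 0;
    }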


All good answers. Thank you.



