
Timing requirements don't necessarily translate to _speed_ requirements.

Microcontrollers like the Arduino/Atmel are perfect for situations where you need to guarantee that some signal will be asserted exactly 250 (no more, no less) nanoseconds after some other external event, _no_matter_what_.

Since there's usually no OS running on something like an Arduino, you're basically forced to write very deterministic, event-driven programs. Also, since most (though not all) instructions on those Atmel microcontrollers execute in a single clock cycle, you can easily dissect your C functions and know exactly how long they will take to execute.
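
As a rough illustration (a minimal sketch for an ATmega328P-class AVR; the pin and interrupt choices are arbitrary), responding to an external event in a fixed, countable number of cycles looks like this:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    // Assert PB0 as soon as INT0 (PD2) sees a rising edge. The interrupt
    // entry plus these few instructions is a fixed, countable number of
    // clock cycles, so the response time is the same on every event
    // (to within a cycle or two of sync jitter).
    ISR(INT0_vect)
    {
        PORTB |= (1 << PB0);                    // compiles to a single SBI
    }

    int main(void)
    {
        DDRB  |= (1 << PB0);                    // output pin
        EICRA |= (1 << ISC01) | (1 << ISC00);   // INT0 fires on rising edge
        EIMSK |= (1 << INT0);                   // enable INT0
        sei();
        for (;;) { }                            // nothing else competes for the CPU
    }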

I wouldn't trust Linux for such an application even with a "real time" scheduler.

Linux as an RTOS certainly has its place, but it's best left for systems that can actually benefit from the added complexity.

For most typical microcontroller projects, the extra features of a full Linux kernel would just get in the way.

Note: There's nothing stopping someone from using an RPi in such a way as well (bare metal), but they are really geared toward more complex applications, so there isn't as much support in that area. Also, the hardware is probably a little more complex. Therefore, my statements assume we're comparing an Arduino to an RPi running Linux.




That's where things like the PRU on the TI chips (the BeagleBone being one of them) get interesting:

http://processors.wiki.ti.com/index.php/PRU-ICSS


>Microcontrollers like the Arduino/Atmel are perfect for situations where you need to guarantee that some signal will be asserted exactly 250 (no more, no less) nanoseconds after some other external event, _no_matter_what_.

What is an example of something like that?


I make two products [1] that both use AVR microcontrollers (though not Arduino), and both rely on sub-microsecond accuracy. One is a high-speed LED camera flash. The pulse width needs to be exactly the same every time, and so does the latency, otherwise the shot will be missed. The other is a sound trigger for flashes and cameras. That has to be as fast as possible, but also exactly the same every time. If you're lining up a photo of a bullet passing through a cherry, you need to know that the bullet will be in the same position every time.

Incidentally, I've written a post [2] about high-resolution timers with Arduino. The normal timing functions are not accurate enough for my sort of work, but the nice thing about Arduino is that you can easily drop down to native AVR code, or even assembly if you need to.
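
For anyone curious what dropping down to the timer hardware can look like, here's a minimal sketch fragment of my own (not code from the linked post), assuming a 16 MHz ATmega328P-based board:

    // Configure Timer1 as a free-running 16-bit counter at the full CPU
    // clock, so each tick is 62.5 ns -- far finer than micros()'s 4 us.
    void setupTimer1() {
      TCCR1A = 0;                     // normal mode, no PWM output
      TCCR1B = (1 << CS10);           // no prescaling: 1 tick = 1 CPU cycle
      TCNT1  = 0;
    }

    // Ticks elapsed since `start`; wraps correctly for intervals < ~4 ms.
    uint16_t elapsedTicks(uint16_t start) {
      return (uint16_t)(TCNT1 - start);
    }

    // Usage:
    //   uint16_t t0 = TCNT1;
    //   ... thing being timed ...
    //   uint16_t dt = elapsedTicks(t0);   // multiply by 62.5 for nanoseconds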

[1] https://www.vela.io/

[2] https://www.vela.io/posts/how-fast-is-your-flash


This post made my day! High-speed photography is what first prompted me to dive into the guts of computers. It introduced me to C, in a very raw way, through the AVR Butterfly demo board. Prior to that I had only tinkered around with Perl/PHP/Java.

I was able to get pictures like this [1] with a $25 microcontroller board (even cheaper now), about $150 in camera equipment, and a $40 pellet rifle. The setup was simple [2]: just a regular camera flash under a plastic bag (cheap diffuser!) and a configurable delay. The timer was triggered by the AVR's ADC readout of an electret mic; I ended up delaying about 20 milliseconds for the shots I took. [3] The AVR fired the flash via an optoisolator relay. Pretty simple; I was glad to find out that even just placing a coin across the flash contacts will trigger it.
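
For anyone wondering how little code that takes: something along these lines would do it (a hedged reconstruction in plain AVR C, not the original firmware; the pins, threshold, and delays are made up):

    #define F_CPU 16000000UL          // assumed clock
    #include <avr/io.h>
    #include <util/delay.h>

    #define MIC_CHANNEL 0             // ADC0: electret mic (assumed wiring)
    #define FLASH_PIN   PB1           // drives the optoisolator (assumed)
    #define THRESHOLD   600           // out of 1023; tune by experiment

    static uint16_t read_mic(void)
    {
        ADMUX   = (1 << REFS0) | MIC_CHANNEL;   // AVcc reference, channel 0
        ADCSRA |= (1 << ADSC);                  // start a conversion
        while (ADCSRA & (1 << ADSC)) { }        // ~0.1 ms at 125 kHz ADC clock
        return ADC;
    }

    int main(void)
    {
        DDRB   |= (1 << FLASH_PIN);
        ADCSRA  = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0); // enable ADC, clk/128
        for (;;) {
            if (read_mic() > THRESHOLD) {
                _delay_ms(20);                  // wait for the pellet to reach the target
                PORTB |=  (1 << FLASH_PIN);     // "coin across the contacts"
                _delay_ms(2);
                PORTB &= ~(1 << FLASH_PIN);
                _delay_ms(500);                 // crude re-arm delay
            }
        }
    }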

Now I'm excited to measure how fast that standard flash really was, using your article's advice. And then, for the cherry on top, compare it to my General Radio 1531-AB stroboscope. [4] It was the cheapest externally triggerable stroboscope I could find on eBay at the time. I got it because it can produce strobes as short as 0.8 microseconds! I really want to recreate an old high-speed shot and see the difference between the flashes first hand. But college happened and I haven't touched photography in about a decade. I haven't even taken a darn photo with the stroboscope; I've just turned it on and soothed myself with the rapid and loud clicking. :P I hope your post motivates me!

Your Vela One... so tempting...

[1]: https://www.flickr.com/photos/jevinskie/115619170/

[2]: https://www.flickr.com/photos/jevinskie/112758047/

[3]: https://www.flickr.com/photos/jevinskie/albums/7205759408438...

[4]: http://www.ietlabs.com/1531.html


Nice shots! If you do it again, you might find the AVR's analog comparator is a better route than the ADC. I use an external comparator on an interrupt pin in my sound trigger, which gives more flexibility.
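
For reference, the on-chip comparator route looks roughly like this on an ATmega328P (a sketch; a dedicated external comparator on an interrupt pin, as in the sound trigger, is wired differently, but the ISR side is similar):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    // The comparator fires an interrupt the instant AIN0 (PD6) rises above
    // AIN1 (PD7) -- no conversion time, no polling loop.
    ISR(ANALOG_COMP_vect)
    {
        PORTB |= (1 << PB1);          // e.g. start the flash pulse
    }

    int main(void)
    {
        DDRB |= (1 << PB1);
        ACSR  = (1 << ACIE) | (1 << ACIS1) | (1 << ACIS0);  // interrupt on rising comparator output
        sei();
        for (;;) { }
    }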

Perhaps I can tempt you further with a discount. Use the code HACKERNEWS for 10% off. That's open to anyone else reading this too, and is valid until next week.


Music, for example. If you're doing audio processing for music, it's far preferable to have something that takes 20ms consistently to transform an input than something that takes 5ms 90% of the time and 50ms 10% of the time (due to, e.g., interrupts). That 50ms gap will show up as a note that's audibly behind the beat, while the improvement from 20ms to 5ms isn't really that beneficial.

EDIT: another example is controlling a UAV. You can design around a system that consistently takes 20ms to process an input (e.g. by limiting the max speed). It's a lot more difficult to design around a component that takes 5ms most of the time, but will randomly take 50ms here and there, because you don't get to control when those lag spikes happen.


For music, it seems like the rt-linux patch ought to be enough. They claim sub-100 microsecond timing jitter for process wakeup (https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO#Benchm...). That ought to be tiny, if you're talking about audio latency in the range of milliseconds.

For UAVs, you're typically talking to ESCs that want PWM signals in the 1-2ms range, so tens of microseconds of jitter would certainly matter for PWM generation, but that's why you'd probably offload that to an external PWM chip and handle the code that can tolerate 20ms processing times with 100us of jitter (like the flight-control loop) on the CPU.

Bare metal can't be beat for tiny (easily auditable) code size and a general lack of "what if the timing goes wrong somehow" situations, of course. Plus, who knows whether the rt-linux patches would actually perform to that level on a Pi Zero.
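
For what it's worth, the usual PREEMPT_RT recipe for getting close to those wakeup numbers is just a SCHED_FIFO thread with locked memory; a minimal sketch (the priority value and thread body are placeholders):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    static void *audio_loop(void *arg)
    {
        (void)arg;
        /* process audio buffers here */
        return NULL;
    }

    int main(void)
    {
        // Lock all pages so a page fault can't stall the RT thread.
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 };   // placeholder RT priority
        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        pthread_t tid;
        int err = pthread_create(&tid, &attr, audio_loop, NULL);
        if (err != 0) {                       // needs root or an rtprio rlimit
            fprintf(stderr, "pthread_create: %s\n", strerror(err));
            return 1;
        }
        pthread_join(tid, NULL);
        return 0;
    }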


My last experience with rt-linux taught me that the kernel is not the only thing that needs to support real time. As soon as you have any drivers (DMA!) or, e.g., System Management Interrupts (SMIs) that circumvent the kernel's smart preemptive scheduler, you're still screwed. So what was said is correct: hard real-time is hard, and an OS on top of a "general purpose" hardware platform makes it harder.

That being said, there is stuff like VxWorks, which kind of proves that it is possible, if you have full control over the hardware and the OS.


100μs of jitter is enough to cause major problems if you're bit-banging or doing high-speed closed-loop control. Something like the Bus Pirate is almost trivial to implement on bare metal, but a complete nightmare if there's an OS involved.

RTOSes can be extremely useful, but they're not a perfect substitute for bare metal.

http://dangerousprototypes.com/docs/Bus_Pirate


Well, right, that needs to be a multi-purpose serial interface device, and who knows what crazy timing requirements some sensor or interface chip will need. An example: There are some cheap little radios around. They literally just output a carrier with a binary value encoded on it in a super-simple modulation. Even with an RT device, you've got to keep transmissions very short and include a clock-synch section at the beginning.

You can still transmit and receive if your devices have a lot of jitter, but you'll have to kill your transmission rate to keep things working reliably.


Engine management units, camera flashes, LED lights, servo motors, and brushless motors, to name a few.

Whilst the Raspberry Pi has the ability to generate PWM, you can't guarantee when your code runs, because something else might be scheduled onto the CPU at the time.

The fancy RGB LEDs require a precise, complex signal running at ~800 kHz*; you can't miss a signal slot because it'll change colour. It's the same for servo motors, although that's simpler as it only needs a static PWM to hold position.

*might be less
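
For the servo case at least, the usual microcontroller answer is to hand the pulse train to a hardware timer so no software jitter can touch it. A rough sketch for an ATmega328P at 16 MHz (illustrative, not a drop-in library):

    #include <avr/io.h>

    // Timer1 in fast PWM mode generates a rock-steady 50 Hz servo signal on
    // OC1A (PB1) entirely in hardware, regardless of what the code is doing.
    void servo_init(void)
    {
        DDRB  |= (1 << PB1);                                 // OC1A as output
        ICR1   = 39999;                                      // TOP: 40000 ticks * 0.5 us = 20 ms
        TCCR1A = (1 << COM1A1) | (1 << WGM11);               // non-inverting PWM on OC1A
        TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11);  // fast PWM, TOP = ICR1, clk/8
    }

    void servo_set_us(uint16_t pulse_us)                     // typically 1000..2000 us
    {
        OCR1A = pulse_us * 2;                                // 0.5 us per tick
    }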


I had a temperature and humidity sensor that uses a single wire to read and write data. It has libraries for both Arduino and Raspberry Pi. The Raspberry Pi one does much worse and reads data wrong all the time, even though the library was written by ladyada, whose code is generally high quality. It's no fault of hers: Linux doesn't guarantee the timings. The best you can do is crank the process priority to maximum to get the timings right, and even then it's still iffy. Interesting to see, given that the Raspberry Pi has a far higher clock speed than the Arduino.

Note the comment:

https://github.com/adafruit/Adafruit_Python_DHT/blob/master/...

    # Note that sometimes you won't get a reading and
    # the results will be null (because Linux can't
    # guarantee the timing of calls to read the sensor).
    # If this happens try again!


Sampling from external sensors for example. If you're measuring something changing over time, you may need very precise timing.
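
Concretely, on an AVR you'd usually pace the samples off a hardware timer rather than a software loop; a rough sketch (ATmega328P register names; the 1 kHz sample rate is picked arbitrarily):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    volatile uint16_t sample;

    // Timer1 compare match fires every 1 ms exactly; the sample spacing
    // doesn't depend on what the main loop happens to be doing.
    ISR(TIMER1_COMPA_vect)
    {
        ADCSRA |= (1 << ADSC);                // start a conversion
        while (ADCSRA & (1 << ADSC)) { }      // ~0.1 ms at 125 kHz ADC clock
        sample = ADC;
    }

    int main(void)
    {
        ADMUX  = (1 << REFS0);                                             // AVcc ref, channel 0
        ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0); // enable ADC, clk/128
        TCCR1B = (1 << WGM12) | (1 << CS11);                               // CTC mode, clk/8 (0.5 us/tick)
        OCR1A  = 1999;                                                     // (1999 + 1) * 0.5 us = 1 ms
        TIMSK1 = (1 << OCIE1A);
        sei();
        for (;;) { /* consume `sample` */ }
    }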


Another specific example would be something like an IRIG-B (DC) timecode decoder where you need to provide a PPS signal that's phase locked to the P0 rising edge or maybe even just display the time. This would be accomplished by implementing a state machine to lock on to the IRIG frame and essentially run a PLL to synchronize one of the local timers. Decoding the IRIG frame would require you to be able to catch every transition of the signal. With a good implementation, you can achieve PPS accuracies to within a few clock cycles (with it being deterministic too).
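
On an AVR, catching every transition like that maps naturally onto the timer's input-capture unit, which timestamps edges in hardware; a rough sketch of just the edge-timestamping part (ATmega328P names; the IRIG state machine and PLL are omitted):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    volatile uint16_t last_edge;
    volatile uint16_t pulse_ticks;    // width of the last pulse, in 0.5 us ticks

    // The input-capture unit latches TCNT1 in hardware on each edge of the
    // ICP1 pin, so the timestamp itself carries no software jitter; the ISR
    // only has to read it out before the next edge arrives.
    ISR(TIMER1_CAPT_vect)
    {
        uint16_t now = ICR1;
        pulse_ticks  = now - last_edge;
        last_edge    = now;
        TCCR1B ^= (1 << ICES1);       // flip polarity to catch the opposite edge next
        // feed pulse_ticks into the IRIG frame state machine / PLL here
    }

    int main(void)
    {
        TCCR1B = (1 << ICES1) | (1 << CS11);  // capture rising edges first, clk/8 (0.5 us/tick)
        TIMSK1 = (1 << ICIE1);                // enable the capture interrupt
        sei();
        for (;;) { }
    }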



