Hacker News
Ask HN: Current state of programming embedded devices?
44 points by tlubinski on Apr 20, 2016 | 71 comments

I recently switched to a hardware company and we are working on a medical device. We need a nice, touch-based user interface on a 5"-10" display. Some math calculations in the background, but no realtime stuff. I am an experienced developer on different platforms but I have never programmed an embedded device.

What is the mainstream OS on embedded devices right now? Linux, Android, Win 7 Embedded? What about Windows 10 IoT?

What is a good setup? Raspberry Pi + official touch screen? Are there medical-grade devices out there (doesn't matter if it is expensive)?

What about developer tools? Okay, Visual Studio for Windows? But what about the other platforms? Still C?

You see, I'm pretty clueless, so any help is much appreciated!


I've done a few safety-critical embedded devices for medical equipment and an air traffic control system. We used Linux/C so we didn't kill people.

The combination of "medical device" and "touch-based user interface" started me twitching. Some things to bear in mind are (1) will the interface work if the users are wearing rubber gloves? (2) there could be a problem with an infectious agent transferring from a patient to a doctor's gloves to the screen then to another doctor and another patient.

At my workplace, we've tested capacitive touch panels with rubber gloves. Cleaning the device could be an issue, but is more general, since the same thing could happen with regular pushbuttons.

But I certainly defer to your expertise. These are the subtle issues that we non-medical guys overlook.

There are clearly a lot of very informed people in this thread, and a lot of goofs who have never gone through the FDA validation process (or, it seems, any other validation process beyond "Heh My Node.js unit tests pass!"). Some people are recommending RasPi and holy shit someone recommended JavaScript?! Who's going to be the one who tells lil' Tommy his grandma died because her dialysis machine confused === with ==, ahaha. Someone suggested LISP and Go? Holy fucking god. Good rule of thumb -- immediately disregard any advice advocating any platform that's pushed by the "hacker" or "maker community", unless those guys do embedded for a living and have graduate degrees (Nohl, Bunnie, Tarnovsky).

Like @Matthias247 said, QNX and Green Hills are easy to get up to speed on from other RTOSes, well documented, and, arguably most importantly, have name recognition, so it'll be cheaper to get into hospitals, easier to get through the FDA certification process, and easier on your chequebook when you get your liability insurance. @viraptor offers good advice as to why RPis are a dumb dumb dumb idea to base your product on (who knows when they'll EOL their product?). Every component on your BOM should be from a reputable vendor who can tell you with the utmost confidence how long component X will be in production, the lead times typically seen, etc. @xzion did class B/C devices and mentioned EOL'ing, so take his advice too. I'd even PM him and offer him a chunk of change to audit your unit before cert testing. ($5k for someone to catch the EMI issue your engineer didn't, because his H-probe was oriented in the wrong direction, is worth every cent.) As a manager, I'd actually keep a resource on retainer who has taken an FDA embedded product to market as a consultant for my engineer to use as much as he wants, regardless of the cost (presuming my engineer hasn't done so himself, in which case, yay!). @analog31's definitely got good advice for his field, but an FDA class C's requirements are entirely different from scientific equipment. Your $300k Keysight network analyser or $700k SEM might be okay to prototype against, but when my scope crashes a tech comes out and fixes it the next day - people don't die. When my Illumina sequencer fails, a post-doc might lose a week or two of work, bummer man, there goes his JAMA article, but no big deal. You can shop a proof of concept around for kit like that (and likely have them take delivery of your rev 1 based on that same platform!), but I'd be very, very concerned about EOL'ing anything on my BOM.

Make your BOM as jellybean-friendly and as reliable as possible, from the best vendors you can afford. I strongly advocate ARM for the following reasons. 1) IDE - Keil's MDK is basically the best IDE I've ever used (and I've used a lot). I can't even begin to delineate the functionality of it, just download the demo. It's better than having a bond-out chip and an in-circuit emulator with live debug. Really, it's crazy good and it's "free" (lacking some functionality) because ST subsidized it as a loss-leader (the first hit's always free...) 2) CMSIS-RTOS has a not-too-steep learning curve, such that you could swap in any given engineer who's worked on QNX4 and have them jump right in (for me, the learning curve was about as painless as Java 1.6 -> C# 2 => ~2 weeks). It works on any M3 or higher (you aren't locked into any specific vendor), and even fully licensed the IDE is reasonably priced, as is the OS. So you don't run into the "Jane Street OCaml / Haskell shop hiring problem" of having a small pool of very, very talented applicants who are all in high demand as a result of their specific skillset. A good engineer won't have a problem. 3) Again, no specific vendor! If ST goes out of business (haha yeah right), NXP (formerly a division of Philips) would definitely supply you with the same chip, with the same pinout, same package, same everything. Model everything on your BOM with a contingency in case lead times surge unexpectedly.

There are niche RTOS solutions out there (FreeRTOS works on ARM with a modicum of ST documentation. ST, Mentor, NXP, and all the other name brands use it as the de facto standard if you're not going to go with a 'name-brand' RTOS.) eCos is probably the second pick, with more of a hacker-community vibe to it, which is probably good for getting that roto-copter's altitude up an extra few hundred feet but not really something you want your dialysis machine to be running on.


Here's an idea of what you're looking at (this is based on federal work, first from an engineering and then from a bean-counter POV, but I've taken a Class B to market on the software side and it's got enough parallels that I feel this advice is pertinent).

Your cost breakdown for first-product-to-market is going to be roughly 40% NRE for all the licensing: a board design engineer for your board layout (Cadence OrCAD / Altium 16 license), a software engineer with embedded experience (a good indicator is that he's taken a lot of EE classes, or, on the other end of the spectrum, has gone to the graduate level in theoretical computer science -- I've had great experiences with employees that fall into those categories) + Keil IDE or whatever, and an industrial engineer for your enclosures (Solidworks license). [You _might_ be able to bypass this last step, but purchasing authorities ask "Can we change this font color?" more often than "How efficient is it?" - humans are significantly influenced by aesthetics/how something interfaces with you. I'm guilty of this too: any industrial IP67 gear that feels chintzy, any relay that feels light, etc., I have a preformed bias against. (Aesthetics are a large part of Apple's value, along with exclusivity and status signalling; though when I had my Macbook Air it felt like I could bend it in two, it's been on my desk for 3 years, and my daily driver is a Thinkpad I've neglected so much I'd have gone through 3 or 4 MBPs by now.)]

Legal and insurance will predictably be a fixed recurring cost (maybe 15%). Certification and re-spin costs can range from 5% to a lot more. Don't skimp on stupid things to save a few k here and there - pay a firm to do pre-EMI testing beforehand. Buy brand-name everything, just as an insurance policy. $9 extra for that precision thick-film resistor @ 10k vs $.13x10 might save you $8.70 a unit, but think of the engineering time it takes to find why your chip is clocking out spurious data, but only once every 3 weeks at random times (oops, those resistors were 10% not 1%, which is normally fine except when you bring CE low for > x ns while doing Y, which heats up the bypass cap near Vcc, increasing the ESR, and whoops, your cap is drawing an excess of current that should have gone to Vcc! Again, the Keil IDE helps so, so much in finding bugs like that.) Uh, yeah, so: buy Nichicon and breathe easy. Use pre-certified industrial components that your competitors have already used if you can.

Office space can be finagled from other startups, usually at pennies on the dollar. The commercial real estate market even in NYC is soft, and I could get class-A space in midtown month-to-month at $3/sqft/mo (utilities and furnished). I'd imagine SF is the only place where you'd have difficulty finding a reasonably priced space from someone who wants to let go of 800 or 1000 sqft. The rest is going to go to your marketing and/or sales team, who will ultimately make or break you. Not for the faint of heart. If you don't have a mentor/family friend/guy on your board with existing contacts, or know someone on the Cerner/EPIC/Meditech sales team with a big fat Rolodex, you're going to be in for a tough, tough time. In government contracting (SBA, tiny contracts, fighting over the scraps really), we used to keep former decorated career military men on our board. They'd pocket $180k a year to literally fly down once every quarter and play golf while we got access to their Rolodexes - medicine's the same.

very sound advice, thank you!

Thanks for your input. (1) resistive screens and some capacitive screens can handle rubber gloves fine. (2) for our use-case (non-invasive blood sugar measurements) this is a non-issue, but out of curiosity: how is a touch screen different from a physical button (or the whole case) in this regard?

Have a look at panels on devices like ECGs. Usually these don't even have real buttons exposed. A common approach here is to make the whole device as smooth as possible so there are no uncleanable holes / ridges. For example, displays will actually be behind a transparent part of the case, not sticking out / in a hole. Same with buttons - all of the actual switches will be behind pillow-like buttons built into the case.

See http://g02.a.alicdn.com/kf/HTB1fNYoHVXXXXbkXFXXq6xXFXXXt/Fre...

Edit: also, when considering industrial / medical devices, keep in mind that most touchscreens are not meant for the conditions these devices are in. You should be able to splash water / gel / oil / ... over it and just wipe it off afterwards. They should also survive people who think they just need to press harder if it responds slowly. Simple buttons (with an extra protective cover) are more likely to survive this. Next time you're in a hospital, check out how many of the button labels are completely worn down.

It should also be noted that a common practice in medical environments is to cover any control panels with a static-cling plastic sheet (usually light blue color, I imagine everyone is using the same brand) that can be quickly replaced between patients. So make your legends large and high-contrast.

We used Linux/C so we didn't kill people.

I'm not sure that C is the best language for life-critical high-level applications... it's far too easy to shoot yourself in the foot.

What alternative would you recommend?

Pretty much anything!! Java, C#, Rust, Go, OCaml, Lisp, ... even C++ would be a much better choice.

Check this out: http://tobyho.com/2008/03/30/how-safe-is-your-programming/

There also used to be a website that ranked programming languages according to various parameters (which is faster, which is safer) by asking visitors pairwise comparisons (Which is faster - Matlab or C?). I was going to link it but can't find it unfortunately. Anyway I'm pretty sure C would come second-last in the safety ranking, right above assembly.

Ada, frankly.

I work in scientific equipment, and there is a huge trend towards devices containing a touch screen with some amount of computing power. I take a look at what competitors are using, and have opened up some gadgets. The two main things for newer equipment seem to be:

* Android, when the product offers a host of mainstream computing features. I think the rationale for going with Android is: "The user already knows how to use it."

* Linux, when the product is cheaper, and has a more bare bones or restricted function set. I don't know for sure, but I think that a Linux build can be more stripped down. Little boards like Raspberry Pi and BeagleBone abound, and if nothing else, you can create a nice proof-of-concept with one of those. I've actually done this, writing my code in Python, with the expectation that the "real" programmers would turn it into something else.

I'm not in the skill areas that actually develop this stuff, so I don't know any lower level details. My job tends to be supplying the theory of operation for the actual measurement.

Both systems use SOMs (system on a module) that are typically sold with the software tools to get you started, such as a working demo OS. It is rare to see real-time stuff done on the main computer. Since you're usually talking to some sort of device hardware anyway, it's usually more convenient to put the real-time stuff on a separate microcontroller that has little or no interrupt overhead to deal with.

Yep, I bought a SOM for testing. It can run Linux, Android 4.4 and Windows 7 Embedded. Windows was pre-installed so I gave it a test-run and it looks and feels very 1990s. Hence my question.

There was a previous generation of products that used embedded Windows for the same reason: "The user already knows how to use it," plus there was the hope of being able to transfer Windows programming skills and tools to the embedded target.

I don't want to sound too negative, but if you have moved to a hardware company then the resident Electronics Engineer (EE) should be determining much of the embedded hardware configuration.

Sure, the choice of CPU architecture, OS, GUI widgets etc. will/should involve you, but hardware choice is determined by user requirements, GUI, reliability (watchdog timers, failure modes), power requirements, BOM costs/long-term availability, IP ratings, enclosures, EMI, electrical safety etc. That is a totally different discipline for which you need an EE, and it can seriously backfire and fail to get approval if not done correctly.

Saying all that, working as a programmer in a hardware business will teach you a lot of the above and give you a bigger influence in future projects.

Thanks, but the Electronics Engineer will ask you what you'd like to run on this device and then pick a suitable piece of hardware. Cost is not our biggest concern.

Depends on what reliability / response time / etc. you need. The requirements you list are not very specific. You should also look at what interaction you need and what kind of regulation applies to your device (since it's a medical setting, this may influence your choice).

You can potentially go from bare microcontrollers running something like http://femtoos.org/ up to small ARM devices running Windows. You can do everything on one chip, or split the device drivers onto separate chips and drive them over I2C where specific timing is crucial.
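If you do split timing-critical drivers onto separate chips, even a trivial checksummed frame over the link helps catch corrupted transfers. A minimal sketch in C (the frame layout, command codes and sizes here are invented for illustration, not from any standard):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame: [cmd][len][payload...][checksum] */
#define MAX_PAYLOAD 16

/* Simple XOR checksum over a byte range. */
static uint8_t checksum(const uint8_t *buf, size_t n) {
    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum ^= buf[i];
    return sum;
}

/* Build a frame into out; returns total frame length, or 0 on error.
   out must hold at least MAX_PAYLOAD + 3 bytes. */
size_t frame_build(uint8_t cmd, const uint8_t *payload, uint8_t len, uint8_t *out) {
    if (len > MAX_PAYLOAD)
        return 0;
    out[0] = cmd;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++)
        out[2 + i] = payload[i];
    out[2 + len] = checksum(out, (size_t)(2 + len));
    return (size_t)(3 + len);
}

/* Validate a received frame; returns 1 if length and checksum match. */
int frame_valid(const uint8_t *buf, size_t n) {
    if (n < 3 || buf[1] != n - 3)
        return 0;
    return checksum(buf, n - 1) == buf[n - 1];
}
```

The receiving microcontroller simply drops invalid frames; the sender retries, so a glitch on the bus never turns into a bad command.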

There are many options - I'd start with listing exactly what you need to support (display, io, inputs, timing, single/multi chip, what timing / other guarantees, hardware watchdogs, failsafe states, etc.) and then choose hardware/software based on that.

If you want to produce many devices, I'd stay away from consumer devices like RPi. Simply because you may get stuck with a supply shortage. You can pretty much always order 1k of popular chips, even if it's a shipment from Asia. You may not be able to do the same for a popular RPi model. Also some RPis will get phased out / discontinued at some point and for a medical device you may need to recertify the new model before release.

I work at a medical device consultancy, but we're mostly developing custom Class B/C devices at the bare metal level. It sounds like you'll be developing a Class A device and cost isn't a huge issue so I'd probably recommend going the Android route. Find a reputable supplier who's worked with medical devices before, and check their availability guarantee (since the production lifetime of most Android devices is short).

Class A basically means follow common sense, so as long as you document your architecture/design, use source control and test critical parts of the code, you should be right to use whichever Android development framework you want.

I work at a company that's in the food industry.

Currently, we're running Linux with Qt 4 on PowerPC.

We're moving to a newer CPU (i.MX6 quad core). Freescale announced that they would continue manufacturing and supporting it for 15 years and it comes in industrial grade and consumer grade packaging depending on your needs.

We're moving away from Qt for the GUI. HTML5 is the way of the future. ;)

For people who are starting out, I'd recommend using Yocto. It's a build/development environment for embedded Linux. Look it up.

Edit: By the way, if it's not real-time critical, there's nothing stopping you from using any programming language you like, except for how difficult they can be to cross-compile.

For hard real-time stuff you need a dedicated rt system and you'll have to use C, C++ or maybe Rust.

For soft real-time stuff you can get away with RT-patched Linux. Any language with GC is generally bad, with the exception of Erlang, Go and Nim, where the GC is optimised for low latency rather than bulk throughput.

Why are you moving away from Qt?

A few reasons:

* It's easier to find people with JS skills than C++ skills

* It takes ages to build Qt and it's unreasonably hard to compile. Building the Linux kernel is far less painful.

* It takes ages to build a single Qt project and we have multiple architectures to deploy to (ppc, arm, x86).

* Remote connections are trivial. People only need a browser to connect remotely.

The people at the company who have been using it like it a lot. They just have to put together a few tags and boom there's your GUI.

The GUI has access to some system variables via WebSocket. We have a two-way binding abstraction so people don't have to think too much about events and such.

I wouldn't recommend HTML for a small company though. Better stick to Qt if you're a small group of competent C++ programmers.

Is most of your gui simple forms and data? I've done critical care monitors and photobooths, and both seem too complex for html to me. I'd be afraid the HTML would be too slow to process and display, too eager to show scrollbars, too likely not to be optimized for the embedded gpu, etc. Have any experience there? The last time I tried going higher level I tried embedded flash, and that was too limiting.

Just for the record I'd never do an embedded gui in html. Well maybe not never, but not until it's common. Sounds like a nightmare of layers and compatibility issues. If you run into a problem, where do you even start to look?

I've been doing straight opengl-es on my devices and like not being locked into a gui platform. It makes even more sense to do it this way when you've got external buttons as inputs, because you're not even able to use slots and signals, which is the most useful part of qt.

On an embedded device you're gonna be shipping the runtime environment (aka the browser) also. So the typical challenges of compatibility with HTML5/JS of many moving target platforms doesn't really exist.


Forget everything you think you know. RPi is great for hobbyists but not what you want to put in your final product, or even really your prototype. Others have already mentioned the reasons. In web development, a commercial-quality stack and a hobbyist stack are essentially the same. With embedded development, they're far different and the commercial stack is still largely vendor-driven.

Call up TI and Freescale, let them know what you're up to, and they'll come demo/loan a few products. Talk to their engineers about your product at a high level so they can immediately drill in on a few options, then talk about hard requirements and even personal preferences/interests and they'll be happy to make recommendations on hardware, OS, language. The naming schemes and jargon can get confusing and features seem to overlap, so make sure to ask lots of questions.

Oftentimes the CPU will determine your OS; some will be more stable and have more complete drivers on Windows, some on Linux, some on specialty OSes. Of course you could take the Linux Kernel and tune it yourself, but unless you really know what you're doing that's not going to be the easiest route. Oftentimes you'll come across sites/videos showing how to get unsupported OS A to work on CPU B and get excited. Ignore those, as what the video doesn't show is all the things that still don't work, and that rabbit hole goes deep.

In general for your first project try to avoid straying too far from the beaten path and the recommendations you get from your vendor.

In addition, if you're still reading this, be ready for this to be not-the-most-interesting software project. Most of it will be reading vendor documentation and hunting down config errors. And compiling. And figuring out FDA stuff.

Coding-wise you probably won't learn a whole lot. You'll be using old technology and likely limited from using any fancy techniques you want to cook up. Accept that in advance. Thrive in that actually; force yourself to take the safe road software-wise so you can take the time to read through FDA docs and understand all the lingo better. Also spend time with your EE and learn more about that side of the coin. Spend time with the customer-facing side and learn the market. Learn some actual medicine so you're seeing the device as a tool rather than as a platform. All this stuff is way more valuable in the medical field than knowing the latest five JS frameworks (or Haskell plugins, or ruby test platforms, or whatever software-y stuff floats your boat).

Yes, I'm still reading this and I'm not afraid of working by the rules (or learning them). I'm just surprised how fragmented the whole embedded device stack is. The whole software development process could (and should) be more defined. Let me gather some more experience and see if I can come up with something more clever :).

In today's world, a truly embedded device implies an original firmware driving all the hardware on the device.

These are not so common anymore. Most devices use existing OS platforms depending on the task:
- time/resource critical? real-time OSes
- GUI/multimedia/telephony? Android
- a platform for your app? Linux

Dev tools are very dependent on the hardware platform. C is common, as is C++. This is also very vendor specific: intellectual property add-ons for different micros, etc.

You really should consult an expert to get guidance - you can probably do the work yourself, but making the right initial hardware choices matching the requirements will help get everything else right afterwards. Once you have hardware chosen, the rest will fall into place.

I work at a research and design firm, our projects are making custom hardware for people who don't have the expertise or time. Our most frequent customers? "We got this started and now ..." Save yourself the headache of getting half way into the project and discovering a problem you can't solve, then need to hire a consultant to fix it, which will entail redoing and eliminating features because the initial decisions won't support the desired outcome. Make a consultation, and get set on the right path from the start.

I already talked to a bunch of consultants and everyone has a different "right path". Hence my question here to get it in front of a lot of experts.

The current state is that there is no real current state of programming on those devices. In the real embedded device field (things that need to be cheap but still need to fulfill mission critical tasks with more or less realtime requirements) you have choices between lots of CPU architectures, dozens of realtime OSes or directly going the bare metal route and you will mostly develop in C (or maybe a subset of C++ - I guess most embedded compilers are still not up to C++11).

Depending on your safety and realtime requirements you might be very restricted on component choice as well as coding style. E.g. for a deterministic system, heap allocations might not be allowed at all. I guess that should be the case for most medical devices.

However your project sounds more like an embedded device in the sense of "user can't modify it", but it is still running some PC/Smartphone like applications. For that kind of stuff Android and Linux are often seen (in order from most-consumer like to most realtime capable). You get the benefit of being able to reuse standard drivers, frameworks and applications but being able to guarantee that it will always work is basically impossible.

Many devices therefore also use a multi-CPU architecture, where the critical stuff is running on a microcontroller with an RTOS and the fancy UI is running on a speedier SoC with a normal OS.

QNX and Green Hills Integrity are somewhere in the middle, providing a realtime OS which still incorporates some consumer features; they are often chosen if a realtime device needs to draw the UI directly.

You should check out the detailed requirements for your project and what would be possible and accepted. I guess medical has some quite strong regulations for certification. And I'm sure using normal consumer chips (no extended temperature range, no certified manufacturing process, ...) will often simply not be possible.

> The current state is that there is no real current state of programming on those devices.

Yes, that's what I suspected, but I feel there _should_ be some advice/best practices.

I wonder if there are any industry-focused resources (meetups, mailing lists, etc.) available, to offer specific guidance.

The mainstream OS choices are: Wind River VxWorks, Green Hills Integrity, BlackBerry QNX, Linux, Android, and (god forbid) some variant of Windows.

A great deal of embedded software is still done in C/C++. (Embedded is not a good environment for dynamic languages.)

Can I ask why you dislike dynamic languages in an embedded environment? Or why you think they are 'not good'?

Dynamic languages have unpredictable usage of memory. This is okay when you have many gigabytes of memory. This is not okay when you have a handful of megabytes (or a single gigabyte) of memory.

Being able to accurately profile how much memory will ever be in use is very valuable. Being able to prevent unexpected usage or stop-the-world collection issues means your device is that much safer.

Also, static languages are usually faster and more predictable, to a certain degree. This is advantageous in an industry that would rather spend years developing something than have a small chance of killing someone. Bugs are unacceptable.

If you're willing to pack an Intel Atom or something into your device, fine. But if you can get by with a tiny ARM chip, why shouldn't you?

But if you can get by with a tiny ARM chip, why shouldn't you?

Features, complexity, cost/availability of humans that can comprehend it, availability of existing software components, simplicity of software development process, etc.

ARM chips have near feature parity with Atom chips, except for the x86 architecture and raw power. ARM chips and their implementations are less complex, not more. Software components can be a bother, but ARM runs Linux perfectly well. If Linux doesn't suit your fancy, there are plenty of RTOSes waiting for you.

And I kind of touched on the simplicity of software development. Yes, it's possible to quickly shove in a really tiny x86 computer and be done with it, but it ends up being cost prohibitive and energy hungry. Embedded environments are a place where you can get by with a tiny ARM chip; they're power efficient and cost effective. If you're in the embedded field, dev isn't as simple as a REPL, you have to put it directly on your device or otherwise emulate your device in some way. This forces you to come up with robust solutions.

Dev in this environment is already hard. It has to be robust anyway. Maybe we should pick the right tool for the job while we're at it.

And I am happy using a robust solution that I know won't fail me. If you're in a situation where failing to write bug-free code can kill, you would be too. Medical devices are one such field.

(But to each his own, of course! You can do awesome things with any tool.)

Complexity refers to overall cognitive load, hassle and lead time for developers and the business as a whole (hardware sourcing, cross-compiling, testing, etc.) and not just chip features.

Hard limits to processing power, memory, energy availability, etc. do exist in some applications, but the OP doesn't seem to have these issues.

Depends on how embedded. For something really mission critical you don't want any dynamic malloc at all even in C code. Allocate a set amount up front and manage that directly. For an android phone on the other hand, no problem.
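A sketch of what "allocate a set amount up front and manage that directly" can look like: a fixed-size block pool with all storage reserved statically, no malloc anywhere. The block count and size here are arbitrary:

```c
#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE  32
#define NUM_BLOCKS  8

/* Each free block doubles as a free-list node; the union keeps alignment right. */
typedef union block {
    union block *next;
    uint8_t data[BLOCK_SIZE];
} block_t;

static block_t pool[NUM_BLOCKS];   /* all storage reserved at compile time */
static block_t *free_list;
static int initialized;

static void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
    initialized = 1;
}

void *pool_alloc(void) {
    if (!initialized)
        pool_init();
    if (free_list == NULL)
        return NULL;               /* pool exhausted: caller must handle it */
    block_t *b = free_list;
    free_list = b->next;
    return b->data;
}

void pool_free(void *p) {
    block_t *b = (block_t *)p;     /* p must have come from pool_alloc */
    b->next = free_list;
    free_list = b;
}
```

Exhaustion is an explicit, testable code path rather than a surprise out-of-memory condition, and worst-case memory use is known at link time.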

I really like working with Ruby. But then I think about the way I use it. Web applications under constant flux. When a new version is released, I'm looking to start using it ASAP.

But when I think of embedded devices, I'm looking across my desk at a Cisco router with an eight year old firmware, and an HP LaserJet I recall updating at least 12 years ago. I shudder at the idea of writing Ruby and having it run in production for 12 years.

This doesn't criticise "dynamic" languages directly - but I can only possibly think of C, and similar languages, which wouldn't suffer this issue. Hopefully Rust can improve this situation.

That said, I'm on the fence about Erlang in this situation.

I'd go with a Raspberry Pi or other linux supported board. Particularly for prototyping.

If you're looking at building an interface, are you considering web-based?

I'm going to plug my own little project here, I'm also new to embedded development, but hate how everything is hard-coded to the platform you build on. I built a hardware agnostic library to make it easy to run your business logic on different hardware. It's in javascript (I've asked anonymous_ian why he doesn't like dynamic languages in embedded). http://getfavor.net

Keen to hear your thoughts.

Generally for embedded work, you want to avoid any and all complexity as much as humanly possible. In most critical embedded programming it's all statically allocated with shallow call stacks (e.g. a call depth of more than 8 is forbidden).

For devices that connect to the internet, I prefer a two processor approach where a very simple processor does the critical work (eg: being a thermostat), and a more general purpose processor (raspberry pi, etc) handles the human interaction (GUI or webpage) and uses your more standard programming approaches. This way if the main CPU suffers a fault, it doesn't make the device worse than useless as the simple processor is doing things like making sure the commanded temperature is not too hot/cold, the temperature actually is changing when the heat is on, etc.
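The safety logic on the simple processor can stay almost trivially small: clamp whatever setpoint the UI processor requests, and latch a fault if the plant doesn't respond. A hedged sketch (the limits, tick counts, and fixed-point temperature format are all invented for illustration):

```c
#include <stdint.h>

/* Invented limits for illustration */
#define TEMP_MIN_C    5     /* never let the setpoint go below this */
#define TEMP_MAX_C    35    /* ...or above this */
#define STUCK_TICKS   600   /* heater on this long with no rise => fault */

typedef struct {
    int heater_on;
    int fault;
    int16_t last_temp_c10;   /* tenths of a degree at last check */
    uint32_t ticks_no_rise;
} monitor_t;

/* Clamp whatever the fancy UI processor asked for. */
int16_t clamp_setpoint(int16_t requested_c) {
    if (requested_c < TEMP_MIN_C) return TEMP_MIN_C;
    if (requested_c > TEMP_MAX_C) return TEMP_MAX_C;
    return requested_c;
}

/* Called once per tick with the measured temperature (tenths of a degree).
   Latches a fault if the heater runs but the temperature never rises,
   i.e. the sensor is stuck or the heater is broken. */
void monitor_tick(monitor_t *m, int16_t temp_c10) {
    if (m->heater_on) {
        if (temp_c10 > m->last_temp_c10)
            m->ticks_no_rise = 0;
        else if (++m->ticks_no_rise >= STUCK_TICKS)
            m->fault = 1;    /* latched; main loop shuts the heater off */
    } else {
        m->ticks_no_rise = 0;
    }
    m->last_temp_c10 = temp_c10;
}
```

Even if the GUI processor crashes or sends garbage, the worst it can command is a clamped setpoint, and a dead sensor trips the fault latch.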

With dynamic languages or even using dynamic memory you can run into unpredictable pauses that can literally cause things to catch on fire (see 3d printer extruder fires).

A good example of the lengths you go through for embedded programming is the JPL coding standard: http://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf
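To give a flavor of what standards like that demand -- fixed loop bounds, no recursion or heap, checked return values, liberal assertions -- here's a hypothetical snippet in that spirit (not taken from the JPL document itself):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SAMPLES 64   /* every loop gets a compile-time upper bound */

/* Average of n samples: no recursion, no heap, loop bound enforced,
   and an error return the caller is expected to check. */
int bounded_average(const int *samples, size_t n, int *out) {
    assert(out != NULL);                  /* liberal use of assertions */
    if (samples == NULL || n == 0 || n > MAX_SAMPLES)
        return -1;                        /* callers must check this */
    long sum = 0;
    for (size_t i = 0; i < n && i < MAX_SAMPLES; i++)
        sum += samples[i];
    *out = (int)(sum / (long)n);
    return 0;
}
```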

edit: fixed grammar

It's a good idea but I'd be worried it isn't "snappy" enough on (somewhat slow) embedded devices.

I think it depends on the application, I agree it isn't for everything.

What I find a bit funny about some embedded stuff is that I don't think performance is always as important as in other systems.

Consider something like Nest. If it takes 50ms or 500ms for the thermostat to respond, what are the implications? Compare that to a webpage, and all of a sudden performance isn't a prime factor. Another example is a decibel meter I'm currently building. I need it to send data to a server when sounds reach above a certain level. How important is performance in that scenario? The same goes for Amazon's nifty little ordering button; I'm quite sure performance doesn't really matter there.

So, in devices where performance is a primary concern, at the moment, javascript probably isn't the way to go. Though, with progress in things like asm.js, that may change in the near future too.

I'm not suggesting this is the tool for everything, but as I look at it, IoT probably has fewer use-cases where performance is a primary concern than other environments.

Of course, probably all safety systems, self-driving cars, drones, etc. etc. are not recommended to be done in JS... yet.. :)

For the thermostat, how much the latency matters could well depend on the context.

If you're recording average ambient temperature, it probably doesn't matter much. If instead it's part of a physics experiment, it could matter a lot. If it's part of some industrial control system [1], it could be critical.

For the ordering button the time taken doesn't matter to the user, but the less time the hardware spends awake (especially the WiFi radio), the less power it consumes, so it's still a factor in battery life.

[1] https://en.wikipedia.org/wiki/Control_theory

Reading these comments tells me that the state of embedded programming is as strong and as fragmented as current web development.

My company uses VxWorks, MQX, Android, Windows CE and Linux. We're dropping VxWorks because it's costly and cumbersome. MQX is being replaced by whatever Freescale/NXP is pushing now, we put a lot of resources into the older version of MQX and aren't following them into their new incarnation. We use Android for special devices, but it seems to take a good amount of work to put it on custom hardware.

The OS that makes the most sense for embedded work nine times out of ten is Linux. It's widely used, documented enough, and has the most vendor support.

A good setup? You can get an AM335x EVM with an LCD; that's a similar class of ARM processor to the Raspberry Pi's. I believe the i.MX6 has an EVM with an LCD as well. Bottom line, you should talk to vendors to see what they have to offer. Since it sounds like you have limited experience in this area, working with a vendor that will give you support is probably the most important thing.

Your developer tools are entirely dependent on your OS and chip. Ultimately, doing embedded work you're most likely going to use C/C++, but if you're on Linux or any modern operating system, then you have the ability to use a scripting language.

I recently implemented something like this, albeit for a research-use-only device. We used an embedded Intel board with a separate PCAP touchscreen connected to the LVDS+USB interface. X.org supported it out of the box. Our product runs on Linux/C++ with an HTML/JS UI. There are a couple of embeddable web browsers; I've been very satisfied with a simple X11 browser based on the Chromium Embedded Framework.

PCAP touchscreen worked well with double-gloved fingers and there are touch controllers that will reject a stray water droplet on the screen.

I'm not sure about the OS considerations for medical device use, but for Linux you might consider the Yocto Project, which is designed for this kind of stripped-down deployment. It will limit the number of moving parts you have to validate. If you're less devoted to the open-source route, check out Qt's Boot to Qt offering. Qt UIs are common in this field and I believe there are supported boards at a range of price points.

You wouldn't want to hitch your design to a Raspberry Pi or BeagleBone or such, because of the short support lifetime of individual models and components. Part of what you pay for with the industrial vendors is a commitment to supply the system for 5-10 years.

If any of this is helpful, drop a note using the email in my profile.

Hi, I made a few devices like that in the medical sector. Back then it was Windows CE; today Linux would be the way to go, I guess. If you are like me, using Raspberry Pi & co. after decades of Windows development, check out VisualGDB. Microsoft recently released a similar product as a native Visual Studio plugin. I understand they plan to support cross-compiling as well. That will be so awesome.

No one has tried Win 10 IoT yet? ;)

We used an Android tablet for the UI here: http://www.centervue.com/productdb19.html?id=919

Other stuff is separate.

That looks great. And did you use C++ or Java?

Android is programmed in Java. Other parts of the system were done with other languages.

That architecture worked for us - it may well not work for other needs, depending on communication latencies, costs, and so on.

You may be interested in Reach Technology's display modules. You program the GUI, send it some commands, and it will display whatever you want. You can write some really snazzy Qt touch widgets. I interned for these guys a couple of years ago and it sounds very similar to what you're asking for. http://www.reachtech.com/

This is a framework for medical software development; they have a few papers discussing the experience and benefits:


Raspberry Pi isn't an enterprise/medical-grade device in my opinion. I wouldn't want to have an SD card as primary storage either.

Do you need to integrate a lot of sensors or have a lot of IO?

No, data generation and collection is done by other components. I only need to send a start signal to those (through USB). This is only for UI.

You may want to consider something like RS-485 instead of USB. One less attack vector for bugs and viruses, and no users plugging in mice or whatever and expecting them to work.

Actually, if that is your use case, why not use a tablet? You can get an enterprise level of support on the hardware side, and run Linux, Android, or Windows.

Usage of the Leap Motion controller should actually be encouraged more, right here!

I don't think there's any simple answer to your specific questions based on what you've provided so far. I'm going to word vomit a bunch of stuff that might, with some Google/Wikipedia searches, help give you a better view (I hope!). I hope this is helpful.

So much depends on the stage of the development process you're in, what the goal is of this project (i.e., internal testing, research use only, human use), when you're going under design control, and what market/regulatory framework you're working under (FDA, CE, CFDA, etc.).

Your requirements documents are a good guide here. I personally separate them as Engineering Requirements Document (ERD), Software Requirements Document (SRD), and Market Requirements Document (MRD); your questions would touch all of these.

You've hinted that you are wanting a display and also wanting to do calculations; as others have suggested, good practice would be to separate this functionality. Your ERD/MRD should specify a minimum duration for projected parts availability. Dealing with part variations is a pain, but having to replace an end-of-life'd (EOL'd) part is worse. For your own personal sanity, you probably shouldn't be looking at consumer-level parts and instead look for more industrial-style parts (a weak example might be to use a BeagleCore instead of an RPi for an internal project). You'll need to be mindful of your environmental requirements too; if, for example, your design called for a sealed enclosure, that could greatly impact the parts you could include in your design due to thermal management. Even when using consumer-level stuff, say USB, you should look for the industrial/ruggedized versions (e.g., ones with retaining screws).

Touchscreens for medical devices are, in my experience, a mixed bag depending on the application. If you're using a good design, they can be easier to keep clean and disinfected vs. buttons. On the other hand, depending on the touch technology you use and the type of information you're displaying, the display may not look great (i.e., a 2D plot can look fine on a resistive touch panel, but video may look foggy). Depending on what your users are doing, you could have smudges of some sort of fluid (or boogers, or giblets) obscuring info on the display. Also, I think users expect a modern multitouch experience now and I personally haven't been thrilled with that style of interface on anything other than capacitive displays.

If you're using a contract manufacturer (CM), they are a great resource. You'll be dealing with them during design for manufacture (DFM), but it's a good idea to engage them early so that you can design for DFM (if that makes sense). I don't know what your expected volumes will be, but on things other than durable goods (like scalpels), "high volume" medical quantities are considered "low volume" for other areas and this will greatly influence your design. I also found it frustrating because you're limited to suppliers/distributors/CMs/whatever that will be satisfied with the lower volumes.

Handwavy answers are that your safety-critical stuff is going to be hard real-time and running an OS developed under an appropriate quality management system (QMS) like VxWorks, INTEGRITY, QNX, etc. I don't have experience with it, but there is also FreeRTOS with a SIL (safety integrity level) 3 SafeRTOS option that could be an interesting contender. The software you develop will also need to be developed under a QMS as part of your broader software development lifecycle (SDLC) and greater product lifecycle management (PLM). There are guidelines to use software of unknown pedigree (SOUP) by bringing it under your QMS, but this can be undesirable depending on circumstances. Most commonly the software is developed in C or C++ with an appropriate coding, review, and testing standard; for a greenfield project I think that considering Ada/SPARK or Java (with the appropriate compiler/VM) could be smart.

For things outside of the safety-critical areas, you have greater options, and it might be largely sufficient to develop under a QMS. You're going to need to be very mindful of software licenses to make sure that they are aligned with your project's requirements. If you're set on Linux then Yocto would be worth exploring, but NetBSD would be my first consideration due to licensing. For your GUI, FLTK could be worth considering if it met your requirements. Even if your product is Class A, it may be helpful to keep the more stringent requirements for Class B/C products in mind, so that you can (when and if it makes sense) develop expertise and institutional knowledge for Class B/C products later on.

I'd recommend the Stanford Biodesign textbook as a one resource to help get up to speed: http://www.amazon.com/Biodesign-Process-Innovating-Medical-T...

Finally, considering your questions (and I say this with the kindest intentions), you should be participating in the process but not signing off on the decisions. If your company doesn't have the expertise in-house, it would be worthwhile to either bring someone on board or engage a consultant.

Good luck!

This advice reflects what I've seen in my reading on all kinds of safety-critical deployments. This, plus other comments recommending the simplest microcontrollers with a tiny RTOS or no OS, running simplified software. Good advice and good alias.

Thanks and thanks. To further bloviate, Tcl is my not-so-secret weapon. I first used it when doing exploratory work with some embedded one-off instruments and it made an impression because those doohickies are still running strong to this day, well beyond the design goal. They had sensors that were communicating over a serial console: Tcl made interfacing quick and painless, and a control GUI in Tk practically built itself. It's robust and a known quantity (flaws and all). There's also some level of je ne sais quoi where, like Lisp, I don't feel like I will paint myself into a corner: need more performance? drop down to C. language is limiting? make a better DSL. So I guess I'm either a fan or delusional (likely, both).

I don't personally know of anyone in medical using it, but your comment about no OS does prod me to say that I have wondered if running Forth on bare metal for medical is viable and/or desirable on a greenfield project. I'm not saying I would do it, but I would happily cheer on someone else. (OP, don't do this, I'm delusional)

And OP, I really hope that my wall of text won't be discouraging in any way. Personally, I found that documentation is (a la Homer Simpson) the cause of, and solution to, all of life's problems in a regulated environment. If you master that, document honestly and eagerly, it's downhill from there and life will be good.

I never thought of Tcl being used in safety- or security-critical systems. It's certainly simple and easy to use. We played with it for agent-oriented programming back in the day. It had serious, design-level issues for security and was harder to parse than some other languages (e.g. Lisp).

Rather than just a critique, I'm curious where and how you use it with what benefits it brings you? I could certainly see it in UI or untrusted parts. Maybe also during development in a way that auto-generates C or something that runs in production.

I only use it like you guessed (and what I think it's best/appropriate for): gluing things together, embedded scripting, and Tk on internal tools/gizmos and early prototypes. I wasn't clear earlier, sorry.

To be perfectly fair, on the decision matrix Tcl gets a helpful boost due to my own familiarity with it and its playing nice with C; if a butterfly had flapped its wings just a little bit differently, or I were starting out today, I would probably be using something else. Probably Java. Ada would be great, but the lack of ecosystem diversity makes me apprehensive.

Gotcha. Look into REBOL/RED while you're at it. RED is a REBOL clone/modification for system programming. It has Lisp-like advantages without the parentheses, plus a small footprint. RED is already used in one OS project. I keep thinking of modifying it for system use. People keep telling me to check out Nim, as it's like Pascal and Python combined, with extra safety, macros, and C code generation.

So, there you have it: REBOL, RED, and Nim. Maybe Ivory language from Galois, too, but you need to know Haskell for that.

I remember playing around with REBOL a long time ago and it seemed useful and practical, but for reasons long forgotten I never used it. Maybe licensing? RED does look cool and I hadn't heard of it before, thanks for the tip.

Don't have any experience with Nim, but I share your interest for the same reasons. Haskell and I don't get along, maybe I'm too old and set in my ways?

Did you catch the link to/discussion of Little shared on here a few days ago? It might be of interest. https://news.ycombinator.com/item?id=11530097

Me too on Haskell. Mainly plan to try other two. Saw Little but didn't really get it past adoption. Then, your comment led me to the Why page which told me that was exactly the point. Along with programming in the large support. Funnier when I found out the boss wrote that.

I'd say it's not a good language to go with today, for the same reasons I'd say Tcl isn't. It's an interesting improvement, though, especially for getting better syntax with legacy compatibility with field-proven Tcl modules. And Tk is still the shit for portable, easy-to-build GUIs. A Tcl shop should definitely experiment with it and maybe incrementally upgrade their code. I think I'd be fine with a more typed, efficient, and fixed version of Tcl for some apps, especially a command-shell replacement or prototyping system.

Just not production, unless it's non-critical like what you use it for. I don't even use it to glue to critical things, as the glue is part of the TCB to a degree. Gotta find close-to-metal, abstract, typed, efficient, and macro'd stuff to replace it. At least we have contenders.

Rosea (http://chiselapp.com/user/mangoa01/repository/mrtools/wiki?n...) is a tcl-based tool designed specifically for embedded medical applications where provable quality is critical. I think it deserves more attention.

This is really interesting. Do you know of anyone using it in production?

The author works for a custom embedded code shop, and I believe has been using it and its antecedents for years for his customers. He's on my short list of personal coding heroes.
