The Internet of Things – A Disaster (gekk.info)



With apologies to the late Mitch Hedberg, always design your device to fail like an escalator, not an elevator. When an escalator fails, it becomes stairs. When an elevator fails, it becomes jail.

Don't design a computer that is a smoke detector, design a smoke detector that happens to also have a computer. A kernel panic or complete loss of computing should leave behind a traditional smoke detector in a fancy looking case.


As an exercise in the limits of this advice, how does it apply to designing elevators, short of "never design elevators"?


Hard brakes that apply when the doors are opened manually, and an emergency panel behind which a small ladder is located.


The cost difference between an elevator and stairs is substantial, especially when building code requires you to have stairs even if you install an elevator.


By having a manual fail safe that operates mechanically. i.e., without power.


Controlled descent to the next (lowest) floor, at which the properly aligned doors can be manually opened.


I agree (of course). It's an even bigger problem for me with the Nest thermostat. Living somewhere that gets cold in the winter, I absolutely won't tolerate an added failure mode in my heating system. And I have other ways to monitor temperature that happen out-of-band of my furnace control.


A couple of years ago I had a 'failure mode' problem with my heating system. A winter storm knocked out power in my area. My heating system was a natural-gas-fired boiler that produced steam, which thanks to gravity would rise up through my radiators, condense to water, and flow back to the boiler. The power outage didn't stop the gas, so the only thing that kept my furnace from running was the electric thermostat that sent on/off signals to the electric device that controlled the gas valve and sparked an ignitor.

The thermostat itself had a battery backup, so it was trying to turn the heat on as my house cooled off. But the controller and ignitor had no backup. I couldn't even use a match to light the thing manually, because there was no way to open the gas valve.

I understand the safety implications of adding a manual valve and spark ignitor to a furnace, but letting my pipes freeze (and me too) also has safety implications. Besides, there's probably already a thermo-mechanical safety mechanism that would slam the valve shut if the furnace gets too hot.

Luckily, the power came back on in a few days, before my house got colder than light-jacket conditions. No frozen pipes.


Unfortunately, the ones that were immune to this used mercury switches, which I don't believe are sold anymore. We had an older gas heating stove with one, and it worked even when the electricity was out (particularly good in the winter, compared with heating with electricity). Eventually that part failed, and since it was an older model, the manufacturer had discontinued the part. This forced us to get a "new" stove, which now uses an electrical control system and doesn't have an integrated battery backup. Technically one could add a UPS, but since the stove also uses an electric fan it would probably drain that UPS within a few hours.


I have an electric heat pump for heating and hot water. The first thing I did when moving into the house was install a reasonably efficient, manual wood-burning stove, with 2 m³ of wood in the basement. I have had the heat pump fail in the middle of the winter, but the wood stove avoided any freezing disaster.


Of course, that assumes you're at home. I added a wood stove (in addition to a fireplace that I already had) partly as a backup. But I also travel a fair bit.


I don't know if these are available for boilers, but many natural-gas-fired water heater controllers are powered by a Peltier device, which is heated by a small pilot flame.


Any reason you didn't have a backup generator? I would think if you were at risk of frozen pipes without electricity, it would be a wise investment.


And so a problem caused by having a more complicated than necessary system is mitigated by adding in another very complex system for redundancy.

Now, this can work; indeed, as an engineer in SaaS, it's my job to make it work. But see, to make it work you have to hire engineers to nurse all those systems along.


That's the situation for pretty much everyone in northern states. A generator that's wired in with propane tanks is a pretty significant expense and I'm still not sure how reliable it all is if no one is at home.


If the control line for your furnace is just a 24V loop (or a dry loop), a good hint is that you can keep a traditional thermostat in parallel with your "smart" one - so that if either closes the loop, the furnace fires.

Then you can aim for "comfortable" with your "smart" one and "safe" with your real one.


That's probably true. And should also be part of the "smart" thermostat's design.


A Nest failure in brutal New Orleans heat while I am out for a few days might spell death for one of my cats. I don't feel like Google takes their products seriously enough to trust them with the lives of my pets.


This is a good point, but if I had pets, I don't think I would trust any air conditioning system at all to keep them alive for a few days.

But then I don't really know what it's like to own pets, and I don't know how housing in New Orleans works either.


I wonder how modern cars' assisted steering and brakes fail when they do.


You can still steer and stop a car w/out power. It just takes a lot more force than you're used to.


Also, in an ICE car, the pump that pressurizes the power steering/braking system is directly powered by belts off the engine. It's not electric, and a computer does not mediate the power transfer.

Which really makes me wonder what happens in a Tesla when the computer shits the bed...


Escalators have bad failure modes too. An escalator can function as stairs because it has a set of brakes. If those brakes fail, gravity will send everyone on the belt into a high speed collision with the floor.


Why does there always have to be one bikeshedder in every HN thread who reads too much into another's analogy and comments about how it's flawed/wrong/inaccurate, thereby completely missing the analogy's point?


Because those people haven't internalized the idea of an analogy. An analogy is necessarily an imperfect abstraction, and some people think finding those imperfections means they've somehow found a flaw somewhere. Their brain (not singling out our friend here, just people in general) goes towards the exceptions in a model, because it's far easier to find superficial flaws than it is to realize deeper meaning.


This is a wonderful observation. It is a lesson that I took far too long to learn as a young and somewhat insecure programmer, thereby becoming entangled in many unproductive arguments.


Perhaps not in this case, but analogies, when used as persuasion devices, can get quite dangerous. It's important to realize when they are false, or misleading, or overly reductionist, in order not to fall for them.

I certainly fell for a few false ones in the past. Sometimes the only deeper meaning is that someone's trying to manipulate you.


I also was just thinking about that poor woman who was killed in a Chinese escalator accident a few years ago, totally missing the analogy's point.

But that's just a problem with analogies, they trigger different ideas in different people's heads.


Just wanted to thank you for adding the term "bikeshedder" to my vocabulary.


Being pedantic can have its merits, but yes, it can get old.


The same is true of elevators, though, is it not? If we're postulating motor failure and brake failure, I'd still rather take my chances with a broken escalator.


There has to be way too much friction in the mechanics for that to happen. No?

Mitch. One of the greats. Uncopyable.


Nope - Elevator failure in Hong Kong

https://www.youtube.com/watch?v=ADb57ysvCbU


/s/Elevator/Escalator/


Do escalators not come with a ratcheting mechanism to prevent exactly that from happening? Seems to me like a serious design error if not.


I guess they were designed to run in both directions, meaning you can't mechanically lock the "down" direction.


I mean, it's probably a Chinese escalator. We didn't get the term "Shenzhen Quality" out of thin air.


That seems a bit of a misnomer when everything is made in Shenzhen.


Most escalators used outside of China are not made in Shenzhen.


Funny, it seems increasingly apt to me.


Just sayin', if I were building an escalator I would definitely use a wormdrive (or similar) gearbox, which physically cannot be backdriven.


I'll bite. The analogy has a failing elevator becoming a jail. That sounds like motor but not brake failure, i.e. the elevator getting stuck. If both motors and brakes failed, an elevator becomes a coffin not a jail.


There is a battle between the old-school people who want to do everything with microcontrollers and who measure RAM in kilobytes, and the new generation who want to embed a full microprocessor + OS in everything.

When it comes to capabilities, the people arguing for entire OSs generally win. Go with Linux, get an HTTPS stack, get XML parsing, get JSON parsing, don't worry so much about RAM. And heck, the price difference between a microprocessor and a microcontroller is minimal nowadays, a couple bucks in large quantities (especially if you have to outfit the microcontroller with more chips to add functionality).

The microcontroller people are right in one way, though. Their products are simpler. Battery life will be better (it is easier to control power usage when you start with nothing and build up than when you try to strip an entire OS down), and the code will be purpose-built, with less complexity to it.

The ESP8266, ESP32, and Arduino are one view of things, Raspberry Pi and "Things with Android Running On Them" are another.

Of course against all of this, the analog EEs laugh at us and continue to make things that just /work/.


I was hired to design embedded electronics in my last job. Along came a project. I would have designed it with one microcontroller in the 50-120 MHz range, or two of those to make it easy to deal with all the interfaces, and called it a day (on the electronics side), because almost nothing more was needed. It was basically just a glorified RS485 extender plus a Web -> RS485 interface. But our boss dictated the design: mandatory Linux and Python scripting, so we ended up with a Big Mama Ubuntu on a 1 GHz dual-core processor with 512 MB of DDR2 RAM... Because yeah, on the software side, you 'just' have to throw in ready-made drivers and link together pieces of ready-made software developed for the desktop. Supposedly...

The board ended up with 1500 components; it took one extra year to get just half of the features kinda working, and the rest of the features were discarded. And the power budget was blown, which probably led to interesting negotiations with the other parties involved, but I had already quit by then.

Obviously, 'embedded' meant very different things for me and for him.


It's fun being on the other end of the scale - the last major commercial project I did used an Atmel ATmega1284. Most other systems in this space go down the route of an FPGA, which is hell to support. I was actually amazed we got away with an 8-bit micro, but it was a great example of pushing the chip to its limits and comfortably achieving the goal of the project.


Let me guess: every missed milestone was traded with the customer for additional features (for free, of course)?

Some CEOs just try to postpone the day of defeat to infinity by allowing customers to glue stuff to the device. It's like the captain of a sinking ship deciding to freeze all the water in it - making it float by virtue of being an iceberg with scrap on the side.

It kinda works, Ponzi-scheme style, for a hellish year, and then it usually implodes under the weight of a crust of hacks.


So, a good case study.

At the end of the day, it has nothing to do with what's good design or with which generation's philosophy we're going with. It has a bit to do with bragging rights, but also any company selling a trivial doo-dad has a lot to gain on the vague importance scale by including a whole OS with their doo-dad. They don't quite know what they gain; they may not quite be thinking "now at least we're as important as a Russian mafioso running a giant botnet," but inherently the management has some kind of dream that's more likely to be satisfied by a million Ubuntu instances than by a million light bulbs.

Just for example, they haven't got the gall or the tech to push ads from your light bulbs YET, but I'd guess that's one of the things floating around some board tables somewhere ("but how to offer value with, hmm...").

Those urges won't go away. They care more about themselves than the public or their customers? Who'd have thunk.

Welcome to the "everything's adversarial" era...


> When it comes to capabilities, the people arguing for entire OSs generally win. Go with Linux, get an HTTPS stack, get XML parsing, get JSON parsing, don't worry so much about RAM.

Not where I come from. I have to fight to get 32-bit microcontrollers accepted. And there is zero reason to use 8-bit controllers over 32-bit controllers on new projects nowadays. Zero. None.

> And heck, the price difference between a microprocessor and a microcontroller is minimal nowadays, a couple bucks in large quantities (especially if you have to outfit the microcontroller with more chips to add functionality).

Not even close. An ARM Cortex M0 is 70 cents. The cheapest ARM Cortex A7 is $10.

And the point of the microcontroller is that functionality is part of the part and doesn't require external chips (or at least a very minimal number).


> Of course against all of this, the analog EEs laugh at us and continue to make things that just /work/.

Uh, yeah, no. LOL. We just had an entire article dedicated to the weird and wacky ways that things can fail in the analog world (corrosion, light, standing waves, etc.). https://news.ycombinator.com/item?id=14765868

There is a reason everybody moves as much of analog and RF circuitry into digital as they can as soon as they can.

Analog and RF are notoriously unreliable and disgustingly difficult to debug. Once things hit digital land, they become repeatable and debuggable.

Hunting down a stack smasher is nothing compared to hunting down some screwball oscillation in an amplifier.


Besides, the individual RF components like synthesizers, attenuators, switches, etc. that make up a system are all digitally controlled. You don't need much of a microcontroller to control them - I generally just use a PIC - but you need something.

And once you have a product with a serial or USB interface, some manager is sure to ask if you can add a LAN connection.


Yeah, and - to extend the smoke detector discussion - most modern smoke detectors are photoelectric and seem to have a somewhat involved state machine that periodically checks that the sensor is still working, calibrates for ambient light and component ageing, reduces false triggering, etc.


I was once told a long time ago that analog is proof that god hates us :-)


More capable devices are a liability if you don't really need them. If your product can work with a low end 8 bit microcontroller instead of a full Linux stack on a SoC, that also minimizes attack surface and rewards for attackers. It won't have enough resources to mine cryptocoins or run the Mirai botnet code even if it could be subverted.


Microcontroller? Often what you need is digital logic circuits, not even a microcontroller. (Of course, in your last sentence, you illustrate that even that may be too much.)


We're at the point where a microcontroller is simpler to manufacture and cheaper than a bunch of popcorn logic.


And in many cases, dramatically less reliable. If you care about such things.


Not every IoT company is fighting to get a full blown OS on everything. (speaking as a non-rep, please don't tell my boss I'm posting this :D) Here, I've noticed the hardware engineers regularly lambasting companies that "are just going to use linux," or think that the raspberry pi prototype they got running full blown linux means the solution at scale should be the same, when a simple microcontroller with maybe a hundred kilobytes max of RAM would do.


I generally fall into the microcontroller-fogey side of these debates. But the other side has an advantage in that "just use linux" is an easy, dumb choice that brings you into contact with the whole unix ecosystem.

There's always a certain cheesiness to your tools when doing what I think of as real embedded work. Probably the only nice part of it is the basic GNU compiler toolchain (if present). The rest -- including any high-level development tools -- is either closed-source, vendor-specific stuff of varying quality or low-quality, half-dead open-source projects.


Is there software that you can run on a Pi that emulates a microcontroller using the Pi's I/O pins? That might be an easier-to-use way to prototype, with the ability to restart the microcontroller while the host is still functioning. It could also be a nice way to emulate additional IoT devices in a neat little package, which might help simulations and debugging.


I think you're asking about writing "direct to the metal" software on the Pi, which I hope someone will address here. To answer your question indirectly, there are ways to bend Linux to do realtime like rt-preempt [0] but it doesn't go as far as RTLinux [1], which virtualizes the entire OS to run under a hypervisor that is essentially just a service routine for a hardware nonmaskable interrupt. Linux isn't even aware it's being preempted in this model. It's a beautiful approach, but RTLinux was proprietary and hard to write code for (because, interrupt handler), so it never achieved much traction.

[0] https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

[1] https://en.m.wikipedia.org/wiki/RTLinux
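
For what it's worth, on a PREEMPT_RT-patched kernel a user process can at least request a realtime scheduling class and lock its memory; here's a minimal sketch of that (Python, Linux-only, needs root or CAP_SYS_NICE - and this only reduces jitter, it doesn't turn Linux into a hard-realtime system):

    import ctypes
    import os
    import time

    # Ask the kernel for a fixed realtime priority (SCHED_FIFO).
    # On a stock kernel this helps a little; under PREEMPT_RT it helps a lot.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

    # Lock current and future pages into RAM so page faults can't add latency.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    MCL_CURRENT, MCL_FUTURE = 1, 2
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        raise OSError(ctypes.get_errno(), "mlockall failed")

    # A 1 ms control loop; real code would talk to hardware here.
    period_ns = 1_000_000
    next_tick = time.monotonic_ns()
    while True:
        next_tick += period_ns
        # ... time-critical work goes here ...
        time.sleep(max(0, next_tick - time.monotonic_ns()) / 1e9)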


I think the problem is the realtime.

You can certainly write software that emulates a microcontroller on the Pi, but the problem is that it can't guarantee timing, since it's running on Linux fundamentally, and other processes can pre-empt it.

For certain things it doesn't matter as much, such as for a door lock or something like that. Then you gotta worry about power consumption, since running cables to your lock is problematic, and it has to run on batteries for months at a time.

That's a reason why realtime controllers can't run on Linux - if you were controlling an industrial robot that controls welding temperatures and times, even being 300 ms off can drastically affect the finished product's fit and finish and the metal's heat treatment.

"This whole batch of parts worth hundreds of thousands of dollars went bad!"

"Well, the network is down, and the robot controller was busy trying to reconnect to the network, so that's why the welds were formed over 300 ms more than they should have been."

That won't cut it in industrial applications.
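
A quick way to convince yourself of this is to measure the wake-up jitter of an ordinary sleep loop on a loaded, non-realtime Linux box. A rough sketch (the numbers will vary wildly with load):

    import time

    # Sleep 1 ms per iteration and record how late we actually wake up.
    # On a busy stock-kernel box the worst case is often many milliseconds -
    # exactly the kind of slop a welding controller can't tolerate.
    worst_us = 0.0
    for _ in range(10_000):
        start = time.monotonic()
        time.sleep(0.001)
        overshoot_us = (time.monotonic() - start - 0.001) * 1e6
        worst_us = max(worst_us, overshoot_us)

    print(f"worst wake-up latency over 1 ms sleeps: {worst_us:.0f} us")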


The downside of a full blown OS is a lifetime of security updates and maintenance.


... or not. The "or not" is the real downside, and it's what you're gonna get (see smart TVs, etc.)


Ignoring the fact doesn't change it.


GP meant that companies aren't going to supply updates and maintenance, and that users would suffer for it. He wasn't ignoring any facts.


If users are being quantifiably harmed by lax IoT security, then there will inevitably be lawsuits, which will adversely affect any company that chose to pretend that updates aren't necessary for a device with a full-blown OS.

It might be hard to prove that someone has hacked your smart TV and is causing you monetary damages, but all it takes is one smart toaster burning down a house to set a precedent for liability.

The question the manufacturer needs to determine is the risk/reward value for their device and what could happen if it's hacked.


I worked at Nest. This blog post is full of BS.

For example:

"For the record, I read several stories about the Nest Protect going into permanent alarm, and you know what my hunch is? The same thing I always assume: "Dumb Linux crap." The culprit was probably some shell script that opened /opt/smoked/detect and output 1 to it and then left the file locked so nothing else could touch it or forgot to delete a pid file or whatever. This is what I always assume when I read about Linux integrated devices screwing up, and on the occasions I've actually heard what the cause was I usually end up right."

As I mention in a separate comment, iFixit's teardown identifies the 100 MHz, 128 kB RAM MCU which is the Protect's brain. Such a device does not run Linux. Had the author done any research at all, they would have found this.

Instead, they waded into an unfamiliar topic with the knowledge of a novice and the arrogance of an expert - precisely the accusation they level against Nest.

Furthermore:

"The Smart device engineer does not begin by disassembling ten smoke alarms to see how they work. They do not begin by reading papers written by fire chiefs and scientists. They do not look at the statistics on fire-related deaths with and without smoke alarms of different eras (although the marketing department director does)"

All this due diligence and much more was done. The author's lazy speculation insults me and my former co-workers.

This blog post gets lots more important stuff wrong. Suffice it to say that today the Protect is very well-rated by consumers and safety professionals.

The IoT field as a whole is a mess, and deserves much of the author's criticism. Nest, specifically, does not.


Fair enough; he got some of the Nest specifics wrong. But some of his Nest points were accurate. I have a Nest thermostat--which is fortunately much less life-critical than a smoke alarm--and I finally disconnected it from my wifi router because I got tired of the constant firmware updates that added no value while changing the UI, causing well-publicized failures, and taking away features that I liked (e.g. the ability to manually set Away mode). I don't mess with my thermostat often, but when I do I don't want to have to spend five minutes rediscovering its UI. I will never buy another Nest product because of that nonsense, and that's entirely on you and your team.

On the plus side it still works as a dumb disconnected thermostat, so thank you for that (seriously).


Besides that I also disliked the philosophy.

There are real disadvantages of existing hardware.

Take a lock: if you have a cylinder because you need a physical key, you have an attack point. Add electronics on top and you only expand the attack surface. Get rid of the exposed cylinder! Don't make a plane that flaps like a bird.

Another argument: as an electrical engineer, most of my graduates were reinventing wheels with FPGAs. That something is hardware does not mean it cannot contain bugs, I assure you. :-)

Electronics itself improves as well, from lifetimes to power consumption. Everybody knows you should replace the batteries in smoke alarms, but people don't always care: they turn off the alarm and forget to put in a new one. There are many ways in which safety can be improved: test regularly that it's working, send notification messages to the user if it's not, don't rely on the battery only but also connect it to the grid, make sure that false alarms can be quickly silenced, run campaigns so that everyone has one, make installation easier, etc. I wouldn't be as sure as the writer that Nest has a net negative effect just because they put a microcontroller in the loop.


> All this due diligence and much more was done.

I don't have any connection with Nest Protect and don't own one, but the fact that the Protect does not use an ionisation sensor (they suck because they suck at detecting fires early, it has nothing to do with the harmless radioactivity) and used a dual-wavelength photoelectric sensor gives them high marks in my book.

All the hardcore smoke detection equipment -- both the aspirating kinds ("VESDA") and the open-space beam/imaging kinds -- use dual-wavelength sensing. I cannot speak to all the "smart"/"connected" aspects of the device but the smoke sensor seems far better than that of every other residential smoke detector, and it would be very nice if other residential smoke detectors ditched the ionisation sensor and went to a dual-wavelength design.


Most new smoke detectors are photoelectric rather than ionisation-based these days because China can pump basic models out for $2-3 shipped in quantity one. (In fact, the cheapest I've heard of them going for is £1 for a two-pack in a UK discount store recently.) Nest got a bunch of flak because apparently they screwed up their original single-wavelength implementation and had obnoxious problems with false alarms.


Thank you. I find this blog post to not be worth the credibility of the HN front page.


Can you expand a little more? I'm really curious what else he got wrong.


I don't think a poorly designed and engineered product, that happens to connect to the internet (and to be fair is marketed as "IoT") means that the entire concept of "connecting devices to the internet" is "bullshit."

I agree that there's a lot of marketing Bullshytt (to steal from Neal Stephenson) regarding IoT, but when we get to the root of what the words really mean (connecting stuff to the internet), I think it's the natural progression of technology. Why wouldn't you connect your field of oil pumps' valve readings to the internet (as well as having the manual needle-gauge backup) so you don't have to have a dude drive out every day to take readings? Why wouldn't you connect your thermostat to the internet so you can override the schedule when it turns out you need to stay late at work? Etc.

I don't get why everyone has to lump together the marketing bullshit that obviously is overstated with IoT, and then go "hence, IoT will fail and is crap."


The author mentions that he would like IoT for his washing machine, so I think you're misrepresenting his sentiment. Most likely, he's angry with the current engineering standards of IoT, more specifically Nest.


Cultures are extremely hard to change, and there is a culture in hardware manufacturing to expect a product lifecycle of production, distribution, sale, and returns if something physically breaks.

The Internet connectivity is not the real problem, but it is the clearest display of the problem in commodity hardware (and this does apply to a lot of computers, particularly phones - the more disposable the device, the more prevalent the behavior): manufacturers believe it is ethical to produce a product nobody can maintain or repair on their own. These companies also have limited to no long-term plan for support. The problem is that general-purpose computers are not suited to fire-and-forget operation when they interact at levels of abstraction as large as TCP networking.

I don't think this will ever be solved, because average consumers are incredibly ignorant of why it is a problem to begin with, and thus they do not care that they have no right to repair the software running their devices. At least until something goes wrong; then you get isolated outrage that fades away once the computer is deemed too "complicated" or "magic" to understand, and that ignorance drives people to simply assume it isn't a solvable problem, when it really is - we have communities maintaining modern software for chipsets manufactured in the 70s, but they can only emerge and grow when devices are either popular enough to incentivize the massive effort required to reverse engineer the software entirely, or have publicized documents on how to modify them.

As long as it isn't generally considered unethical to provide black-box software running the hardware you buy (when, likewise, the design documents of said hardware are also often proprietary trade secrets), nothing will change. People are offended if they are sold a car whose engine they cannot open and whose parts they cannot replace (right up until said car has a computer in it, at which point all expectations of a right to repair go out the window, because the computers are "magic" in ways belts and radiators are not), but they happily buy phones they can never disassemble.


You can network things without exposing them to the world.


The author makes a very valid point about people replacing embedded systems with Linux. I have a home automation system that has been running for over a decade now, and has several generations of technology in it. The order of failure, from highest to lowest, is: 1. the desktop I receive status updates on; 2. the Raspberry Pi I added to monitor the remote nodes and email me status updates; 3. the batteries; 4. the physical parts: motors, etc.; 5. the embedded systems that read the sensors and control the motors; 6. the analog circuitry.


do you happen to have a writeup of your HA system and how you put it together?


Not the parent, but I will probably be writing up mine soon. My choice of hardware is heavily inspired by the notion that devices function independently of the computer, and the computer functions independently of the Internet.

...My home automation software has bugs, and I've had plenty of problems with it. But I've yet to have a problem with my thermostat, lights, or smoke detectors. Despite all of them being linked to it.


OK, great. Will look for a reply with a link.


I first read about Internet of Things in contexts where it seemed to make a lot of sense: monitoring bridges and pipelines for integrity, "smart grid" coordination to optimize electrical grid services, networked sensors for better preventive maintenance and higher productivity from existing capital equipment (e.g. mining facility trucks, crushers, power plants...)

It's sad that the term has come to be dominated by ill-conceived consumer devices. I still have to remind myself that people are picturing "waffle iron that sends email" rather than "networked strain gages on bridges" when they savage IoT as a concept.


I've recently come to believe the same thing. I always shrugged off the weird consumer IoT stuff because of the marketing, the security issues, and the uselessness of a lot of the projected visions of the IoT future.

I lose confidence in my rejection of IoT when I think of smart energy grids and cooperating sensor-networks. It's a useful and welcome addition to our future.


An interesting read, a bit more inflammatory than I find informative, but the points are pretty solid. Realizing a new device, especially a safety-critical device, is something to be done with care (he rants about the Nest smoke detector). And it is "easy" to get irrationally exuberant about adding a computer to things that don't really need one. But I'd diverge a bit on claiming the practice is flawed rather than simply its application in this case.


That's what bothers me about IoT projects - little consideration of failure modes. Nest thermostats have failed in modes that resulted in houses having no heat, and the user couldn't override it locally.[1]

Combine this with active attacks, and it looks really bad. Over three weeks after the attack, Maersk Line is still struggling. Their big ports didn't achieve close to normal operation until about last Monday. Their big automated port in Rotterdam is running again, after being totally shut down for two weeks, but there's much more manual paperwork than usual, and billing is completely down, so they have zero revenue.[2] LA and Elizabeth NJ are finally back up. Some customer-facing functions were completely re-implemented with simpler web sites. Container tracking is still down. This is the world's largest shipping line, 24 days after the event.

[1] https://www.nytimes.com/2016/01/14/fashion/nest-thermostat-g... [2] http://www.maersk.com/~/media/20170629-operational-update/20...


The Nerves project (http://nerves-project.org) is an IoT OS which basically boots a device into Elixir, and benefits greatly from the battle-tested Erlang BEAM VM which is extremely fault-tolerant.

Do you have a link to the Maersk story? My dad used to work in shipping.

Also, my original Nests decided out of the blue one day that their power wires were both disconnected (which is impossible); is this bug related?


Also, my original Nests decided out of the blue one day that their power wires were both disconnected (which is impossible); is this bug related?

Probably. The problem resulted in Nest units not charging, but continuing to run until their battery was dead. Bringing them back up required a recharge via their USB port, followed by a firmware update to keep them from doing it again.


Both Erlang and Elixir, while safer than C/C++, are still far from ideal for any critical system, since they lack any kind of static typing.


Citation needed, especially since 95% of the time your code is just dealing with some combination of lists (arrays in OO langs), maps, strings, ints, and symbols/atoms, and is easily unit-tested


It's interesting that you ask for a citation, and then immediately proceed to essentially make stuff up ("95%", "easily unit-tested"). Maybe you could provide a citation for these?

There is a long-running trend in the industrial sector to use statically-typed languages in large and/or critical projects, so there must be reasons for dynamic languages to be quasi non-existent there, and a change would require more than "just easily unit-test it", no? (To be fair Erlang is in fact the one notable exception, although it's -- as far as I know -- only used in the telecom industry)


The problem is that the device is the result of a flawed process. If the process isn't fixed to be more conservative, it will continue to produce flawed devices.


Given that even the 'premium' end of IoT things we have so far range from insecure to dangerous, I think the onus is on those attempting to sell them to demonstrate that their product isn't literally worse than useless.

After that, they can make the case for being better than non-connected devices.

And, never say never, but there are some classes of devices of which I will never buy IoT versions, or if I have to, I will cripple the connectivity. Frankly, house locks are one of those - the dangers are far greater than any convenience for me. That reduces to one's threat models, of course, and I totally get that others may differ. (I live alone in an urban environment; having eight kids and house staff in the 'burbs would be entirely different.)


The key thing that this hits on is that it is bad engineering to replace a simple system with a vastly more complex system for the same task.

The correct approach, which the author points out, is to leave the simple system in place and then layer the more complex system with its additional features on top in a way that when it fails it does not break the base level system.

On the same topic, it might be good to keep POTS around, just in case we ever want to make an analog phone call again.


Agreed. A bit ranty - but still a good analysis of what constitutes bad design when "disrupting".


I read the article as:

"Move fast and break things" is great for a giant clickbait system, but in some industries it can kill people.


Move fast and break things is childish culture for childish projects. Think of all the people who live life like that.


"Don't move fast and break things when the things are people."


> I can't buy something that just has a $5 microprocessor with just enough intelligence to connect to the internet and send me an email or a push notification if the buzzer on the washer goes off.

1000000 x this.

The only thing I'd ever want a smart device to do is have a way to configure an MQTT (or similar) endpoint that it should connect to in order to send/receive simple messages.
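
Something in that spirit, as a rough sketch - assuming a small Linux board with the washer's buzzer line wired to a GPIO pin, the gpiozero and paho-mqtt libraries, and a broker/topic of your own choosing (the names and pin here are placeholders, not anyone's real service):

    import time

    import paho.mqtt.client as mqtt
    from gpiozero import DigitalInputDevice

    BROKER = "mqtt.example.lan"      # your broker, not the vendor's cloud
    TOPIC = "home/laundry/washer"    # hypothetical topic name

    buzzer = DigitalInputDevice(17)  # pin number is illustrative

    client = mqtt.Client()           # paho-mqtt 1.x style constructor
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_start()

    # Publish one message each time the buzzer line goes active.
    while True:
        buzzer.wait_for_active()
        client.publish(TOPIC, "washer finished", qos=1)
        buzzer.wait_for_inactive()
        time.sleep(1)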

It's just that no consumer actually wants this. They want to be sold a box that connects to someone else's MQTT broker so it can work with an app that connects to someone else's server.

It's just another front on the war on general purpose computing.


> a $5 microprocessor with just enough intelligence to connect to the internet and send me an email or a push notification if the buzzer on the washer goes off

You have essentially described the Protect. Its brain is a cheap little MCU with 128 kB of RAM. Had the author read an actual teardown [0], he would have realized this. Instead, he made false and confusing claims about Linux.

Certainly other companies put grossly-overspecced hardware in their "Smart" devices, but Nest's Protect isn't one of them.

https://www.ifixit.com/Teardown/Nest+Protect+Teardown/20057


Linux specifically is wrong, but according to iFixit, there are two general-purpose 32-bit processors and two radio SoCs that also contain 32-bit processors. Which to me still seems like a heck of a lot of CPUs and software for a smoke alarm.


"I need to preface this by saying that I have very little faith in the worldliness or general sense of Silicon Valley hardware engineers. I have seen a long history of extremely poor decisions from that part of industry, so I will assume they made all the worst decisions."

Old school engineers designed bridges, dams, nuclear plants, transformers, etc. They took a serious oath and got a license. Software "engineers" -- and let's be honest, most are closet software engineers, even the "hardware" ones -- are not cut from the same cloth in approaching problem solving, to the major detriment of society. I never trusted them.


It's about specs, budget and liability. Most people are engineering software tents and art pieces, because that is what the market wants and demands. They don't want to pay the price for non-tents. There is nothing wrong with software tents, you just have to realize what they are.


As one of those engineers, I often wonder. In my line of work, if you don't have a stamp, you're not an Engineer. While NCEES offers a PE in software, how many in software actually take the PE exam? Is Professional Engineer even a thing in Silicon Valley?


As a Silicon Valley software developer, I haven't seen many credentialed engineers. The vast majority of my colleagues in software have a degree or two in Computer Science, but no formal engineering certification. The few I've met I would describe as "incidentally" credentialed, meaning they went to a school (commonly Waterloo) where it was a requirement for graduation.


> to the major detriment of society. I never trusted them.

That's going a bit too far, though. Software has accomplished a lot for society, even if the methods are still not standardized, which absolutely is a problem, and prone to failure.


As much as I like slagging the Internet of Crap, smoke detectors are not a great example because smoke detectors are remarkably complicated.

They have to work. For years. They have to detect disgustingly small signals. Reliably. And, by the way, the chips have to survive in close proximity to a radiation source. And generate an alarm loud enough to wake the dead.

If I remember correctly, Qualcomm nee NXP nee Freescale nee Motorola used to keep an old 2" aluminum gate fab line around because every time somebody tried to make a new smoke detector chip something failed. So, they kept the fab open and simply printed money.


Nest is doubling down on the dumbness too. Their Nest Cams ship everything up to the cloud for processing!

What happened to computing at the edge? Are Apple (of all people) the only ones to still embrace this philosophy?


Processors are cheaper than ever, and there are multi-core multi-gigahertz processors with multiple gigabytes of RAM scattered throughout most homes but yes, as you point out, nobody wants to use them, they want to ship them up to "the cloud" or as we used to call it, "some servers on the Internet".

A charitable soul might say that by keeping local software simple, the end-user's devices don't need to be updated as often, while the software running on the cloud systems can be continually enhanced. A cynic like myself would say they prefer to use the cloud:

1) because it's easier to gather serious data about the users for later use/sale

2) because by tying users to an online service, they have something valuable ("2 million active users!") to offer when the startup inevitably gets bought out

3) because you can get away with less efficient code if you're running on a big timeshared server vs. on a small battery-powered device

4) because "the cloud!" is still an effective marketing gimmick


As an iOS mobile dev, I get jealous of my fellow Android devs' ability to roll out updates incrementally and to publish updates immediately. With a cloud model, you don't have the possibility of bricking your customer's device as a failure mode.

1) You can actually gather more data on the device vs just a remote service

2) SaaS moats are attractive from a business model perspective

3) TBH, pushing compute onto your customers' devices is cheaper for the cloud owner, but harder to manage. On a per-user basis most servers are actually VERY efficient. Most people's smartphones are relative supercomputers; only some things will be very battery-draining.

4) 'the cloud' does enable a lot of stuff that people don't want to manage manually themselves


Because you can offer processing-intensive features to customers who bought the device several years ago without requiring they buy new hardware! Look at the changes to what you can do with a Nest Cam, an Amazon Echo, and a Google Home compared to when the hardware was released. Particularly when you want to do various machine learn-y things on the data. Can't go retrofit 1M consumer devices with TPUs, but you can use them in the cloud.


Echo Dot is killing it with a $39 price point and doing all the intelligence in the cloud. And no hardware upgrades necessary.


By the way this use of "dumb" was as in "dumb terminal".


It's far easier and less risky to update, not to mention roll back, code on servers than it is on embedded devices. If you break the updater on a server, you can SSH in to fix it "manually" or get a tech to replace the machine. If you break the updater (or WiFi driver, or bootloader, or...) on embedded devices in the field, you're up Product Recall Creek without a paddle.

In practice, this means that many companies prefer the less-risky approach which still lets them iterate rapidly on feature development, and fix bugs in production without weeks of QA.


A friend of mine likes to say, "The S in IoT is for security".


> Put it on top. Connect the I/O lines on your little PC to the output of the smoke detector chip. Let it do the heavy lifting, the stuff that you aren't sure you can do safely and which it has proven it can do for decades.

I worked on a network communications part (the little PC) of a device that controls train signals (the smoke detector chip). This is exactly how it's architected -- the safety-critical components are isolated from the non-safety-critical ones. The busybox linux board does not go in between the sensors and the vital control logic, but rather "on top".
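
For the smoke detector case, the same pattern on the "little PC" side could be as dumb as watching the detector's alarm output and adding notifications on top. A sketch - the pin, endpoint, and the gpiozero/requests usage are illustrative, not how Nest actually does it:

    from signal import pause

    import requests
    from gpiozero import DigitalInputDevice

    # The dumb detector's alarm output, wired (with suitable level shifting)
    # to a GPIO pin. The detector keeps working whether or not this code runs.
    alarm = DigitalInputDevice(4)

    def notify():
        try:
            # Hypothetical push endpoint; swap in email, MQTT, whatever.
            requests.post("https://push.example.com/notify",
                          json={"event": "smoke alarm sounding"}, timeout=10)
        except requests.RequestException:
            # Network or cloud down? Nothing is lost - the detector is
            # still screaming on its own.
            pass

    alarm.when_activated = notify
    pause()  # idle forever; gpiozero calls notify() on the rising edge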


Has anyone tried one of the newer revisions of the Protect and had any experiences, positive or negative, with it? I was initially excited when I heard the product announcement. It seemed like everything I wanted in a smoke detector: remote monitoring, the ability to silence it, starting with voice before blaring and getting my adrenaline up, and, well, being a smoke detector. But Amazon reviews and the infamous video later, I quickly backed away. I've been awoken by a false-positive smoke detector before, and the experience is panic-inducing on top of outright annoying.


I have a few of them. I like them, and haven't encountered any issues with them. I also had a first gen and didn't have any problems with that. That's not to discount that other people did have problems, or the fact that it's critically important for a safety device to be reliable. But I think that the problems are vastly overblown, and calling it a "fiasco" is hyperbole.


I just bought a house that has two 2nd-gen Protects along with traditional-style detectors. So far so good! No false positives yet. I reset both detectors and had them go through their self-test when we moved in, and they both report being fully operational.

Neither of the Protects is near the kitchen or other regular smoke-producing spaces, though. However, they are near bathrooms. I hear steam can set these off pretty easily, but I have not had that experience.

I might buy more Protects to replace the other detectors, but I want to wait a bit and see if Nest comes out with any other products.


I would test them with smoke just to see how each behaved. Try with smoke from a candle just put out, a piece of paper just put out, and a plastic wire just put out.


I have a couple. The one by my kitchen has far fewer false alarm issues than my prior smoke detector there did (but does still warn me when I'm smoking up the place).

I don't really care that much about the IoT aspects. It just seems to work well and it self-tests.


This is my exact experience as well. I got so sick of the normal cheap smoke detector blaring every time the oven went above 350 that I purchased a Protect simply for the ability to silence it before it blares at me. The connected portion is ok too, but secondary.


I have four of them, purchased from Amazon. One Nest Protect was DOA (wouldn't connect to wifi) and Amazon had a replacement to me next day. They work flawlessly: they work when I test them, don't make noise when they're not supposed to, they do a quick sound check once in a while, the quick "flash green when you turn off the lights at night" to confirm things are ok is great, and the motion-detecting night light feature is handy in my hallway, too.

I'm a big skeptic of most things IoT, and I agree with most points in the posted article, but my honest experience is that my Nest Protects work great.


I have 2 first-gen and 2 second-gen Protects in my home. I've never had any false alarms. The newer model has 4.7 stars (out of 5) on Amazon with over 5000 reviews, so I think my positive experience is the norm.


One thing I believe is not being properly considered is the fact that this is really a multilevel problem: if you want to make a smart IoT device, you still want a reliable device. "On top of" is absolutely the best concept. Yes, the biggest computational bang for the buck comes from stuffing Linux or something similar in, but who doesn't react with a smile when they see a multi-year uptime reported? It's not rare, but it's not anywhere near a majority either. Start with the basic circuit; on top of that you add microcontroller logic to mediate between the higher-level ARM SoC (or whatnot) and the base circuit, and, just as importantly, you've also got a microcontroller-based watchdog that will kick Linux if it falls over. My experience in building this https://hackaday.io/project/21966-quamera is to never stop asking "Quis custodiet ipsos custodes?" (Anybody proffering analog solutions will be ignored; analog is hard and I am stupid.)
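
The Linux side of that microcontroller watchdog can stay pleasantly boring: the standard /dev/watchdog interface works for an external supervisor too, as long as the kernel driver forwards the "pet" to it. A minimal sketch of the petting loop (the health check is a placeholder):

    import time

    WATCHDOG = "/dev/watchdog"   # standard Linux watchdog device node

    def healthy() -> bool:
        # Placeholder: check that the processes/queues you care about are alive.
        return True

    # Once opened, if we stop writing to it, the (hardware or microcontroller)
    # watchdog times out and resets the board.
    with open(WATCHDOG, "wb", buffering=0) as wd:
        while True:
            if healthy():
                wd.write(b"\0")   # "pet" the dog
            time.sleep(5)         # must be shorter than the watchdog timeout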


I think it also requires a mindset that a lot of young engineers don't have IMHO. Very few of them will ever ship code that will run 24/7 without patching, and be stable out of the box.


This really worries me when they start writing critical applications. I did have to reboot an old school program at work today. Very dated piece of software, pretty sure it predates Windows 7.

It had been running, over a (fairly complicated) network connection for 303 days. I can't think of many modern pieces of software that do their job 24/7 for 303 days straight.

And I'm pretty sure the reason it was 303 days, was because I had to move that machine last year.


The whole patching culture (it being so easy) is just bogus. It is moral hazard through and through.


> First, begin with a smoke alarm. A tried and true design that you can buy for $10. Buy the exact parts that are already in use, and put them in your final product. The smoke sensor, the transistors, the through-hole resistors, the Fairchild IC, the 9V, the LED. Buy all of that and use it in your final product.

So... why didn't Nest do it this way?


Software people designing hardware.


But why not piggy-back atop the legacy hardware? Doing that would save time and let the software people solve higher-level (and presumably more interesting) problems.

Besides, the software people I know are way too lazy to go sticking their fingers in sockets.


Actual answer: Because you can't sell that to Google for $3.2 billion.


They were too busy disrupting the smoke sensor market


The mobile-friendly view recommended at the top of the article is kind of a usability disaster. I really like the author's approach of using plain HTML, but the mobile-friendly view has many usability pitfalls: no links, no ability to copy text, no ability to search on the page, and switching to another browser tab loses the position you were reading at. As the article itself says, sometimes it's better to keep things that just work and are used by the majority. The same goes for using some styles or a blog template to give the user a fully mobile-friendly read, rather than a compromise between two bad options (plain article or mobile-"friendly" view).


All of these are excellent arguments against most of the IoT practices, and not even including the security issues. All in all a great read[1].

These are all the reasons why I'm not interested in IoT/embedded. I just don't have the hardware and low-level firmware know-how, and doing it the other way, starting with an OS and going down, seems (and evidently is) horrible. And I say this as a full-time software developer.

[1]: Even if there might be inaccuracies; the author admitted the possibility and the point stands.


TL;DR: build an analog backup into a mission-critical device.

The argument about not reinventing the wheel might be misinformed, because their custom chip could actually be a well-known design with a tiny tweak.


I'm just wondering: as we shrink CPU gates more and more (gate sizes are approaching 10 nm soon), do chips become less reliable? People always want the next fastest and most power-efficient device, but surely we must be losing something with the shrink in gate size! If that's the case, are older CPUs such as the Intel 286 more reliable?


The vast majority of issues will arise from software defects, not hardware defects.

So rather than switching to a 30-year-old CPU, increasing the quality control and testing of the software would be the better approach.

Also, as some people in this thread have mentioned, a smart smoke detector should be designed so that if it fails it becomes a dumb smoke detector: the actual smoke detection and warning should still work.


The vast majority of issues will arise from software defects, not hardware defects.

Exactly. The last thing I want in my house is a smoke alarm with software written by idiots like me.


A lot of the research into redundancy for computer systems has already been done. Check out this article from NASA about when they began to implement multiple failsafe computers in their spacecraft.

https://history.nasa.gov/computers/Ch4-4.html


The best article I have read in a long time.


If you're writing an article, is it really that hard to do a little research? Most of this piece is just "I didn't even bother to look up a teardown photo, but here's how I assume they must have designed it".


He explicitly says that while he didn't tear down one himself, the folks at ifixit did and then he included a link to the photos.




