Don't design a computer that is a smoke detector; design a smoke detector that happens to also have a computer. A kernel panic or complete loss of computing should leave behind a traditional smoke detector in a fancy-looking case.
The thermostat itself had a battery backup, so it was trying to turn the heat on as my house cooled off. But the controller and ignitor had no backup. I couldn't even use a match to light the thing manually, because there was no way to open the gas valve.
I understand the safety implications of adding a manual valve and spark ignitor to a furnace, but letting my pipes freeze (and me too) also has safety implications. Besides, there's probably already a thermo-mechanical safety mechanism that would slam the valve shut if the furnace gets too hot.
Luckily, the power came back on in a few days, before my house got colder than light-jacket conditions. No frozen pipes.
Now this can work; indeed, as an engineer in SaaS, it's my job to make it work. But see, to make it work you have to hire engineers to nurse all those systems along.
Then you can aim for "comfortable" with your "smart" one and "safe" with your real one.
But then I don't really know what it's like to own pets, and I don't know how housing in New Orleans works either.
Which really makes me think what happens in a Tesla when the computer shits the bed...
I certainly fell for a few false ones in the past. Sometimes the only deeper meaning is that someone's trying to manipulate you.
But that's just a problem with analogies, they trigger different ideas in different people's heads.
Mitch. One of the greats. Uncopyable.
When it comes to capabilities, the people arguing for entire OSes generally win. Go with Linux, get an HTTPS stack, get XML parsing, get JSON parsing, don't worry so much about RAM. And heck, the price difference between a microprocessor and a microcontroller is minimal nowadays, a couple bucks in large quantity (especially if you have to outfit the microcontroller with more chips to add functionality).
The microcontroller people are right in one way, though. Their products are simpler. Battery life will be better (it is easier to control power usage when you start with nothing and build up than when trying to pare an entire OS down), and the code will be purpose-built, with less complexity to it.
The ESP8266, ESP32, and Arduino are one view of things, Raspberry Pi and "Things with Android Running On Them" are another.
Of course against all of this, the analog EEs laugh at us and continue to make things that just /work/.
The board ended up with 1500 components; it took one extra year to get just half of the features kinda working, and the rest of the features were discarded. And the power budget was blown, which probably led to interesting negotiations with the other involved parties, but I had already quit by that time.
Obviously, 'embedded' meant very different things for me and for him.
Some CEOs just try to postpone the day of defeat to infinity by allowing customers to glue stuff to the device. It's like the captain of a sinking ship deciding to freeze all the water in it, making it float by virtue of being an iceberg with scrap on the side.
This Ponzi scheme kinda works for a hellish year, and then it usually implodes under the weight of a crust of hacks.
At the end of the day, it has nothing to do with what's good design or with which generation's philosophy we're going on. It has a bit to do with bragging rights, but also any company selling a trivial doo-dad has a lot to gain on the vague-importance scale by including a whole OS with their doo-dad. They don't quite know what; they may not quite be thinking "now at least we're as important as a Russian mafioso running a giant botnet," but inherently the management has some kind of dream that's more likely to be satisfied by a million Ubuntu instances than a million light bulbs.
Just for example, they haven't got the gall or tech to push ads from your light bulbs YET, but I'd guess that's one of the things floating around some board tables somewhere ("but how to offer value with hmm..").
Those urges won't go away. They care more about themselves than the public or their customers? Who'd have thunk.
Welcome to the "everything's adversarial" era...
Not where I come from. I have to fight to get 32-bit microcontrollers accepted. And there is zero reason to use 8-bit controllers over 32-bit controllers on new projects nowadays. Zero. None.
> And heck, the price difference between a microprocessor and a microcontroller is minimal nowadays, a couple bucks in large quantity (especially if you have to outfit the microcontroller with more chips to add functionality).
Not even close. An ARM Cortex M0 is 70 cents. The cheapest ARM Cortex A7 is $10.
And the point of the microcontroller is that functionality is part of the part and doesn't require external chips (or at least a very minimal number).
Uh, yeah, no. LOL. We just had an entire article dedicated to the weird and wacky ways that things can fail in the analog world (corrosion, light, standing waves, etc.).
There is a reason everybody moves as much of analog and RF circuitry into digital as they can as soon as they can.
Analog and RF are notoriously unreliable and disgustingly difficult to debug. Once things hit digital land, they become repeatable and debuggable.
Hunting down a stack smasher is nothing compared to hunting down some screwball oscillation in an amplifier.
And once you have a product with a serial or USB interface, some manager is sure to ask if you can add a LAN connection.
There's always a certain cheesiness to your tools when doing what I think of as real embedded work. Probably the only nice part of it is the basic GNU compiler toolchain (if present). The rest -- including any high-level development tools -- is either closed-source, vendor-specific stuff of varying quality or low-quality, half-dead open-source projects.
You can certainly write software that emulates a microcontroller on the Pi, but the problem is that it can't guarantee timing, since it's running on Linux fundamentally, and other processes can pre-empt it.
For certain things it doesn't matter as much, such as for a door lock or something like that. Then you gotta worry about power consumption, since running cables to your lock is problematic, and it has to run on batteries for months at a time.
That's a reason why realtime controllers can't run on Linux - if you were controlling an industrial robot that controls welding temperatures and times, being even 300 ms off can drastically affect the finished product's fit and finish and the metal's heat treatment.
"This whole batch of parts worth hundreds of thousands of dollars went bad!"
"Well, the network is down, and the robot controller was busy trying to reconnect to the network, so that's why the welds were formed over 300 ms more than they should have been."
That doesn't cut it in industrial applications.
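You can see this preemption jitter from userspace on any stock Linux box: ask for a fixed short sleep in a loop and measure how late the wakeups actually arrive. This is just a sketch (the function name is mine), but it illustrates why a general-purpose scheduler can't make hard timing promises:

```python
import time

def measure_sleep_jitter(period_s=0.001, iterations=200):
    """Request a fixed short sleep repeatedly and record how far past
    the requested duration each wakeup actually lands."""
    overshoots = []
    for _ in range(iterations):
        start = time.monotonic()
        time.sleep(period_s)
        # overshoot = how late the scheduler woke us up
        overshoots.append((time.monotonic() - start) - period_s)
    return max(overshoots), sum(overshoots) / len(overshoots)

worst, mean = measure_sleep_jitter()
print(f"worst overshoot: {worst * 1e3:.3f} ms, mean: {mean * 1e3:.3f} ms")
```

On an idle machine the overshoot is usually small; under load it can balloon to tens or hundreds of milliseconds, which is exactly the failure mode described in the welding story above.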
It might be hard to prove that someone has hacked your smart tv and is causing you monetary damages but all it takes is for one smart toaster to burn down a house to set a precedent for liability.
The question the manufacturer needs to answer is the risk/reward value for their device and what could happen if it's hacked.
"For the record, I read several stories about the Nest Protect going into permanent alarm, and you know what my hunch is? The same thing I always assume: "Dumb Linux crap." The culprit was probably some shell script that opened /opt/smoked/detect and output 1 to it and then left the file locked so nothing else could touch it or forgot to delete a pid file or whatever. This is what I always assume when I read about Linux integrated devices screwing up, and on the occasions I've actually heard what the cause was I usually end up right."
As I mention in a separate comment, iFixit's teardown identifies the 100 MHz, 128 kB RAM MCU that is the Protect's brain. Such a device does not run Linux. Had the author done any research at all, they would have found this.
Instead, they waded into an unfamiliar topic with the knowledge of a novice and the arrogance of an expert - precisely the accusation they level against Nest.
"The Smart device engineer does not begin by disassembling ten smoke alarms to see how they work. They do not begin by reading papers written by fire chiefs and scientists. They do not look at the statistics on fire-related deaths with and without smoke alarms of different eras (although the marketing department director does)"
All this due diligence and much more was done. The author's lazy speculation insults me and my former co-workers.
This blog post gets lots more important stuff wrong. Suffice it to say that today the Protect is very well-rated by consumers and safety professionals.
The IoT field as a whole is a mess, and deserves much of the author's criticism. Nest, specifically, does not.
On the plus side it still works as a dumb disconnected thermostat, so thank you for that (seriously).
There are real disadvantages of existing hardware.
Take a lock: if you have a cylinder because you need a physical key, you have an attack point. Add electronics on top and you only expand the attack surface. Get rid of the exposed cylinder! Don't make a plane that flaps like a bird.
Another argument: as an electrical engineer, most of my graduates were reinventing wheels with FPGAs. That something is hardware does not mean it cannot contain bugs, I assure you. :-)
Electronics itself improves as well, from lifetimes to power consumption. Everybody knows you should replace the batteries in smoke alarms, but people don't always care: they turn off the alarm and forget to put in a new one. There are many ways in which safety can be improved: test regularly that it is working, send notification messages to the user if it is not, don't rely on battery only but also connect to the grid, make sure that false alarms can be quickly silenced, run campaigns so that everyone has them, make installation easier, etc. I wouldn't be as sure as the writer that Nest has a net negative effect because they use a microcontroller in the loop.
I don't have any connection with Nest Protect and don't own one, but the fact that the Protect does not use an ionisation sensor (they suck because they suck at detecting fires early, it has nothing to do with the harmless radioactivity) and used a dual-wavelength photoelectric sensor gives them high marks in my book.
All the hardcore smoke detection equipment -- both the aspirating kinds ("VESDA") and the open-space beam/imaging kinds -- use dual-wavelength sensing. I cannot speak to all the "smart"/"connected" aspects of the device but the smoke sensor seems far better than that of every other residential smoke detector, and it would be very nice if other residential smoke detectors ditched the ionisation sensor and went to a dual-wavelength design.
I agree that there's a lot of marketing Bullshytt (to steal from Neal Stephenson) regarding IoT, but when we get to the root of what the words really mean (connecting stuff to the internet), I think it's the natural progression of technology. Why wouldn't you connect your field of oil pumps' valve readings to the internet (while keeping the manual needle-gauge backup) so you don't have to have a dude drive out every day to take readings? Why wouldn't you connect your thermostat to the internet so you can override the schedule when it turns out you need to stay late at work? Etc.
I don't get why everyone has to lump together the marketing bullshit that obviously is overstated with IoT, and then go "hence, IoT will fail and is crap."
The Internet connectivity is not the real problem, but it is the clearest symptom of the problem with commodity hardware (and this does apply to a lot of computers, particularly phones; the more disposable the device, the more prevalent the behavior): manufacturers believe it is ethical to produce a product nobody can maintain or repair on their own. These companies also have little to no long-term plan for support. The problem is that general-purpose computers are not suited to fire-and-forget operation when interacting at levels of abstraction as large as TCP networking.
I don't think this will ever be solved, because average consumers are incredibly ignorant of why it is a problem to begin with, and thus they do not care that they lack the right to repair the software running their devices. At least until something goes wrong; then you get isolated outrage that fades away when the computer is too "complicated" or "magic" to understand, and that ignorance drives people to simply assume it isn't a solvable problem, when it really is. We have communities maintaining support for chipsets manufactured in the 70s with modern software, but they can only emerge and grow when devices are either popular enough to incentivize the massive effort required to reverse engineer the software entirely, or have published documents on how to modify them.
As long as it isn't generally considered unethical to provide black box software running the hardware you buy (when likewise the design documents of said hardware are also often proprietary trade secrets) then nothing will change. People are offended if they are sold a car whose engine they cannot open and whose parts they cannot replace (up right until said car has a computer in it, at which point all expectations of right to repair go out the window because the computers are "magic" in ways belts and radiators are not) but they happily buy phones they can never disassemble.
...My home automation software has bugs, and I've had plenty of problems with it. But I've yet to have a problem with my thermostat, lights, or smoke detectors. Despite all of them being linked to it.
It's sad that the term has come to be dominated by ill-conceived consumer devices. I still have to remind myself that people are picturing "waffle iron that sends email" rather than "networked strain gages on bridges" when they savage IoT as a concept.
I lose confidence in my rejection of IoT when I think of smart energy grids and cooperating sensor-networks. It's a useful and welcome addition to our future.
Combine this with active attacks, and it looks really bad. Over three weeks after the attack, Maersk Lines is still struggling. Their big ports didn't achieve close to normal operation until about last Monday. Their big automated port in Rotterdam is running again, after being totally shut down for two weeks, but there's much more manual paperwork than usual, and billing is completely down, so they have zero revenue. LA and Elizabeth NJ are finally back up. Some customer-facing functions were completely re-implemented with simpler web sites. Container tracking is still down. This is the world's largest shipping line, 24 days after the event.
Do you have a link to the Maersk story? My dad used to work in shipping.
Also, my original Nests decided out of the blue one day that their power wires were both disconnected (which is impossible); is this bug related?
Probably. The problem resulted in Nest units not charging, but continuing to run until their battery was dead. Bringing them back up required a recharge via their USB port, followed by a firmware update to keep them from doing it again.
There is a long-running trend in the industrial sector to use statically-typed languages in large and/or critical projects, so there must be reasons for dynamic languages to be quasi non-existent there, and a change would require more than "just easily unit-test it", no? (To be fair Erlang is in fact the one notable exception, although it's -- as far as I know -- only used in the telecom industry)
After that, they can make the case of being better non-connected devices.
And, never say never, but there are some classes of devices of which I will never buy IoT versions, or if I have to, I will cripple the connectivity. Frankly, house locks are one of those - the dangers are far greater than any convenience for me. That reduces to one's threat models, of course, and I totally get that others may differ. (I live alone in an urban environment; having eight kids and house staff in the 'burbs would be entirely different.)
The correct approach, which the author points out, is to leave the simple system in place and then layer the more complex system with its additional features on top in a way that when it fails it does not break the base level system.
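A toy sketch of that layering (all names mine, and the "smart" layer here is rigged to always fail, to simulate the connected side dying):

```python
def base_alarm(smoke_level, threshold=0.1):
    """The dumb, always-works path: raw sensor reading -> buzzer decision."""
    return smoke_level >= threshold

def smart_layer(smoke_level):
    """Optional extras (push notifications, logging, cloud sync).
    Always raises here, simulating a dead network or crashed service."""
    raise RuntimeError("cloud unreachable")

def check(smoke_level):
    # The base decision is made first, unconditionally.
    alarm = base_alarm(smoke_level)
    try:
        smart_layer(smoke_level)
    except Exception:
        pass  # a smart-layer failure must never block the alarm
    return alarm
```

The point is structural: the smart layer is invoked after, and isolated from, the base decision, so `check(0.5)` still alarms even though the connected layer threw.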
On the same topic, it might be good to keep POTS around, just in case we ever want to make an analog phone call again.
"Move fast and break things" is great for a giant clickbait system, but in some industries it can kill people.
1000000 x this.
The only thing I'd ever want a smart device to do is have a way to configure a mqtt or such endpoint that it should connect to to send/receive simple messages.
Just no consumer actually wants this. They want to be sold a box that connects to someone elses mqtt broker so it can work with an app that connects to someone elses server.
It's just another front on the war on general purpose computing.
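What "configure your own endpoint" might look like in practice, as a minimal sketch (the config fields and topic scheme are hypothetical, not any vendor's actual API):

```python
import json
from dataclasses import dataclass

@dataclass
class BrokerConfig:
    """User-supplied endpoint: the device publishes to *your* broker,
    not the manufacturer's."""
    host: str
    port: int = 1883
    topic_prefix: str = "home/devices"

def build_message(cfg: BrokerConfig, device_id: str, event: str, value):
    """Return the (topic, JSON payload) pair the device would publish."""
    topic = f"{cfg.topic_prefix}/{device_id}/{event}"
    payload = json.dumps({"event": event, "value": value})
    return topic, payload

cfg = BrokerConfig(host="broker.example.lan")
topic, payload = build_message(cfg, "smoke-01", "alarm", True)
print(topic, payload)
```

With something like this, the device only needs a host, port, and topic prefix from the owner; everything downstream (automation, apps, logging) is the owner's choice rather than a tie-in to someone else's server.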
You have essentially described the Protect. Its brain is a cheap little MCU with 128 kB of RAM. The author, had he read an actual teardown, would have realized this. Instead, he made false and confusing claims about Linux.
Certainly other companies put grossly-overspecced hardware in their "Smart" devices, but Nest's Protect isn't one of them.
Old school engineers designed bridges, dams, nuclear plants, transformers, etc. They took a serious oath and got a license. Software "engineers" -- and let's be honest, most are closet software engineers, even the "hardware" ones -- are not cut from the same cloth in approaching problem solving, to the major detriment of society. I never trusted them.
That's going a bit too far, though. Software has accomplished a lot for society, even if its methods are still not standardized, which absolutely is a problem and makes it prone to failure.
They have to work. For years. They have to detect disgustingly small signals. Reliably. And, by the way, the chips have to survive in close proximity to a radiation source. And generate an alarm loud enough to wake the dead.
If I remember correctly, Qualcomm nee NXP nee Freescale nee Motorola used to keep an old 2" aluminum gate fab line around because every time somebody tried to make a new smoke detector chip something failed. So, they kept the fab open and simply printed money.
What happened to computing at the edge? Are Apple (of all people) the only ones to still embrace this philosophy?
A charitable soul might say that by keeping local software simple, the end-user's devices don't need to be updated as often, while the software running on the cloud systems can be continually enhanced. A cynic like myself would say they prefer to use the cloud:
1) because it's easier to gather serious data about the users for later use/sale
2) because by tying users to an online service, they have something valuable ("2 million active users!") to offer when the startup inevitably gets bought out
3) because you can get away with less efficient code if you're running on a big timeshared server vs. on a small battery-powered device
4) because "the cloud!" is still an effective marketing gimmick
1) You can actually gather more data on the device vs just a remote service
2) SaaS moats are attractive from a business model perspective
3) TBH pushing compute onto your customers' devices is cheaper for the cloud owner, but harder to manage. On a per-user basis most servers are actually VERY efficient. Most people's smartphones are relative supercomputers; only some things will be very battery-draining.
4) 'the cloud' does enable a lot of stuff that people don't want to manage manually themselves
In practice, this means that many companies prefer the less-risky approach which still lets them iterate rapidly on feature development, and fix bugs in production without weeks of QA.
I worked on a network communications part (the little PC) of a device that controls train signals (the smoke detector chip). This is exactly how it's architected -- the safety-critical components are isolated from the non-safety-critical ones. The busybox Linux board does not go in between the sensors and the vital control logic, but rather "on top".
Neither of the Protects is near the kitchen or other regular smoke-producing spaces, though. However, they are near bathrooms. I hear steam can set these off pretty easily, but I have not had that experience.
I might buy more Protects to replace the other detectors, but I want to wait a bit and see if Nest comes out with any other products.
I don't really care that much about the IoT aspects. It just seems to work well and it self-tests.
I'm a big skeptic of most things IoT, and I agree with most points in the posted article, but my honest experience is that my Nest Protects work great.
It had been running, over a (fairly complicated) network connection, for 303 days. I can't think of many modern pieces of software that do their job 24/7 for 303 days straight.
And I'm pretty sure the reason it was 303 days, was because I had to move that machine last year.
So... why didn't Nest do it this way?
Besides, software people I know are way too lazy to go sticking their fingers in sockets.
These are all the reasons for why I'm not interested in IoT/embedded. I just don't have the hardware and low-level firmware knowhow and doing it the other way, starting with an OS and going down, seems (and evidently is) horrible. And I say this as a full-time software developer.
Even if there might be inaccuracies, the author admitted the possibility and the point stands.
So rather than switching to a 30-year-old CPU, increasing the quality control and testing for the software would be the better approach.
Also, as some people in this thread have mentioned, a smart smoke detector should be designed in a way where, if it fails, it becomes a dumb smoke detector: the actual smoke detection and warning should still work.
Exactly. The last thing I want in my house is a smoke alarm with software written by idiots like me.
The argument about not reinventing the wheel might be misinformed, because their custom chip could actually be a well-known design with a tiny tweak.