By using a low-leakage FET on the SDA/SCL lines and sacrificing a bit of speed, you can have up to 127 I2C sensors that only use power when they measure.
A good low-quiescent-current buck-boost regulator can squeeze every last bit of power out of the battery for your chip, even when the battery is basically at 0% capacity (2.7V for Li-ion, 0.8V for alkaline), and uses very little power on its own.
Dropping the system clock frequency can also improve battery life.
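The arithmetic behind duty-cycling claims like these is worth sketching out. A minimal back-of-the-envelope model; all the numbers are illustrative assumptions, not measurements from any particular part:

```python
def average_current_ua(i_active_ua, t_active_s, i_sleep_ua, period_s):
    """Time-weighted average current for a wake/measure/sleep duty cycle."""
    t_sleep_s = period_s - t_active_s
    return (i_active_ua * t_active_s + i_sleep_ua * t_sleep_s) / period_s

def battery_life_years(capacity_mah, avg_current_ua):
    """Ideal battery life, ignoring self-discharge and capacity derating."""
    hours = capacity_mah * 1000.0 / avg_current_ua
    return hours / (24 * 365)

# Assumed: 5 mA active for 10 ms every 2 s, 2 uA sleep, ~2000 mAh of NiMH AAs
avg = average_current_ua(5000, 0.010, 2, 2.0)
print(f"average draw: {avg:.1f} uA")
print(f"ideal life: {battery_life_years(2000, avg):.1f} years")
```

Even a brief 5 mA wake every couple of seconds dominates a 2 µA sleep floor in this model, which is why stretching the wake interval (and only powering sensors when they measure) buys so much.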
It's not exactly self-healing in the same way 802.11s is; IIRC it's your basic distance-vector routing protocol, with all the usual problems that entails.
This is the Arduino business model: easy introductory hardware for beginners. An actual hardware company wouldn't buy these, for the same reason a router company wouldn't buy Cisco hardware and tape over the logos: there wouldn't be any margin.
I would characterize GGP's argument as more that this isn't unprecedented, and again, sure. Lithium cell-powered electronics have been around for decades, adding wireless isn't new. The point is that thingsquare is supposed to be easy. You buy their kit and press a button. You don't need to buy individual parts, or do assembly, or write glue code, or know what I2C is.
This is a common disease of the engineer's mind. "Why does anybody buy a car new? Just buy one used, and replace parts yourself when they break! Heck, why buy parts, when they're just metal and electronics? It should be easy enough to do your own aluminum casting, and wind your own electric motors." "You fool, you buy aluminum? Bauxite is everywhere! Just smelt it yourself, it's not hard." Etc etc.
My only argument for doing so is that everything is now extendable, whereas a "thingsquare" locks you into some particular engineer's design decisions that made sense then, but won't make sense for every use case now. Arduino sits right in the sweet spot of "easy to get off the ground" and "extendable enough to use for all kinds of things", and that's largely why it is successful as a platform. This product sits so far off into "easy to get off the ground" territory that it misses too many use cases.
A board like this could be spun up for $25, shipping, parts, software, all in. I would be amazed (and very interested) if you could get an aluminum smelter up and running for $20.
All of these except the last are surprisingly easy these days. Digikey and Mouser stock virtually everything, custom PCBs are easy to order, and surface mount soldering can be done with a cheap hot air gun or toaster oven.
Arrow will ship parts overnight with a minimum order size of $20.
Aliexpress/eBay have made discrete semiconductors and passive parts nearly free.
Writing a bit of software is incredibly easy (versus what it used to be) with a good IDE and a solid toolchain.
It's truly a great age to be involved in hardware.
But yeah, I've made MCU boards in my toaster oven, it's fun times for little non-RF projects.
The several gigabytes of stuff TI makes you install just to do hello world on their simple link products is kind of nuts though. There's nothing simple about it tbh.
This area is in much need of improvement. Keil MDK-ARM, IAR, etc. aren't cheap.
With that said, of course the entire thing is still a lot of work and if the parts are super tiny... well...
The processor is typically down to a couple of µA, waking every couple of seconds to see what needs doing.
See the log at the end of this as I was originally driving the power down:
Our target is in fact 2 years life, including driving the valve, off a pair of AA NiMH hybrids.
Slightly longer term our aim is to get off batteries entirely:
which is why I spent Tuesday at this meeting:
While that's fundamentally true, buck/boost converters also have a level of inefficiency. Old crappy ones are 80% efficient... good ones are 90% efficient, the best may be 95% or even 98% efficient.
"Squeezing out" the last few volts of a Lithium Ion or NiMH doesn't make much sense. There's probably less than 2% of power below 3V on a Lithium Ion, and less than 2% of power below 1.0V for a NiMH.
So even the 98%-efficient converter ends up lowering your battery life, because there's so little energy between 0.7V and 1.0V or so.
If you can run a circuit without a converter, that would generally be superior. For example, an Arduino can run down to 1.8V. So a 2xAA battery or a 1x Lithium Ion can be run without a boost/buck converter, and really capture that last 2% or so of power (as opposed to losing 2% in the conversion process).
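That tradeoff (forfeiting the last ~2% of capacity vs. paying a conversion tax on everything) can be put in numbers. A minimal sketch, with the 2% and 90% figures taken as assumptions from the discussion above:

```python
def usable_energy_direct(total_wh, frac_below_cutoff):
    """Direct battery connection: you forfeit the energy that sits below
    the chip's minimum supply voltage, but lose nothing to conversion."""
    return total_wh * (1.0 - frac_below_cutoff)

def usable_energy_converted(total_wh, efficiency):
    """Converter: reaches the whole discharge curve, but taxes every joule."""
    return total_wh * efficiency

direct = usable_energy_direct(10.0, 0.02)        # assume ~2% of energy below cutoff
converted = usable_energy_converted(10.0, 0.90)  # a typical, not best-case, converter
print(round(direct, 2), round(converted, 2))     # direct connection wins here
```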
No, this will waste a lot of power in the high-supply-voltage region of the battery's discharge curve. There are three regimes to consider, for (say) the case of running 3.3V nominal parts off a 1S 3.7V Li-Ion cell. Other nominal supply and battery voltages are very similar, they just shift the locations of the regimes around a bit or introduce boost converters.
Regime 1: Battery voltage above system voltage. Here you want a high-efficiency buck converter. This minimizes the current you draw from the battery: instead of drawing (say) 10mA at 3.7V, you draw 10mA × (3.3/3.7) / 0.98 ≈ 9.1mA at 3.7V. A buck converter is generally superior to a buck-boost as the peak efficiency is higher, and for this scenario boost functionality will never be necessary.
Regime 2: Battery voltage slightly above system voltage. In this regime the appropriate solution is a low-quiescent-current low-dropout linear regulator (aka LDO). This regime exists because of the finite efficiency of a switching regulator. With an imaginary 100% efficient switcher, you never need an LDO. The lower the draw on the system rail, the wider this regime gets. For micropower circuits (average draw in the single-digit microamps) one might always be here. For higher power draws (tens of milliamps), this regime might be too small to justify the added complexity of including an LDO and you just skip right over it.
Regime 3: Battery voltage at or below nominal system rail voltage but above minimum. The best thing to do here is absolutely nothing: just run the system directly off the battery. This is possible with a regulator with a bypass FET. Once the regulator goes into dropout, it just turns on the bypass transistor and connects things directly (with a small R_ds(on) drop of course). A few regulators have these built in. Other regulators are efficient enough in dropout that they don't actually benefit much from the bypass transistor. This also assumes you're near the end of the battery discharge curve. If you aren't, then you're potentially heading into boost or buck-boost territory, which I'm not going to cover here.
Deciding which of these three regimes are important enough to warrant the design time for optimizing is a key part of low-power circuit design.
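The three-regime decision logic above can be sketched in a few lines. The threshold values here are made up for illustration; a real design would pull the dropout and minimum-voltage numbers from the regulator and MCU datasheets:

```python
def pick_supply_strategy(v_bat, v_sys=3.3, ldo_dropout=0.15, v_sys_min=3.0):
    """Map battery voltage to the three regimes described above
    (assumed thresholds: 3.3 V rail, 150 mV dropout, 3.0 V system minimum)."""
    if v_bat > v_sys + ldo_dropout:
        return "buck"    # regime 1: enough headroom for the switcher
    elif v_bat > v_sys:
        return "ldo"     # regime 2: slightly above the rail, LDO territory
    elif v_bat >= v_sys_min:
        return "bypass"  # regime 3: do nothing, connect battery directly
    else:
        return "dead"    # below the system minimum: shut down cleanly

for v in (4.2, 3.4, 3.1, 2.8):
    print(f"{v} V -> {pick_supply_strategy(v)}")
```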
As you've noted, a 3.7V Lithium Ion battery (which is 4.2V when fully charged) would waste a lot of power if it fed a 3.3V circuit directly, since the excess voltage is simply burned off if dropped linearly.
But for a 3.3V part, you really want to be running a LiFePO4 cell, with a nominal voltage of 3.2V. The chemistry itself gives you the maximum efficiency, as opposed to building circuitry to convert a 3.7V cell down to 3.3V.
Compensating for the flaws of a slightly mismatched cell or battery is one approach. But I have my bets that selecting the proper battery to begin with is the optimal approach.
In any case, I think I can agree with you that for a standard 3.7V (4.2V when fully charged) battery on a 3.3V circuit, it probably would be most efficient to use a pure buck converter. Early in the charge, where a linear drop would be only 3.3/4.2 ≈ 78% efficient, converters gain you energy; near the end of the charge, at 3.3/3.6 ≈ 92%, the best converters are still a net benefit.
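The break-even point falls out of the fact that a linear drop can never beat V_out/V_in efficiency. A quick check with the voltages from this thread; the flat 92% buck efficiency is an assumed round number:

```python
def linear_efficiency(v_out, v_in):
    """Best-case efficiency of an LDO / linear drop: just the voltage ratio."""
    return v_out / v_in

V_OUT = 3.3
BUCK_EFF = 0.92  # assumed flat buck efficiency for comparison

for v_in in (4.2, 3.9, 3.6):
    ldo = linear_efficiency(V_OUT, v_in)
    winner = "buck" if BUCK_EFF > ldo else "LDO"
    print(f"Vin={v_in} V: linear {ldo:.0%} vs buck {BUCK_EFF:.0%} -> {winner}")
```

At 3.6 V in it's nearly a tie, matching the intuition that the buck's advantage shrinks as the battery sags toward the rail.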
My beloved dog will be flying in the cargo hold of an A380. I will be seated above in the passenger section. 15 hour flight to Australia.
I have to drop him off in his kennel four hours before departure at the freight office at the airport. They will take him to the plane.
After the flight, he will be picked up and driven about 10 miles away to a quarantine facility.
I would like to build something to put inside the kennel which will allow me to track its location at the freight office and then confirm he's been brought on board. Then at landing, I want to track the kennel from plane to quarantine. Note this is in a different country than departure.
I'm thinking of getting a used Nexus phone and a Project Fi SIM (so I can have data overseas too). However this will be rather clunky and I'd rather not put a bunch of batteries in his kennel for a flight.
Source? I have never seen anyone get an ESP8266 down to 5 uA sleep current on its own. You'd need to get a really low-leakage external FET for power sequencing.
I had an idea that I didn't follow through on, but it worked well (the code sucks, I know).
In this case, they had to jump through quite a few hoops to get 1-2 years of battery life. Is sampling every 30 minutes really practical in all applications where someone would want to use a TI SensorTag? It's probably easier to just modify the SensorTag to use a larger battery and get the sampling rate you need. Depending on the situation, figuring out how to use a C or D battery might be easier than making all those compromises.
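For a feel of how much a bigger battery buys, here's a rough sizing sketch. The per-sample charge and sleep current are assumed round numbers, not SensorTag measurements:

```python
def min_sample_interval_s(capacity_mah, life_years, sleep_ua, sample_charge_uc):
    """Shortest sampling period that still meets the battery-life target."""
    budget_ua = capacity_mah * 1000.0 / (life_years * 365 * 24)  # allowed avg draw
    headroom_ua = budget_ua - sleep_ua
    if headroom_ua <= 0:
        return float("inf")  # sleep current alone blows the budget
    return sample_charge_uc / headroom_ua  # uC / uA = seconds

# Assumed: 2 uA sleep, ~1 mC (1000 uC) per measure+transmit, 2-year target
for name, mah in (("CR2032", 225), ("AA lithium", 3000)):
    print(f"{name}: sample every {min_sample_interval_s(mah, 2, 2, 1000):.0f} s at best")
```

Roughly 92 s vs 6 s under these assumptions: an order of magnitude more sampling headroom just from the cell.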
My garage door openers have plenty of room for AAA batteries, or could use AA with minor modifications; but they use coin cells. I just don't get it.
Coin cells aren't the best for this kind of system. When the current is a few tens of mA (CPU + transceiver), even for a short while, the coin cell's internal resistance drops the voltage over the terminals. As the battery gets older, its internal resistance goes up, so the risk of a brown-out reset of the SoC increases, even if there is still capacity left to run a load with lower current requirements.
You could to some extent mitigate this by using capacitors to reduce the peak current drawn from the battery, but caps have leakage current too, so the actual lifetime could be less still. Damned if you do...
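Both the droop and the capacitor sizing are one-liners. The internal-resistance figures below are ballpark assumptions; they vary a lot by brand, age, and temperature:

```python
def loaded_voltage(v_oc, i_load_ma, r_internal_ohm):
    """Terminal voltage under load: V = Voc - I*R."""
    return v_oc - (i_load_ma / 1000.0) * r_internal_ohm

print(loaded_voltage(3.0, 30, 10))  # fresh-ish CR2032 (~10 ohm assumed): fine
print(loaded_voltage(3.0, 30, 40))  # aged cell (~40 ohm assumed): brown-out risk

def bulk_cap_uf(i_peak_ma, burst_ms, allowed_droop_v):
    """C = I*t / dV: bulk capacitance to ride out a transmit burst."""
    return (i_peak_ma / 1000.0) * (burst_ms / 1000.0) / allowed_droop_v * 1e6

print(round(bulk_cap_uf(30, 5, 0.3)))  # 30 mA for 5 ms, 0.3 V droop -> 500 uF
```

A few hundred µF of low-leakage bulk capacitance is the usual compromise, and as noted, its own leakage then joins the budget.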
Anyway, TI has stuff like this specifically designed for these applications, so it's kind of a nonissue these days:
I think Mouser had them for like $0.50 or something.
CR2032s have a much higher surface area on the contacts, which is required to dissipate the heat from the weld, as well as to make more surface contact with the springs if you use a battery socket instead.
Not true. Take a look at an Amazon Dash teardown. Stock AA (or AAA? Can't remember which) welded into place.
Shelf life on lithium batteries is longer, and they don't leak, so you can ship and store sensortags with the battery preinstalled.
My impression is that it's way more of a logistics thing than a customer-value thing.
The microcontroller is there as a programmable pulse-width generator. Yes, you can do this with an NE555, at the cost of much more power and more components; 555s also don't like low duty cycles and are less temperature-stable.
Having put the microcontroller on there, you must also put on its required decoupling capacitors. You might be able to remove them in a situation this simple with a bit more testing.
MOSFET Q1 is required because the peak current in the inverter is quite high, more than 20 mA. It also helps to steer the inductor current into the LED rather than the protection diodes of the PIC. Note the LED is "upside down".
MOSFET Q2 and the resistor are a convenience to prevent the circuit being destroyed by inserting the battery the wrong way up.
More at https://hackaday.io/project/11864-tritiled/log/62875-v22-rel...
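The trick that makes this kind of blinker last for years is how tiny the duty cycle is. A sketch with assumed pulse numbers in the spirit of the design, not the actual TritiLED firmware values:

```python
def avg_drive_current_ua(pulse_ma, pulse_us, rate_hz):
    """Average LED drive current for short-pulse, low-duty-cycle blinking."""
    duty = pulse_us * 1e-6 * rate_hz
    return pulse_ma * 1000.0 * duty

avg = avg_drive_current_ua(20, 2, 100)  # assumed: 20 mA pulses, 2 us wide, 100 Hz
print(f"{avg:.1f} uA average drive current")
years = 225 * 1000 / avg / (24 * 365)   # CR2032, ~225 mAh nominal
print(f"~{years:.0f} years from a coin cell (before self-discharge)")
```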
Somebody has to be listening when a coin-cell device wakes up. If the listening device is always on, that's not a problem, but always-on isn't feasible for low-power nodes. If listeners are only powered up intermittently, getting someone to listen requires some kind of synchronization system.
I have a simple outdoor thermometer which contains a solution to this problem. The sending unit goes outside, for best results sheltered but not near a wall that leaks heat. The receiver goes inside and has an LCD display. Both sender and receiver are battery-powered, and get over a year on two AA batteries.
The sender wakes up every 30 seconds or so and sends blind. The receiver wakes up just before each expected transmission. When you replace the batteries, you have to replace them in both units, and when the receiver is first powered up, it stays powered for a while until it's heard the sender a few times and is in sync.
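The guard window the receiver needs is set by oscillator tolerance. With assumed watch-crystal-class numbers:

```python
def guard_time_ms(interval_s, drift_ppm):
    """Worst-case mutual drift over one interval: both clocks can drift
    in opposite directions, so the error doubles."""
    return interval_s * 2 * drift_ppm * 1e-6 * 1000

print(round(guard_time_ms(30, 20), 2))   # 20 ppm crystals, 30 s interval
print(round(guard_time_ms(30, 100), 2))  # sloppier oscillator, same interval
```

Waking a millisecond or two early out of every 30 s is a tiny fraction of a percent of duty cycle, which is why the blind-transmit scheme is so cheap. The drift accumulates between successful receptions, which is presumably why the receiver re-syncs on every transmission it hears.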
@adunk says mesh nodes synchronize clocks so they're all listening at the same time, but intermediate nodes end up drawing enough power they can't actually run on a coin cell.
I've some intended for home monitoring-- they have "10 sensors including support for light, digital microphone, magnetic sensor, humidity, pressure, accelerometer, gyroscope, magnetometer, object temperature, and ambient temperature" for $30.
My Casio CMD-40 smartwatch has a calculator and programmable TV remote. The CR2032 coin cell inside is enough for it to last about 2 years. The display is always-on, and I use the calculator often, although admittedly I rarely use the IR transmitter. So apparently sensors use a lot more power than an LCD and calculator.
Even using your IR transmitter to send a short burst every 5 minutes wouldn't be as much power as if you had an IR receiver that had to listen to incoming signals. But still, start using the IR transmitter every 5 minutes and you'll see a dent in battery life.
Also, the watch has a clock frequency measured in kilohertz, and the processor is probably some dumb as rocks 8051 variant or something with a tiny amount of static RAM. Big difference in power consumption compared to a 32-bit ARM core.
Cortex-M0s are pretty competitive in that power regime these days, though, especially if your application (like the watch) will allow you to run the M0 at sub-1MHz speeds.
"Waking up" in the context of the article should be taken as "waking up and turning on the radio to check if anything is happening that I should be aware of". This involves power-consuming activities, such as turning on and using the radio, and cannot be completely passive. That's why it pays off to not do this too often.
How does the mesh network handle deep sleep, anyway? Do they synchronize clocks so they all wake up at the same time, or does a node just transmit continuously until someone responds?
To send data to a node in sampled listening mode, the sender sends a string of smaller wake-up packets that indicate when the sender intends to send the real data packet. When the receiver picks up the wake-up packet, it knows when it should wake up again to receive the data packet, so it can safely go back to sleep again for a while. And if the sender knows that the receiver is in always-on mode, because it is powered by a wall-socket, the sender can skip sending those wake-up packets, saving a bit of power.
This requires clocks to be synchronized, but only loosely - millisecond synchronization is enough. The trick is to strike the right balance between communication responsiveness and power consumption.
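The cost asymmetry in this scheme is easy to see: in the worst case the sender must strobe wake-up packets for a whole receiver wake interval before the data goes out. With assumed numbers (10 mA TX current, illustrative intervals):

```python
def worst_case_strobe_uc(rx_wake_interval_ms, tx_current_ma):
    """Charge spent strobing until the receiver is guaranteed to have
    sampled the channel once (mA * ms = uC)."""
    return tx_current_ma * rx_wake_interval_ms

for interval_ms in (125, 500, 2000):
    print(f"receiver wakes every {interval_ms} ms -> "
          f"up to {worst_case_strobe_uc(interval_ms, 10)} uC per send")
```

Longer receiver sleep intervals make listening cheaper but every transmission toward that node dearer, which is exactly the responsiveness-vs-power balance described above.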
Another trick is to not use strong transmitters, and build a phased array from a mesh-net of sensors.
This needs incredibly precise time synchronization, knowledge of every node's position, and knowledge of the receiver's position.
The future is here.
Even up until the 70's, companies made speakerphone amplifiers that didn't use any electricity at all. I have one that I use with an iPod, and the sound fills two very large rooms.
Once "transistor" became a marketing term, every industrial design started with a battery, instead of finding creative ways to actually solve the problem first.
Man, I'm old.
This is well suited to stuff like occupancy sensors in buildings. Time resolution isn't as small as an always-powered system could be, but if your use case for the data is "Turn off the lights if we haven't seen anyone move for 30 minutes" then it doesn't much matter if you transmit every 5 minutes instead of every 5 seconds.
He's spot on about the data firehose problem, though. IoT data usage patterns break all kinds of assumptions that network and storage engineers have about how computers do stuff. The packet sizes are really small, so routers have to work way harder, and all those tiny I/Os blow through IOPS way more than a typical web app does.
And then there's the data formatting issues, like how searching this pile of stuff makes you want to put it in columnar formats but time-series type displays want row oriented formats. Oh, and the indexers. Ugh. A lot of good indexing code has trouble keeping their internal trees balanced with all the I/O going on at the same time the big data reporting stuff is going on a spelunking expedition through your storage. There's probably some group at Amazon going insane trying to come up with solutions to keep up with even the limited IoT deployments out there already. I wouldn't be surprised if their storage solutions are operating in a constant state of imminent capacity breakdown of some kind.
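The row-vs-column tension is easy to see with a toy example (field names and records made up):

```python
# Row orientation: one record per reading -- natural for ingest and for
# "show me everything this sensor said at time T" displays.
rows = [
    {"ts": 1, "sensor": "a", "temp": 21.5},
    {"ts": 2, "sensor": "a", "temp": 21.7},
    {"ts": 3, "sensor": "b", "temp": 19.2},
]

# Column orientation: one contiguous list per field -- natural for analytic
# scans, since max(temp) touches one list rather than every record.
cols = {key: [r[key] for r in rows] for key in rows[0]}

print(max(cols["temp"]))                      # columnar scan
print(next(r for r in rows if r["ts"] == 2))  # row-wise lookup
```

A store that has to serve both access patterns, under millions of tiny writes per second, ends up maintaining both layouts (or indexes over one of them), which is where the pain described above comes from.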
TI keeps trying to push their MSP432 Cortex-M4 series on me as a replacement for MSP430. I guess they don't want to have customers go with a different vendor just because TI doesn't have an ARM solution that supports whatever code module or toolchain the customer wants to use.
These chips are totally amazing. TI used every trick in the book to help the designers save power. There's even a specialized coprocessor that uses special low leakage silicon to handle some simple 16 bit I/O processing so you don't have to wake up the main processor, and the main Cortex-M3 is pretty darn low power already.
I am kind of annoyed that you have to call into their ROM routines to use the radio, though. The datasheet is a snarky little tease that says ha ha, the radio registers are somewhere around these addresses but we aren't going to tell you what they do! Shhhh it's a secret! This is particularly annoying because I could make my life a lot simpler if I could interface with the Nordic shockburst protocols, and TI's firmware only knows how to speak TI simplelink packet formats.
The thing that TI isn't talking about is that they did some stuff around the voltage regulation that looks like they were planning on pushing some kind of energy harvesting companion silicon. It's just a guess, but they might announce something in the future along those lines. Whoever ends up first to market with a productized energy harvesting IoT chip is going to print bucketloads of money.
There's also a lot of stuff going on in the low power LTE space right now. Altair semi announced the 1250 chip, which is -- get this -- power competitive with the TI chip for low frequency telemetry applications. AT&T and Verizon have already launched their cat-M1 networks and cat-NB1 is going to be out by the beginning of next year. Those protocols use oversampling and a bunch of other mad scientist RF hacks to get even more range than the LTE radio in your phone. The bandwidth is really low, like modem or ISDN low, but there's a ton of applications that can be implemented with literally one bit per day worth of data. The chips are getting pumped out in volume now, and the cost regime is actually causing revenue jitters at the carriers. They want to squeeze some extra revenue out of LTE IoT space since it's basically free for them to implement since the LTE IoT designers figured out how to reuse the otherwise useless guard bands surrounding the real full-fat LTE signals your phone uses.
However, it's kind of a tough problem to figure out a cost model where customers are paying something like $10 a year for maybe hundreds of devices that are so cheap that the most expensive item on the board is the SIM card. And even that's going away in the 2nd generation IoT chips now that someone figured out how to get around the UICC requirement for provisioning by doing something funny with a protected element in the main LTE radio.
Also check out my profile. It's not a joke, someone made a homebrew monitoring solution out of someone else's IoT chip to keep track of cow farts because they wanted to adjust the feed based on farts per minute or something, I didn't really understand it.
Regardless, profit is important and someone had to get squeezed out of the BoM of these things being manufactured in the billions. It finally happened because of a combination of improvements to system-on-chip security, CMs cracking down on security, and the carriers hauling their EDI-based provisioning systems into the 21st century. As a result, carriers can now securely provision devices after manufacturing. AT&T is kind of a jerk about it though; they sometimes do some stuff to the SIM when you onboard an unlocked device to tie it to their network.
I'm still not entirely clear how the keys get distributed in a SIMless world though. The process I use to onboard stuff into Verizon's IoT cloud involves me uploading a CSV file to a server somewhere and making some REST requests, but it's just IMSIs. The virtual UICC in the products like your watch works pretty much the same way, according to my chip vendor. But they have a multicarrier solution where the virtual UICC already knows about the major networks, so maybe they're exchanging keys as part of the OTA activation flow and securing it with a hardware key they give to the carriers in a HSM or something. Or maybe the manufacturers are getting HSMs at the factory and doing it right in the manufacturing process. I tried to wrap my head around the 3GPP documents on the subject and I just got more and more confused.
There's definitely a vendor proprietary aspect to what's going on though, because I can see the OTA provisioning packets in QXDM and it says it doesn't know how to decode them.