
How to Make a Wireless Sensor Live for a Year on One Tiny Coin Cell Battery - adunk
http://www.thingsquare.com/blog/articles/sensortag-power/
======
Hasz
This isn't that crazy -- you can get something as available and low-cost as an
ESP8266 ($1.72 in singles) down to ~0.01mA to ~0.005mA if you implement the
hardware watchdog timer, use sleep mode, remove extra LEDs, etc., and keep the
external libraries to a minimum. There are a couple of other tricks, but that's
a good place to start.
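
For reference, a minimal sketch of the sleep-mode approach on the ESP8266
(Arduino core; assumes GPIO16 is wired to RST so the RTC timer can reset the
chip awake, and the 30-minute interval is just an example):

    #include <ESP8266WiFi.h>

    void setup() {
      // Each wake is a fresh boot: do the work, then go dark again.
      WiFi.mode(WIFI_OFF);          // keep the radio off while measuring
      // ... read sensor, bring WiFi up briefly, send the reading ...
      // Sleep for 30 minutes; only the RTC stays powered (~20 uA).
      ESP.deepSleep(30ULL * 60 * 1000000ULL);
    }

    void loop() {
      // Never reached: deepSleep() ends in a reset, so setup() runs each wake.
    }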

By using a low-leakage FET on the SDA/SCL lines and sacrificing a bit of
speed, you can have up to 127 I2C sensors that only use power when they measure.

A good low-quiescent-current buck-boost regulator can squeeze every last bit
of power out of the battery for your chip, even when the battery is basically
at 0 capacity (2.7V for Li-ion, 0.8V for alkaline), and use very little power
on its own.

Dropping the system clock frequency can also improve battery life.

~~~
sbierwagen
"This turnkey off the shelf product isn't all that hard to do yourself-- just
buy the components, make your own PCBs, do some surface mount soldering, then
write all your own software, including the wireless mesh network stack."

~~~
pjc50
I believe he's not suggesting writing your own stack for the ESP, just keeping
it powered off 99.9% of the time. The rest is just basic electronics.

~~~
sbierwagen
The ESP does self-healing mesh stuff out of the box?

~~~
trelliscoded
[https://espressif.com/en/products/software/esp-mesh/overview](https://espressif.com/en/products/software/esp-mesh/overview)

It's not exactly self-healing in the same way 802.11s is; IIRC it's your basic
distance-vector routing protocol, with all the usual problems that entails.

------
gwbas1c
I don't understand the obsession with coin cell batteries. Why not use an AAA
or AA battery? There's clearly room for one. Coin cell batteries are a lot
more toxic than ordinary AAA or AA batteries.

In this case, they had to jump through quite a few hoops to get 1-2 years of
battery life. Is sampling every 30 minutes really practical in all
applications where someone would want to use a TI sensortag? It's probably
easier to just modify the sensortag to use a larger battery and get the
sampling rate you need. Depending on the situation, figuring out how to use a
C or D battery might be easier than all the compromises made.

My garage door openers have plenty of room for AAA batteries, or could use AA
with minor modifications; but they use coin cells. I just don't get it.

~~~
retSava
Coin cells are common in the marketing of microcontrollers and the like, and
as such serve as something of a reference point. There are also a good number
of devices (eg BLE beacon trackers) where AA or AAA would be too large. But if
you can, use larger batteries :)

Coin cells aren't the best for this kind of system. When the current draw is a
few tens of mA (CPU + transceiver), even for a short while, the coin cell's
internal resistance pulls down the voltage over the terminals. That internal
resistance rises as the battery ages, so the risk of a brown-out reset of the
SoC increases even while there is still capacity left to run a load with lower
current requirements.
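
As a rough illustration (ballpark numbers for the sake of argument, not from a
datasheet - a fresh CR2032 might be ~10 ohms internal, an aged one many times
that):

    // Terminal voltage under a pulse load: V = V_oc - I * R_internal.
    float terminal_voltage(float v_oc, float i_amps, float r_int_ohms) {
        return v_oc - i_amps * r_int_ohms;
    }
    // Fresh cell: 3.0 V - 0.030 A * 10 ohm = 2.7 V  -> fine
    // Aged cell:  2.9 V - 0.030 A * 60 ohm = 1.1 V  -> brown-out territory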

You could to some extent reduce this by using capacitors to smooth the peak
current drawn from the battery, but caps have leakage current too, so the
actual lifetime could end up shorter still. Damned if you do...

~~~
trelliscoded
I don't think I've ever seen any design that didn't have decoupling caps
across the power lines. I mean, the Murata parts I use have dozens of megohms
of resistance at DC, and the chip is off 90% of the time anyway, so there's no
real energy lost keeping the caps charged while the SoC is only drawing the
minuscule amount of power it needs when it's asleep.

Anyway, TI has stuff like this specifically designed for these applications,
so it's kind of a nonissue these days:

[http://www.ti.com/lit/ds/symlink/tps62730.pdf](http://www.ti.com/lit/ds/symlink/tps62730.pdf)

I think Mouser had them for like $0.50 or something.

------
ollybee
Related is this project for running LEDs for a long time, just brightly enough
to act as markers:
[https://github.com/tedyapo/tritiled](https://github.com/tedyapo/tritiled)

~~~
hughes
That's a super interesting concept. I'm also surprised by the complexity. Why
does a low-power LED need a programmable microcontroller, multiple capacitors,
a pair of MOSFETs, and an inductor?

~~~
pjc50
The inductor is doing all the work - the circuit is essentially an oddly
shaped buck controller. What it's doing is building up energy in the inductor
and dumping it all at once into the LED.

The microcontroller is there as a programmable pulse-width generator. Yes, you
can do this with an NE555, but at the cost of much more power and more
components, and they don't like low duty cycles and are less temperature-stable.

Having put the microcontroller on there, you must also put on its required
decoupling capacitors. You might be able to remove them in a situation this
simple with a bit more testing.

MOSFET Q1 is required because the peak current in the inductor is quite high,
more than 20mA. It also helps to steer the inductor current into the LED
rather than into the protection diodes of the PIC. Note the LED is "upside
down".

MOSFET Q2 and the resistor are a convenience to prevent the circuit from being
destroyed by inserting the battery the wrong way around.
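
The firmware essentially boils down to a pulse loop. A minimal sketch in
generic C (not the actual tritiled code; gpio_high/gpio_low/delay_us/
sleep_until_timer are hypothetical stand-ins for the PIC-specific calls):

    // Charge the inductor briefly through Q1, let it dump into the LED,
    // then sleep until the next pulse. Pulse width and rate set brightness.
    void pulse_loop(void) {
        for (;;) {
            gpio_high(Q1_GATE);     // build up current in the inductor
            delay_us(3);            // a few microseconds is plenty
            gpio_low(Q1_GATE);      // inductor current dumps into the LED
            sleep_until_timer();    // MCU sleeps between ~kHz wakeups
        }
    }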

More at
[https://hackaday.io/project/11864-tritiled/log/62875-v22-release](https://hackaday.io/project/11864-tritiled/log/62875-v22-release)

------
Animats
Note that only the sensor runs for a year on a coin cell. The wireless mesh
router nodes are not mentioned.

Somebody has to be listening when the coin-cell device wakes up. If the
listening device is always on, that's not a problem - but always-on listening
is not feasible for low-power nodes. If listeners are only powered up
intermittently, getting someone to listen requires some kind of
synchronization system.

I have a simple outdoor thermometer which contains a solution to this problem.
The sending unit goes outside, for best results sheltered but not near a wall
that leaks heat. The receiver goes inside and has an LCD display. Both sender
and receiver are battery-powered, and get over a year on two AA batteries.

The sender wakes up every 30 seconds or so and sends blind. The receiver wakes
up just before each expected transmission. When you replace the batteries, you
have to replace them in both units, and when the receiver is first powered up,
it stays powered for a while until it's heard the sender a few times and is in
sync.
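
In pseudo-C, that rendezvous scheme amounts to something like the following
(the names and types here are hypothetical; the thermometer's actual firmware
is of course not published):

    // Sender: transmit blind on a fixed period; no receive logic at all.
    void sender(void) {
        for (;;) {
            radio_send(read_temperature());
            deep_sleep_seconds(30);
        }
    }

    // Receiver: listen continuously at first to learn the sender's phase,
    // then wake just before each expected transmission.
    void receiver(void) {
        sync_to_sender();   // stay powered until a few packets fix the phase
        for (;;) {
            deep_sleep_until(next_expected_tx() - GUARD_MS);
            packet_t pkt;
            if (radio_receive(&pkt, 2 * GUARD_MS))  // short listen window
                update_display(&pkt);   // and re-estimate clock drift
        }
    }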

~~~
Scaevolus
You can buy Sensortags with sub-1GHz RF transmitters, which have a
(theoretical) range of >500m - enough that most applications shouldn't need a
mesh network at all. They also have accurate crystal-based clocks for
time-rendezvous networking.

I've got some intended for home monitoring -- they have "10 sensors including
support for light, digital microphone, magnetic sensor, humidity, pressure,
accelerometer, gyroscope, magnetometer, object temperature, and ambient
temperature" for $30.

~~~
trelliscoded
The microphone doesn't work; TI never released packaged firmware for it.
There's source code available that should work, but I haven't seen anyone on
the forums who has even managed to build it, much less integrate it into an
application.

------
cushychicken
Jack Ganssle wrote a much more thorough article on the same topic here, if
anyone is interested in the EE nitty gritty.

[http://www.ganssle.com/reports/ultra-low-power-design.html](http://www.ganssle.com/reports/ultra-low-power-design.html)

------
peterburkimsher
Measuring once every 30 minutes and waking up once every 5 minutes isn't a
lot. What surprises me most is that the sensor uses so much power.

My Casio CMD-40 smartwatch has a calculator and programmable TV remote. The
CR2032 coin cell inside is enough for it to last about 2 years. The display is
always-on, and I use the calculator often, although admittedly I rarely use
the IR transmitter. So apparently sensors use a lot more power than an LCD and
calculator.

~~~
zild3d
It's the radio that's consuming most of the power.

Even using your IR transmitter to send a short burst every 5 minutes wouldn't
use as much power as an IR receiver that has to listen for incoming signals.
But still, start using the IR transmitter every 5 minutes and you'll see a
dent in battery life.
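
The asymmetry falls straight out of the duty-cycle math; with made-up but
plausible numbers:

    // Average current = sleep current + duty cycle * active current.
    double avg_current_ma(double i_sleep_ma, double i_active_ma,
                          double t_active_s, double period_s) {
        return i_sleep_ma + i_active_ma * (t_active_s / period_s);
    }
    // TX: 30 mA for 5 ms every 5 min -> avg_current_ma(0.005, 30, 0.005, 300)
    //     ~= 0.0055 mA. An always-on RX at 10 mA is roughly 2000x worse.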

~~~
trelliscoded
It's true, and radio technology has a lot of inefficiencies in the analog
bits. Digital RF techniques help, but the Rx amplifier, downconverters,
mixers, and so on all lose a lot of energy in the process.

Also, the watch has a clock frequency measured in kilohertz, and the processor
is probably some dumb-as-rocks 8051 variant or similar with a tiny amount of
static RAM. Big difference in power consumption compared to a 32-bit ARM core.

Cortex-M0s are pretty competitive in that power regime these days, though,
especially if your application (like the watch) will let you run the M0 at
sub-1MHz speeds.

------
pavel_lishin
Waking up only every 5 minutes limits the usefulness of this, doesn't it? It's
not exactly something that they can use to respond to random, arbitrary
events. Though if it's used to sample something like, "how many people are
currently waiting at this crosswalk", that's pretty good.

~~~
adunk
It is possible to react at any time because of an external hardware event,
such as a button press or a hardware sensor that triggers. Depending on the
sensor or external hardware, those events can be very cheap in terms of power
consumption. For example, a button switch can be completely passive and
consume power only when it triggers. Once it triggers, the microprocessor
wakes up and can decide to handle the event, potentially by using the radio to
send a message. Sending the message will consume much more power than just
turning on the microprocessor, so depending on the application logic the
microprocessor can simply choose to ignore the event and go back to sleep.
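
A minimal sketch of that wake-on-event pattern in generic C (the HAL calls and
pin names here are hypothetical stand-ins for whatever your MCU vendor
provides):

    #include <stdbool.h>

    static volatile bool button_event = false;

    static void button_isr(void) {  // fires when the passive switch closes
        button_event = true;        // stay cheap in the ISR: just set a flag
    }

    int main(void) {
        attach_wakeup_interrupt(BUTTON_PIN, button_isr);
        for (;;) {
            enter_deep_sleep();     // microamps until the pin triggers
            if (button_event) {
                button_event = false;
                if (event_worth_reporting())  // application logic decides
                    radio_send_report();      // the expensive part
                // otherwise fall through and go straight back to sleep
            }
        }
    }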

"Waking up" in the context of the article should be taken as "waking up an
turning on the radio to check if anything happens that I should be aware of".
This involves power-consuming activities, such as turning on and using the
radio, and cannot be completely passive. That's why it pays off to not do this
too often.

~~~
sbierwagen
Sure, the _local_ node can wake up at any time, but if it has to transmit over
the mesh network, then it has to wait for the remote node to wake up.

How does the mesh network handle deep sleep, anyway? Do the nodes synchronize
clocks so they all wake up at the same time, or does a node just transmit
continuously until someone responds?

~~~
adunk
The mesh nodes have to run either in always-on mode (which makes perfect sense
if they are powered by, say, a wall socket, which they often are) or in a
sampled listening mode. Sampled listening shaves some 98% off the radio duty
cycle compared to always-on mode, but is still not enough for the mesh nodes
to run on coin cell batteries. Give them a larger pack of batteries and
they'll run for months, though.

To send data to a node in sampled listening mode, the sender sends a string of
smaller wake-up packets that indicate when the sender intends to send the real
data packet. When the receiver picks up the wake-up packet, it knows when it
should wake up again to receive the data packet, so it can safely go back to
sleep again for a while. And if the sender knows that the receiver is in
always-on mode, because it is powered by a wall-socket, the sender can skip
sending those wake-up packets, saving a bit of power.

This requires clocks to be synchronized, but only loosely - millisecond
synchronization is enough. The trick is to strike the right balance between
communication responsiveness and power consumption.
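
A rough sketch of both sides of that scheme (pseudo-C; the radio API names are
hypothetical, not Thingsquare's actual code):

    #include <stdint.h>

    // Sender: strobe short wake-up frames, each announcing how long
    // until the real data frame, then send the data.
    void send_sampled(const packet_t *data, uint32_t strobe_ms) {
        uint32_t t_data = now_ms() + strobe_ms;
        while (now_ms() < t_data)
            radio_send_wakeup(t_data - now_ms());  // "data in N ms"
        radio_send(data);
    }

    // Receiver: briefly sample the channel a few times per second.
    void listen_sampled(void) {
        for (;;) {
            uint32_t data_in_ms;
            if (radio_sample_channel(&data_in_ms)) {  // heard a wake-up frame
                sleep_ms(data_in_ms);   // nap until the data frame is due
                radio_receive();
            } else {
                sleep_ms(SAMPLE_PERIOD_MS);  // nothing there; back to sleep
            }
        }
    }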

~~~
sbierwagen
The BOM indicates that the clock oscillator is just a vanilla quartz crystal.
[http://www.ti.com/lit/df/swrr135b/swrr135b.pdf](http://www.ti.com/lit/df/swrr135b/swrr135b.pdf)
Does the firmware do any clever temperature compensation using the onboard
thermometer, or are you just sending lots of clock skew correction packets as
the mesh warms up and cools down diurnally?

~~~
adunk
For this level of time synchronization it is enough to send a few extra bytes
of synchronization information in each packet. If you were down at
microsecond-level synchronization, you would probably need to do temperature
compensation - particularly in outdoor deployments, where there can be a 40 C /
100 F difference between two nodes on a sunny winter day.

------
Pica_soO
The trick is to keep all your silicon dark by having an external timer that
reactivates the chip - or by using an MCU that supports an equivalent
low-power core mode.

Another trick is to not use strong transmitters, and instead build a phased
array from a mesh network of sensors.
[https://en.wikipedia.org/wiki/Phased_array](https://en.wikipedia.org/wiki/Phased_array)
This needs incredibly tight time synchronization, plus knowledge of the node
positions and of the receiver's position.

~~~
retSava
Sounds like the phased array approach would be expensive, complex, and
incredibly hard to get working well in real deployments :)

~~~
aexaey
Technically, WiFi beamforming, which has been part of the standard since
802.11n and is really taking off with high-end 802.11ac routers, is a phased
array for all intents and purposes.

The future is here.

------
reaperducer
People forget that once upon a time, radios didn't need batteries at all.

Even up until the '70s, companies made speakerphone amplifiers that didn't use
any electricity at all. I have one that I use with an iPod, and the sound
fills two very large rooms.

Once "transistor" became a marketing term, every industrial design started
with a battery, instead of finding creative ways to actually solve the problem
first.

Man, I'm old.

~~~
ramses0
Share links to some of these non-battery "electric" devices!

------
zild3d
I had a professor who did his thesis on a 10-year coin-cell-powered wireless
sensor, although it was a transmit-only protocol. It's a pretty well
understood area - send less data, and put the system to sleep for as long as
feasible.

[http://www.firner.com/research.html](http://www.firner.com/research.html)

~~~
adunk
Transmit-only mechanisms are definitely the best way to reach the absolute
minimum power levels. The problem with transmit-only systems is that they are
difficult to use in most real-world situations. Once you have deployed a
transmit-only sensor, you are stuck with the configuration that was hard-coded
into it. There is no way to change how often you want the sensor to report,
how and when it should trigger, or any other parameters and requirements that
may change as the project develops - encryption keys being one example.

~~~
wlesieutre
The EnOcean protocol is a good example of this. Their sensors don't use
batteries at all, instead opting to charge a capacitor from small PV panels or
Peltier generators.

[https://www.enocean.com/en/technology/energy-harvesting-wireless/](https://www.enocean.com/en/technology/energy-harvesting-wireless/)

[https://www.enocean.com/en/technology/energy-harvesting/](https://www.enocean.com/en/technology/energy-harvesting/)

This is well suited to stuff like occupancy sensors in buildings. The time
resolution isn't as fine as an always-powered system's, but if your
use case for the data is "Turn off the lights if we haven't seen anyone move
for 30 minutes" then it doesn't much matter if you transmit every 5 minutes
instead of every 5 seconds.

------
jimmcslim
On the subject, Charlie Stross' blog post 'How low (power) can you go?' [1],
on the potential impact on urban architecture and environments, is worth
reading.

[1] [http://www.antipope.org/charlie/blog-static/2012/08/how-low-power-can-you-go.html](http://www.antipope.org/charlie/blog-static/2012/08/how-low-power-can-you-go.html)

~~~
trelliscoded
He puts out some interesting ideas, but he forgot that genome sequencing
requires some occasionally toxic reagents that are incompatible with bolting
DNA sensors up all across town.

He's spot on about the data firehose problem, though. IoT data usage patterns
break all kinds of assumptions that network and storage engineers have about
how computers do stuff. The packet sizes are really small, so routers have to
work way harder, and all those tiny I/Os blow through IOPS way more than a
typical web app does.

And then there are the data formatting issues, like how searching this pile of
stuff makes you want to put it in columnar formats, but time-series type
displays want row-oriented formats. Oh, and the indexers. Ugh. A lot of good
indexing code has trouble keeping its internal trees balanced with all the
I/O going on at the same time the big-data reporting stuff is off on a
spelunking expedition through your storage. There's probably some group at
Amazon going insane trying to come up with solutions to keep up with even the
limited IoT deployments out there already. I wouldn't be surprised if their
storage solutions are operating in a constant state of imminent capacity
breakdown of some kind.

------
tiku
There is a Dutch startup that uses wireless signals to extend battery life.
Much easier. They are rolling it out for Schiphol Airport, if I recall
correctly. Schiphol had a full-time employee just for changing batteries all
over the place...

------
mdergosits
Samsara [1] does 3+ years on a CR2032

[https://www.samsara.com/products/models/im31](https://www.samsara.com/products/models/im31)

------
remline
I would have expected a low-power TI solution to use an MSP430 instead of the
ARM. It would be interesting to see the tradeoffs for a comparable MSP430
solution.

~~~
trelliscoded
Some of the radio handling bits would be more complex on the MSP430 due to its
lack of 32-bit support. There's also some third-party firmware, e.g. for the
6LoWPAN stuff, that's only available as Cortex binaries.

TI keeps trying to push their MSP432 Cortex-M4 series on me as a replacement
for the MSP430. I guess they don't want customers to go with a different
vendor just because TI doesn't have an ARM solution that supports whatever
code module or toolchain the customer wants to use.

~~~
remline
Thanks. I made a prototype a couple of years back where the MSP430 was
sufficient, and I wanted to move it to LLVM to solve my only toolchain
annoyance. But I have trouble seeing the point of investing the time as ARM
edges closer on the battery-life advantage.

------
trelliscoded
I'm actually building a product that incorporates a CC1350, the variant that
has both a 2.4 GHz and a proprietary sub-1GHz radio.

These chips are totally amazing. TI used every trick in the book to help
designers save power. There's even a specialized coprocessor that uses special
low-leakage silicon to handle some simple 16-bit I/O processing so you don't
have to wake up the main processor, and the main Cortex-M3 is pretty darn low
power already.

I am kind of annoyed that you have to call into their ROM routines to use the
radio, though. The datasheet is a snarky little tease that says ha ha, the
radio registers are somewhere around these addresses, but we aren't going to
tell you what they do! Shhhh, it's a secret! This is particularly annoying
because I could make my life a lot simpler if I could interface with the
Nordic ShockBurst protocols, but TI's firmware only knows how to speak TI
SimpleLink packet formats.

The thing that TI isn't talking about is that they did some stuff around the
voltage regulation that looks like they were planning on pushing some kind of
energy harvesting companion silicon. It's just a guess, but they might
announce something in the future along those lines. Whoever ends up first to
market with a productized energy harvesting IoT chip is going to print
bucketloads of money.

There's also a lot of stuff going on in the low-power LTE space right now.
Altair Semiconductor announced the 1250 chip, which is -- get this --
power-competitive with the TI chip for low-frequency telemetry applications.
AT&T and Verizon have already launched their Cat-M1 networks, and Cat-NB1 is
going to be out by the beginning of next year. Those protocols use
oversampling and a bunch of other mad-scientist RF hacks to get even more
range than the LTE radio in your phone. The bandwidth is really low, like
modem or ISDN low, but there's a ton of applications that can be implemented
with literally one bit per day worth of data. The chips are getting pumped out
in volume now, and the cost regime is actually causing revenue jitters at the
carriers. They want to squeeze some extra revenue out of the LTE IoT space,
which is basically free for them to implement since the LTE IoT designers
figured out how to reuse the otherwise useless guard bands surrounding the
real full-fat LTE signals your phone uses.

However, it's kind of a tough problem to figure out a cost model where
customers are paying something like $10 a year for maybe hundreds of devices
that are so cheap that the most expensive item on the board is the SIM card.
And even that's going away in the 2nd generation IoT chips now that someone
figured out how to get around the UICC requirement for provisioning by doing
something funny with a protected element in the main LTE radio.

Also check out my profile. It's not a joke, someone made a homebrew monitoring
solution out of someone else's IoT chip to keep track of cow farts because
they wanted to adjust the feed based on farts per minute or something, I
didn't really understand it.

~~~
keithnz
We do GSM-based monitoring ( [http://outpostcentral.com](http://outpostcentral.com) /
[http://www.mywildeye.com/wildeye/](http://www.mywildeye.com/wildeye/) ). We
had mesh tech at one stage, but pretty much stick to GSM these days. The LTE
IoT bands are starting to be viable as they get rolled out. Our battery tech
can get quite a number of years. SIM costs are tiny these days (for us, at
least).

~~~
dboreham
Devices don't need actual SIMs these days (e.g. my Samsung watch doesn't have
one). But do they still need the chip that's on the SIM (and hence there are
no major cost savings)?

~~~
trelliscoded
They never needed it from a technical perspective; it's there to make
provisioning easier. Because the carriers were pretty decoupled from the
baseband ecosystem, there wasn't a really good way to get the subscriber keys
into the radio hardware unless the carrier stuck their nose into the supply
chain somehow. And because baseband vendors all hate each other, there wasn't
a lot of interest in cooperating on a standard to do something like that.
Plus, Gemalto kept trying to throw monkey wrenches into the committees by
doing some, quite frankly, pretty messed up things.

Regardless, profit is important, and someone had to get squeezed out of the
BoM of these things being manufactured in the billions. It finally happened
because of a combination of improvements to system-on-chip security, CMs
cracking down on security, and the carriers hauling their EDI-based
provisioning systems into the 21st century. As a result, carriers can now
securely provision devices after manufacturing. AT&T is kind of a jerk about
it, though; they sometimes do some stuff to the SIM when you onboard an
unlocked device to tie it to their network.

I'm still not entirely clear on how the keys get distributed in a SIM-less
world, though. The process I use to onboard stuff into Verizon's IoT cloud
involves me uploading a CSV file to a server somewhere and making some REST
requests, but it's just IMSIs. The virtual UICC in products like your watch
works pretty much the same way, according to my chip vendor. But they have a
multicarrier solution where the virtual UICC already knows about the major
networks, so maybe they're exchanging keys as part of the OTA activation flow
and securing it with a hardware key they give to the carriers in an HSM or
something. Or maybe the manufacturers are getting HSMs at the factory and
doing it right in the manufacturing process. I tried to wrap my head around
the 3GPP documents on the subject and just got more and more confused.

There's definitely a vendor-proprietary aspect to what's going on, though,
because I can _see_ the OTA provisioning packets in QXDM and it says it
doesn't know how to decode them.

