How to Make a Wireless Sensor Live for a Year on One Tiny Coin Cell Battery (thingsquare.com)
153 points by adunk 6 months ago | 86 comments



This isn't that crazy -- you can get something as available and low cost as an ESP8266 ($1.72 in singles) down to ~0.01mA to ~0.005mA if you implement the hardware watchdog timer, use deep sleep mode, remove extra LEDs, etc., and keep the external libraries to a minimum. There are a couple of other tricks, but it's a good place to start.
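
For what it's worth, the basic shape of that is roughly the following (a hypothetical sketch, assuming the Arduino core for the ESP8266 and GPIO16 wired to RST so the RTC timer can reset the chip out of deep sleep):

    #include <ESP8266WiFi.h>

    void setup() {
      WiFi.mode(WIFI_STA);
      WiFi.begin("my-ssid", "my-pass");            // placeholder credentials
      unsigned long t0 = millis();
      while (WiFi.status() != WL_CONNECTED && millis() - t0 < 10000)
        delay(100);
      // ... read the sensor and push the value somewhere here ...
      ESP.deepSleep(30ULL * 60ULL * 1000000ULL);   // 30 min with everything but the RTC off (~20 uA)
    }

    void loop() {}                                 // never reached; the chip resets on wake-up

The single-digit-uA figures need more care (cutting the LED and the USB-serial chip on dev boards, and power-gating anything external), but the deep-sleep call is where most of the win comes from.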

By using a low leakage FET on the SDA/SCL lines and sacrificing a bit of speed, you can have up to 127 I2C sensors that only use power when they measure.
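
A related trick, sketched below with a made-up pin and address (Arduino-style), is to gate the sensor's supply rail through a P-FET from a GPIO so it only draws current while you actually read it:

    #include <Wire.h>

    const int     SENSOR_EN   = 5;     // GPIO driving the high-side P-FET gate (LOW = sensor powered)
    const uint8_t SENSOR_ADDR = 0x48;  // example 7-bit address; the bus allows up to 127 of them

    void setup() {
      pinMode(SENSOR_EN, OUTPUT);
      digitalWrite(SENSOR_EN, HIGH);   // sensor unpowered by default
      Wire.begin();
    }

    uint16_t readSensor() {
      digitalWrite(SENSOR_EN, LOW);    // power up
      delay(2);                        // datasheet-dependent start-up time
      Wire.requestFrom(SENSOR_ADDR, (uint8_t)2);
      int hi = Wire.read();
      int lo = Wire.read();
      digitalWrite(SENSOR_EN, HIGH);   // power back down
      return ((uint16_t)hi << 8) | (uint8_t)lo;
    }

    void loop() {
      uint16_t v = readSensor();
      (void)v;                         // transmit or log it, then sleep
      delay(60000);
    }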

A good low quiescent current buck/boost regulator can squeeze every last bit of power for your chip even when the battery is basically at 0 capacity (2.7V for Li-ion, 0.8V for alkaline) and uses very little power on its own.

Dropping the system clock frequency can also improve battery life.


"This turnkey off the shelf product isn't all that hard to do yourself-- just buy the components, make your own PCBs, do some surface mount soldering, then write all your own software, including the wireless mesh network stack."


I believe he's not suggesting writing your own stack for the ESP, just keep it powered off 99.9% of the time. The rest is just basic electronics.


The ESP does self-healing mesh stuff out of the box?


https://espressif.com/en/products/software/esp-mesh/overview

It's not exactly self-healing in the same way 802.11s is; iirc it's your basic distance-vector routing protocol, with all the usual problems that entails.


I think the GP comment was intended in the sense that it's trivial for someone who does this for a living as an employee of a hardware product company to get this sort of power-draw out of bog-standard components; they weren't suggesting that it's an easy DIY project.


Well sure, a hardware company isn't going to buy this. Check their pricing page.

http://www.thingsquare.com/start/

This is the Arduino business model-- easy introductory hardware for beginners. An actual hardware company wouldn't buy these for the same reason a router company wouldn't buy Cisco hardware and tape over the logos: there wouldn't be any margin.

I would characterize GGP's argument as more that this isn't unprecedented, and again, sure. Lithium cell-powered electronics have been around for decades, adding wireless isn't new. The point is that thingsquare is supposed to be easy. You buy their kit and press a button. You don't need to buy individual parts, or do assembly, or write glue code, or know what I2C is.

This is a common disease of the engineer's mind. "Why does anybody buy a car new? Just buy one used, and replace parts yourself when they break! Heck, why buy parts, when they're just metal and electronics? It should be easy enough to do your own aluminum casting, and wind your own electric motors." "You fool, you buy aluminum? Bauxite is everywhere! Just smelt it yourself, it's not hard." Etc etc.


I wasn't trying to suggest breaking out the sputtering chamber, but anyone with a decent background in physics and basic algebra could learn and build something similar.

My only argument for doing so is that everything is now extendable, whereas a "thingsquare" locks you into some particular engineer's design decisions that made sense then, but won't make sense for every use case now. Arduino sits right in the sweet spot of "easy to get off the ground" and "extendable enough to use for all kinds of things", and that's largely why it is successful as a platform. This product sits so far off into "easy to get off the ground" territory that it misses too many use cases.

A board like this could be spun up for $25, shipping, parts, software, all in. I would be amazed (and very interested) if you could get an aluminum smelter up and running for $20.


> just buy the components, make your own PCBs, do some surface mount soldering, then write all your own software

All of these except the last are surprisingly easy these days. Digikey and Mouser stock virtually everything, custom PCBs are easy to order, and surface mount soldering can be done with a cheap hot air gun or toaster oven.


You're exactly right. You can get a 10cm by 10cm 2-layer board with 6mil tolerances for $12 shipped, from China.

Arrow will ship parts overnight with a minimum order size of $20.

Aliexpress/eBay have made discrete semiconductors and passive parts nearly free.

Writing a bit of software is incredibly easy (versus what it used to be) with a good IDE and a solid toolchain.

It's truly a great age to be involved in hardware.


I agree... as long as radios aren't involved. Getting parts with the necessary tolerances in low quantities and dealing with the board layout is kind of expensive. The analyzer you need to use to figure out why the radio is acting funny is pretty expensive too. I don't know how I could have done my senior radio project without access to my uni's pimped out RF lab.

But yeah, I've made MCU boards in my toaster oven, it's fun times for little non-RF projects.

The several gigabytes of stuff TI makes you install just to do hello world on their SimpleLink products is kind of nuts though. There's nothing simple about it tbh.


> a good IDE and a solid toolchain

This area is much in need of improvement. Keil MDK-ARM, IAR, etc. aren't cheap.


But platform.io is good and cheap. You can do most things with the free version and the pro version with remote debugging is pretty reasonable. The plus is, even though it is based around Atom, it's not a 20G download, as the vendor IDEs are.


Just a note for anyone else who's interested. The site is platformio.org, not platform.io.


GCC for ARM is pretty great these days, although I can only vouch for it on Cortex-M. I think the code generation is on par with or better than IAR/Keil in most cases, and it's much nicer to use Intellij/VSCode/etc instead of mucking around with dated vendor IDEs.

https://developer.arm.com/open-source/gnu-toolchain/gnu-rm


If the sizes of the components aren’t too small, creating a custom PCB and doing surface mount soldering is not that hard.

With that said, of course the entire thing is still a lot of work and if the parts are super tiny... well...


Our OpenTRV hardware (AVR ATMega328p) can easily do a year on a pair of AAs, sending one secure stats frame every 4 minutes, eg our REV2 design:

https://github.com/opentrv/OpenTRV-Arduino-V0p2/tree/master/...

The processor is down to typically a couple of uA waking every couple of seconds to see what needs doing.

See the log at the end of this as I was originally driving the power down:

https://github.com/opentrv/OTRadioLink/blob/master/content/O...

Our target is in fact 2 years life, including driving the valve, off a pair of AA NiMH hybrids.

Slightly longer term our aim is to get off batteries entirely:

https://github.com/opentrv/OTWiki/wiki/Energy-Harvesting-Fea...

which is why I spent Tuesday at this meeting:

http://www.earth.org.uk/note-on-Energy-Harvesting-Disseminat...


I built a system once based on a Microchip PIC running with a 32.768 kHz crystal as the main clock. The CPU draws about 11uA when running in this mode. Imagine you can do something useful with only 32768 instructions/s vs the 1 billion you get on a modern CPU ;)


Our AVR based setup uses the 32768Hz clock to wake up every two seconds to then do as much as it needs to at 1MIPS - "race to idle". Almost exactly the performance (other than the better I/O and the multiply op) that I used to get out of a Z80A or a 6502 (eg BBC Micro).
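
For anyone curious, the bare-bones shape of that on an ATmega328p is roughly this (untested sketch; register names per the datasheet, with Timer2 clocked asynchronously from the watch crystal):

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>

    ISR(TIMER2_OVF_vect) { }               // nothing to do; the interrupt just wakes the CPU

    static void timer2_async_init(void) {
      ASSR  |= _BV(AS2);                   // clock Timer2 from the external 32768Hz crystal
      TCNT2  = 0;
      TCCR2A = 0;
      TCCR2B = _BV(CS22) | _BV(CS21);      // /256 prescaler: 128Hz, 8-bit overflow every 2s
      while (ASSR & (_BV(TCN2UB) | _BV(TCR2AUB) | _BV(TCR2BUB)))
        ;                                  // wait for the asynchronous registers to settle
      TIMSK2 = _BV(TOIE2);                 // overflow interrupt is the wake-up source
    }

    int main(void) {
      timer2_async_init();
      sei();
      set_sleep_mode(SLEEP_MODE_PWR_SAVE); // Timer2 keeps running in power-save
      for (;;) {
        sleep_mode();                      // a few uA until the ~2s overflow fires
        // race to idle: do whatever work is pending at 1 MIPS, then drop back to sleep
      }
    }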


> A good low quiescent current buck/boost regulator can squeeze every last bit of power for your chip even when the battery is basically at 0 capacity (2.7V for Li-ion, 0.8V for alkaline) and uses very little power on its own.

While that's fundamentally true, buck/boost converters also have a level of inefficiency. Old crappy ones are 80% efficient... good ones are 90% efficient, the best may be 95% or even 98% efficient.

"Squeezing out" the last few volts of a Lithium Ion or NiMH doesn't make much sense. There's probably less than 2% of power below 3V on a Lithium Ion, and less than 2% of power below 1.0V for a NiMH.

So the 98% efficient Buck Converter ends up lowering your life, because there's so little power between 0.7V and 1.0V or so.

If you can run a circuit without a converter, that would generally be superior. For example, an Arduino can run down to 1.8V. So a 2xAA battery or a 1x Lithium Ion can be run without a boost/buck converter, and really capture that last 2% or so of power (as opposed to losing 2% in the conversion process).


>If you can run a circuit without a converter, that would generally be superior.

No, this will waste a lot of power in the high-supply-voltage region of the battery's discharge curve. There are three regimes to consider, for (say) the case of running 3.3V nominal parts off a 1S 3.7V Li-Ion cell. Other nominal supply and battery voltages are very similar, they just shift the locations of the regimes around a bit or introduce boost converters.

Regime 1: Battery voltage above system voltage. Here you want a high-efficiency buck converter. This minimizes the current you draw from the battery: to supply (say) 10mA at 3.3V, you draw only 10mA × 3.3/3.7 / 0.98 ≈ 9.1mA from the 3.7V cell, whereas a direct connection or a linear regulator would pull the full 10mA. A buck converter is generally superior to a buck-boost as the peak efficiency is higher, and for this scenario boost functionality will never be necessary.
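
Spelled out, with the same assumed numbers (10mA load at 3.3V, 3.7V cell, 98% efficient buck):

    #include <cstdio>

    int main() {
      const double v_batt = 3.7, v_sys = 3.3, i_load = 0.010, eff = 0.98;
      const double p_out  = v_sys * i_load;        // 33 mW delivered to the 3.3V rail
      const double i_batt = p_out / eff / v_batt;  // ~9.1 mA pulled from the cell
      std::printf("battery current: %.2f mA\n", i_batt * 1e3);
    }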

Regime 2: Battery voltage slightly above system voltage. In this regime the appropriate solution is a low-quiescent-current low-dropout linear regulator (aka LDO). This regime exists because of the finite efficiency of a switching regulator. With an imaginary 100% efficient switcher, you never need an LDO. The lower the draw on the system rail, the wider this regime gets. For micropower circuits (average draw in the single-digit microamps) one might always be here. For higher power draws (tens of milliamps), this regime might be too small to justify the added complexity of including an LDO and you just skip right over it.

Regime 3: Battery voltage at or below nominal system rail voltage but above minimum. The best thing to do here is absolutely nothing: just run the system directly off the battery. This is possible with a regulator with a bypass FET. Once the regulator goes into dropout, it just turns on the bypass transistor and connects things directly (with a small R_ds(on) drop of course). A few regulators have these built in. Other regulators are efficient enough in dropout that they don't actually benefit much from the bypass transistor. This also assumes you're near the end of the battery discharge curve. If you aren't, then you're potentially heading into boost or buck-boost territory, which I'm not going to cover here.

Deciding which of these three regimes are important enough to warrant the design time for optimizing is a key part of low-power circuit design.


You make some solid points, it all depends on the battery I guess.

As you've noted, a 3.7V Lithium Ion battery (which is 4.2V when fully charged) would waste a lot of both voltage and current (parts typically draw more current at higher supply voltages) if it fed a 3.3V circuit directly.

But for a 3.3V part, you really want to be running a LiFePO4 Cell, with nominal voltage of 3.2V. The chemistry itself provides you the maximum efficiency, as opposed to building devices to convert a 3.7V part to 3.3V.

----------

Compensating for the flaws of a slightly mismatched cell or battery is one approach. But I have my bets that selecting the proper battery to begin with is the optimal approach.

In any case, I think I can agree with you that when using a standard 3.7V (4.2V when fully charged) battery on a 3.3V circuit, it probably would be most efficient to use a pure buck converter. Early in the discharge a direct or linear drop is effectively only 3.3/4.2 ≈ 78% efficient, so a 90%+ converter gains you energy; near the end of the discharge the ratio is 3.3/3.6 ≈ 92%, so only the best converters are still a net benefit.


Li-ion isn't even that great a choice, self discharge is pretty high. That will "use" more current than your microcontroller!


They have passive WiFi systems that have no need for a battery: https://www.youtube.com/watch?v=AZ-tISX-7Cw


Research prototypes exist, yes, but Jeeva Wireless has yet to ship any parts. Once they do, they're going to be very limited on range, power and # of devices on the same band.


May I hijack to ask if someone can hack together a cheap solution for this?

My beloved dog will be flying in the cargo hold of an A380. I will be seated above in the passenger section. 15 hour flight to Australia.

I have to drop him off in his kennel four hours before departure at the freight office at the airport. They will take him to the plane.

After the flight, he will be picked up and driven about 10 miles away to a quarantine facility.

I would like to build something to put inside the kennel which will allow me to track its location at the freight office and then confirm he's been brought on board. Then at landing, I want to track the kennel from plane to quarantine. Note this is in a different country than departure.

I'm thinking of getting a used Nexus phone and a Project Fi SIM (so I can have data overseas too). However this will be rather clunky and I'd rather not put a bunch of batteries in his kennel for a flight.

Any ideas??


Any airline experts here that can comment on if you're allowed to put DIY electronics with batteries in the cargo hold of a plane these days? Especially DIY stuff with radio transmitters (and probably lithium batteries)?


From what I gather, you can't put an external battery pack into the cargo hold unless it's NiMH or NiCd. A cell phone would be fine, but the issue is that the phone battery would most likely die during the 15hr flight.


>you can get something as available and low cost as an ESP8266 ($1.72 in singles) down to ~0.01mA to ~0.005mA

Source? I have never seen anyone get an ESP8266 down to 5 uA sleep current on its own. You'd need to get a really low-leakage external FET for power sequencing.


Here is a really old blog post about using an NRF24 for 18 months on a coin cell.

https://maniacbug.wordpress.com/2011/10/19/sensor-node/

I had an idea that I didn't follow through on, but it worked well (the code sucks, I know).

https://github.com/birkir/sensornodes


TI actually makes a specialized power regulator for their OMAP product line to do exactly that. It's this weird little part that includes the power regulating circuitry and some of the analog audio codec stuff for some reason.


I don't understand the obsession with coin cell batteries. Why not use an AAA or AA battery? There's clearly room for one. Coin cell batteries are a lot more toxic than ordinary AAA or AA batteries.

In this case, they had to jump through quite a few hoops to get 1-2 years of battery life. Is sampling every 30 minutes really practical in all applications where someone would want to use a TI sensortag? It's probably easier to just modify the sensortag to use a larger battery and get the sampling rate you need. Depending on the situation, figuring out how to use a C or D battery might be easier than all the compromises made.

My garage door openers have plenty of room for AAA batteries, or could use AA with minor modifications; but they use coin cells. I just don't get it.


Coin cells are common in marketing of microcontrollers and the like, and as such are somewhat of a reference point. There are also a good number of devices (eg BLE beacon trackers) where AA or AAA would be too large. But if you can, use larger batteries :)

Coin cells aren't the best for this kind of system. When the current is a few tens of mA (CPU + transceiver), even for a short while, the coin cell's internal resistance pulls down the voltage over the terminals. This means that as the battery gets older and its internal resistance goes up, the risk of a brown-out reset of the SoC increases, even if there is still capacity left to run a load with lower current requirements.

You could to some extent reduce this by using capacitors to reduce peak current over the battery, but caps have leakage current too, so the actual lifetime could be less still. Damned if you do...
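
Rough numbers for the brown-out point (all values assumed for illustration, not measured):

    #include <cstdio>

    int main() {
      const double v_open = 2.9;    // open-circuit voltage of a partly used CR2032
      const double r_int  = 50.0;   // aged-cell internal resistance in ohms (fresh is more like 10-20)
      const double i_peak = 0.025;  // 25 mA burst with the CPU and transceiver both on
      std::printf("terminal voltage under load: %.2f V\n", v_open - i_peak * r_int);  // ~1.65 V
    }

That lands below the ~1.8V minimum supply of many SoCs even though the cell still has plenty of capacity left for lighter loads.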


I don't think I've ever seen any design that didn't have decoupling caps across the power lines. I mean, the Murata parts I use have dozens of megaohms of resistance at DC, and the chip is off 90% of the time anyway, so there's no energy lost keeping the E-field charge in the cap when the SoC is only drawing whatever minuscule amount of power it needs while it's asleep.

Anyway, TI has stuff like this specifically designed for these applications, so it's kind of a nonissue these days:

http://www.ti.com/lit/ds/symlink/tps62730.pdf

I think Mouser had them for like $0.50 or something.


You can't weld regular batteries into a lot of applications, and a lot of IoT applications are mobile and vibration sensitive. You'd be surprised how just a little vibration can really mess up your power supply as the springs bounce around.

CR2032s have a much higher surface area on the contacts, which is required to dissipate the heat from the weld as well as have more surface contact with the springs if you use a battery socket instead.


>You can't weld regular batteries into a lot of applications

Not true. Take a look at an Amazon Dash teardown. Stock AA (or AAA? Can't remember which) welded into place.


Terminal voltage on a coin cell is 3V, so you have to use a pair of AAAs, not just one. Ends up requiring ten times the volume.

Shelf life on lithium batteries is longer, and they don't leak, so you can ship and store sensortags with the battery preinstalled.

My impression is that it's way more of a logistics thing than a customer-value thing.


I frequently choose sensors that use AAA batteries over coin cells. Give me a little bit more space usage for significantly better ease of use and reliability. The only things I really see using coin cells successfully are IR remotes.


Related is this project for running LEDs for a long time, just brightly enough to act as markers: https://github.com/tedyapo/tritiled


That's a super interesting concept. I'm also surprised by the complexity. Why does a low-power LED need a programmable microcontroller, multiple capacitors, a pair of mosfets and an inductor?


The inverter is doing all the work - the circuit is essentially a strange shaped buck controller. What it's doing is building up energy in the inductor and dumping it all at once into the LED.

The microcontroller is there as a programmable pulse-width generator. Yes, you can do this with a NE555, at the cost of much more power and more components, and they don't like low duty cycle and are less temperature-stable.

Having put the microcontroller on there, you must also put on its required decoupling capacitors. You might be able to remove them in a situation this simple with a bit more testing.

MOSFET Q1 is required because the peak current is quite high in the inverter, more than 20mA. It also helps to steer the inductor current into the LED rather than the protection diodes of the PIC. Note the LED is "upside down".

MOSFET Q2 and the resistor are a convenience to prevent the circuit being destroyed by inserting the battery the wrong way up.

More at https://hackaday.io/project/11864-tritiled/log/62875-v22-rel...
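
A loose illustration of the "microcontroller as programmable pulse generator" idea - not the actual TritiLED firmware, which runs on a PIC; this is an AVR-flavoured guess at the pattern:

    #define F_CPU 1000000UL
    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>
    #include <avr/wdt.h>
    #include <util/delay.h>

    ISR(WDT_vect) { WDTCSR |= _BV(WDIE); } // re-arm interrupt-only mode after each wake

    int main(void) {
      DDRB |= _BV(PB0);                    // gate-drive pin for the inductor MOSFET
      cli();
      wdt_reset();
      WDTCSR = _BV(WDCE) | _BV(WDE);       // timed sequence to change watchdog settings
      WDTCSR = _BV(WDIE);                  // interrupt-only mode, shortest period (~16 ms)
      sei();
      set_sleep_mode(SLEEP_MODE_PWR_DOWN);
      for (;;) {
        PORTB |= _BV(PB0);                 // build up current in the inductor
        _delay_us(4);                      // pulse width sets the peak current into the LED
        PORTB &= ~_BV(PB0);
        sleep_mode();                      // power-down until the next ~16 ms watchdog tick
      }
    }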


Check out the hackaday project page for more detail https://hackaday.io/project/11864-tritiled I should make clear this is not my project.


Sweet! I've been looking for something like this for a long time, thanks for the link!


Note that only the sensor runs for a year on a coin cell. The wireless mesh router nodes are not mentioned.

Somebody has to be listening when the coin-cell device wakes up. If the listening device is always on, that's not a problem, but that's not feasible for low-power nodes. If listeners are only powered up intermittently, getting someone to listen requires some kind of synchronization system.

I have a simple outdoor thermometer which contains a solution to this problem. The sending unit goes outside, for best results sheltered but not near a wall that leaks heat. The receiver goes inside and has an LCD display. Both sender and receiver are battery-powered, and get over a year on two AA batteries.

The sender wakes up every 30 seconds or so and sends blind. The receiver wakes up just before each expected transmission. When you replace the batteries, you have to replace them in both units, and when the receiver is first powered up, it stays powered for a while until it's heard the sender a few times and is in sync.


I raised this objection downthread: https://news.ycombinator.com/item?id=14848196

@adunk says mesh nodes synchronize clocks so they're all listening at the same time, but intermediate nodes end up drawing enough power they can't actually run on a coin cell.


You can buy Sensortags with sub-1GHz RF transmitters, which have a (theoretical) range of >500m, which should be enough for most applications to not require a mesh network at all. They also have accurate clocks (with a crystal) to do time-rendezvous networking as well.

I've some intended for home monitoring-- they have "10 sensors including support for light, digital microphone, magnetic sensor, humidity, pressure, accelerometer, gyroscope, magnetometer, object temperature, and ambient temperature" for $30.


The microphone doesn't work, TI never released packaged firmware for it. There's source code available that should work, but I haven't seen anyone on the forums who managed to get it integrated into an application, or even build it.


Although not CO2 or CO or smoke, all of which would be useful. For HVAC purposes, light, humidity, pressure, CO2, CO, and smoke are the useful ones. Leave out the microphone for privacy, but consider a passive IR motion sensor.


Jack Ganssle wrote a much more thorough article on the same topic here, if anyone is interested in the EE nitty gritty.

http://www.ganssle.com/reports/ultra-low-power-design.html


Measuring once every 30 minutes, and waking up once every 5 minutes isn't a lot. What surprises me most is that the sensor uses so much power.

My Casio CMD-40 smartwatch has a calculator and programmable TV remote. The CR2032 coin cell inside is enough for it to last about 2 years. The display is always-on, and I use the calculator often, although admittedly I rarely use the IR transmitter. So apparently sensors use a lot more power than an LCD and calculator.


It's the radio that is consuming most of the power.

Even using your IR transmitter to send a short burst every 5 minutes wouldn't be as much power as if you had an IR receiver that had to listen for incoming signals. But still, start using the IR transmitter every 5 minutes and you'll see a dent in battery life.


It's true, and radio technology has a lot of inefficiencies in the analog bits. Digital RF techniques help, but the Rx amplifier and downconverters, mixers, and stuff all lose a lot of energy in the process.

Also, the watch has a clock frequency measured in kilohertz, and the processor is probably some dumb as rocks 8051 variant or something with a tiny amount of static RAM. Big difference in power consumption compared to a 32-bit ARM core.

Cortex-M0s are pretty competitive in that power regime these days though, especially if your application (like the watch) will allow you to run the M0 at sub-1MHz speeds.


Waking up only every 5 minutes limits the usefulness of this, doesn't it? It's not exactly something that they can use to respond to random, arbitrary events. Though if it's used to sample something like, "how many people are currently waiting at this crosswalk", that's pretty good.


It is possible to react at any time because of an external hardware event, such as a button press or a hardware sensor that triggers. Depending on the sensor or external hardware, those events can be very cheap in terms of power consumption. For example, a button switch can be completely passive and consume power only when it triggers. Once it triggers, the microprocessor wakes up and can decide to handle the event, potentially by using the radio to send a message. Sending the message will consume much more power than just turning on the microprocessor, so depending on the application logic the microprocessor can simply choose to ignore the event and go back to sleep.

"Waking up" in the context of the article should be taken as "waking up an turning on the radio to check if anything happens that I should be aware of". This involves power-consuming activities, such as turning on and using the radio, and cannot be completely passive. That's why it pays off to not do this too often.


Sure, the local node can wake up at any time, but if it has to transmit over the mesh network, then it has to wait for the remote node to wake up.

How does the mesh network handle deep sleep, anyway? Do they synchronize clocks so they all wake up at the same time, or does a node just transmit continuously until someone responds?


The mesh nodes have to run either in always-on mode (which makes perfect sense if they are powered by, say, a wall socket, which they often are) or in a sampled listening mode. Sampled listening shaves some 98% off from the radio duty cycle compared to always-on mode, but still is not enough for the mesh nodes to run on coin cell batteries. Give them a larger pack of batteries and they'll run for months though.

To send data to a node in sampled listening mode, the sender sends a string of smaller wake-up packets that indicate when the sender intends to send the real data packet. When the receiver picks up the wake-up packet, it knows when it should wake up again to receive the data packet, so it can safely go back to sleep again for a while. And if the sender knows that the receiver is in always-on mode, because it is powered by a wall-socket, the sender can skip sending those wake-up packets, saving a bit of power.

This requires clocks to be synchronized, but only loosely - millisecond synchronization is enough. The trick is to strike the right balance between communication responsiveness and power consumption.
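
In pseudocode, the receiver side looks something like this (the radio API and the interval values here are made up; they just show the shape of the scheme):

    #include <cstdint>

    // Hypothetical radio driver, declared but not shown.
    struct WakeupFrame { uint32_t ms_until_data; };
    void radio_on();
    void radio_off();
    bool radio_sniff(WakeupFrame *f, uint32_t window_ms);   // true if a wake-up packet was heard
    bool radio_receive(uint8_t *buf, uint32_t window_ms);
    void sleep_ms(uint32_t ms);

    void sampled_listen_loop() {
      uint8_t buf[128];
      for (;;) {
        sleep_ms(500);                    // radio off the vast majority of the time
        radio_on();
        WakeupFrame f;
        if (radio_sniff(&f, 5)) {         // brief sniff for the sender's strobe of wake-up packets
          radio_off();
          sleep_ms(f.ms_until_data);      // the wake-up packet says when the real data packet comes
          radio_on();
          radio_receive(buf, 10);
          // ... hand buf to the network stack ...
        }
        radio_off();
      }
    }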


The BOM indicates that the clock oscillator is just a vanilla quartz crystal. http://www.ti.com/lit/df/swrr135b/swrr135b.pdf Does the firmware do any clever temperature compensation using the onboard thermometer, or are you just sending lots of clock skew correction packets as the mesh warms up and cools down diurnally?


For this level of time synchronization it is enough to send a few extra bytes with synchronization information in each packet. If you were down at microsecond-level synchronization, you would probably need to do temperature compensation - particularly in outdoor deployments where there can be a 40 C / 100 F difference between two nodes on a sunny winter day.


Yeah it's just a regular XO. There's some calibration data they stick in a special flash area at the factory that the radio firmware uses to handle temperature compensation for the chip's internal digital oscillator. I also believe you can run it in a XO-less mode with some caveats.


Depends on the application of course. A 5 minute sampling time for some applications (erosion and climate studies come to mind) may be overkill. And a system that needs to respond to random events would be designed differently, with external hardware interrupts as opposed to timed interval sleeps.


Not everything has to be reported right away. A temperature sensor could send its data every x minutes and still meet the user's requirements.


The article made it seem like these devices are meant to periodically wake up to listen for some sort of instructions being sent to them, but you're definitely right.


The trick is to get all your silicon dark, by having an external timer that reactivates the chip - or an MCU that supports such a low-power core mode.

Another trick is to not use strong transmitters, and instead build a phased array from a mesh net of sensors. https://en.wikipedia.org/wiki/Phased_array This needs incredibly tight sync timing, knowledge of the nodes' positions, and knowledge of the position of the receiver.


Sounds like the phased array approach would be expensive, complex, and incredibly hard to get working well in real deployments :)


Technically, WiFi beam-forming, which is part of the standard since 802.11n and is really taking off with high-end 802.11ac routers, is a phased array to all intents and purposes.

The future is here.


People forget that once upon a time, radios didn't need batteries at all.

Even up until the 70's, companies made speakerphone amplifiers that didn't use any electricity at all. I have one that I use with an iPod, and the sound fills two very large rooms.

Once "transistor" became a marketing term, every industrial design started with a battery, instead of finding creative ways to actually solve the problem first.

Man, I'm old.


Share links to some of these non-battery "electric" devices!


I had a professor who did his thesis on a 10 year coin cell powered wireless sensor, although it was a transmit-only protocol. Pretty well understood area - send less data, put the system to sleep for as long as feasible.

http://www.firner.com/research.html


Transmit-only mechanisms are definitely the best way to go to reach the extreme minimum in low power levels. The problem with transmit-only systems is that they are difficult to use in most real-world situations. Once you have deployed a transmit-only sensor, you are stuck with the configuration that was hard-coded into it. There is no way to change how often you want the sensor to report, how and when it should trigger, and any other parameters and requirements that may change as the project develops - encryption keys being one example.


The EnOcean protocol is a good example of this. Their sensors don't use batteries at all, instead opting to charge a capacitor from small PV panels or peltier generators.

https://www.enocean.com/en/technology/energy-harvesting-wire...

https://www.enocean.com/en/technology/energy-harvesting/

This is well suited to stuff like occupancy sensors in buildings. Time resolution isn't as small as an always-powered system could be, but if your use case for the data is "Turn off the lights if we haven't seen anyone move for 30 minutes" then it doesn't much matter if you transmit every 5 minutes instead of every 5 seconds.


Transmit only for the high power active radio is often a great way to save power, so long as you have an alternative data path to send information to the device to change settings or do firmware updates. For example, see the Omni-ID Power 415 active RFID tag which transmits up to 400 m distance with its active radio every few seconds for 5+ years of battery life but also has a passive 900 MHz RFID interface that's bidirectional for changing settings or doing firmware updates.

http://www.rfidjournal.com/articles/view?11873


Definitely, although the inability to receive can be considered a security feature for some applications. Wireless hackers have to go through a receiver :)


You don't have to sacrifice configurability to get the power benefits. Send the data transmit-only, but every few days have a receive window.
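
Sketched out (hypothetical helpers, arbitrary numbers):

    #include <cstdint>

    void send_measurement();               // cheap: transmitter on for a few ms
    void radio_listen_ms(uint32_t ms);     // expensive, so done rarely

    const uint32_t REPORTS_PER_DAY = 288;                   // one report every 5 minutes
    const uint32_t RX_EVERY_N      = 3 * REPORTS_PER_DAY;   // receive window roughly every 3 days

    void report_cycle() {
      static uint32_t report_count = 0;
      send_measurement();
      if (++report_count % RX_EVERY_N == 0)
        radio_listen_ms(500);              // bounded window to accept new settings or keys
    }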


On the subject, worth reading Charlie Stross' blog post, 'How low (power) can you go?' [1] and its potential impact on urban architecture and environments.

[1] http://www.antipope.org/charlie/blog-static/2012/08/how-low-...


He puts out some interesting ideas, but he forgot that genome sequencing requires some occasionally toxic reagents that are incompatible with bolting DNA sensors all across town.

He's spot on about the data firehose problem, though. IoT data usage patterns break all kinds of assumptions that network and storage engineers have about how computers do stuff. The packet sizes are really small, so routers have to work way harder, and all those tiny I/Os blow through IOPS way more than a typical web app does.

And then there's the data formatting issues, like how searching this pile of stuff makes you want to put it in columnar formats but time-series type displays want row oriented formats. Oh, and the indexers. Ugh. A lot of good indexing code has trouble keeping their internal trees balanced with all the I/O going on at the same time the big data reporting stuff is going on a spelunking expedition through your storage. There's probably some group at Amazon going insane trying to come up with solutions to keep up with even the limited IoT deployments out there already. I wouldn't be surprised if their storage solutions are operating in a constant state of imminent capacity breakdown of some kind.


There is a Dutch startup that uses wireless signals to extend battery life. Much easier. They are rolling it out for Schiphol Airport if I recall correctly. Schiphol had a full-time employee just for changing batteries all over.


Samsara [1] does 3+ years on a CR2032

https://www.samsara.com/products/models/im31


I would have expected a low power TI solution to use an MSP430 instead of the ARM. It would be interesting to see the tradeoffs for a comparable MSP430 solution.


Some of the radio handling bits on MSP430 would be more complex due to the lack of 32-bit support. There's also some third party firmware for e.g. the 6LoWPAN stuff that's only available as Cortex binaries.

TI keeps trying to push their MSP432 Cortex-M4 series on me as a replacement for MSP430. I guess they don't want customers to go with a different vendor just because TI doesn't have an ARM solution that supports whatever code module or toolchain the customer wants to use.


Thanks, I made a prototype a couple years back where the MSP430 was sufficient, and I wanted to move it to LLVM to solve my only toolchain annoyance. But I have trouble seeing the point of investing the time as ARM edges closer on the battery advantage.


I'm actually building a product that incorporates a CC1350 in it, which is the variant that has both a 2.4 GHz and a proprietary sub-1GHz radio.

These chips are totally amazing. TI used every trick in the book to help the designers save power. There's even a specialized coprocessor that uses special low leakage silicon to handle some simple 16 bit I/O processing so you don't have to wake up the main processor, and the main Cortex-M3 is pretty darn low power already.

I am kind of annoyed that you have to call into their ROM routines to use the radio, though. The datasheet is a snarky little tease that says ha ha, the radio registers are somewhere around these addresses but we aren't going to tell you what they do! Shhhh it's a secret! This is particularly annoying because I could make my life a lot simpler if I could interface with the Nordic shockburst protocols, and TI's firmware only knows how to speak TI simplelink packet formats.

The thing that TI isn't talking about is that they did some stuff around the voltage regulation that looks like they were planning on pushing some kind of energy harvesting companion silicon. It's just a guess, but they might announce something in the future along those lines. Whoever ends up first to market with a productized energy harvesting IoT chip is going to print bucketloads of money.

There's also a lot of stuff going on in the low power LTE space right now. Altair semi announced the 1250 chip, which is -- get this -- power competitive with the TI chip for low frequency telemetry applications. AT&T and Verizon have already launched their cat-M1 networks and cat-NB1 is going to be out by the beginning of next year. Those protocols use oversampling and a bunch of other mad scientist RF hacks to get even more range than the LTE radio in your phone. The bandwidth is really low, like modem or ISDN low, but there's a ton of applications that can be implemented with literally one bit per day worth of data. The chips are getting pumped out in volume now, and the cost regime is actually causing revenue jitters at the carriers. They want to squeeze some extra revenue out of LTE IoT space since it's basically free for them to implement since the LTE IoT designers figured out how to reuse the otherwise useless guard bands surrounding the real full-fat LTE signals your phone uses.

However, it's kind of a tough problem to figure out a cost model where customers are paying something like $10 a year for maybe hundreds of devices that are so cheap that the most expensive item on the board is the SIM card. And even that's going away in the 2nd generation IoT chips now that someone figured out how to get around the UICC requirement for provisioning by doing something funny with a protected element in the main LTE radio.

Also check out my profile. It's not a joke, someone made a homebrew monitoring solution out of someone else's IoT chip to keep track of cow farts because they wanted to adjust the feed based on farts per minute or something, I didn't really understand it.


We do GSM-based monitoring ( http://outpostcentral.com / http://www.mywildeye.com/wildeye/ ). We had mesh tech at one stage, but pretty much stick to GSM these days. LTE IoT bands are starting to be viable as they get rolled out. Our battery tech can get quite a number of years. SIM costs are tiny these days (for us at least).


Devices don't need actual SIMs these days (e.g. my Samsung watch doesn't have one). Do they still need the chip that's on the SIM? (hence no major cost savings)?


They never needed it from a technical perspective, it's there to make provisioning easier. Because the carriers were pretty decoupled from the baseband ecosystem, there wasn't a really good way to get the subscriber keys into the radio hardware unless the carrier stuck their nose in the supply chain somehow. Because baseband vendors all hate each other, there wasn't a lot of interest in cooperating to create a standard to do something like that. Plus, Gemalto kept trying to throw monkey wrenches into the committees by doing some quite frankly pretty messed up things.

Regardless, profit is important and someone had to get squeezed out of the BoM of these things being manufactured in the billions. It finally happened because of a combination of improvements to system-on-chip security, CMs cracking down on security, and the carriers hauling their EDI based provisioning systems into the 21st century. As a result, carriers can now securely provision devices after manufacturing. AT&T is kind of a jerk about it though, they do some stuff to the SIM when you onboard an unlocked device sometimes to tie it to their network.

I'm still not entirely clear how the keys get distributed in a SIMless world though. The process I use to onboard stuff into Verizon's IoT cloud involves me uploading a CSV file to a server somewhere and making some REST requests, but it's just IMSIs. The virtual UICC in the products like your watch works pretty much the same way, according to my chip vendor. But they have a multicarrier solution where the virtual UICC already knows about the major networks, so maybe they're exchanging keys as part of the OTA activation flow and securing it with a hardware key they give to the carriers in a HSM or something. Or maybe the manufacturers are getting HSMs at the factory and doing it right in the manufacturing process. I tried to wrap my head around the 3GPP documents on the subject and I just got more and more confused.

There's definitely a vendor proprietary aspect to what's going on though, because I can see the OTA provisioning packets in QXDM and it says it doesn't know how to decode them.


Cool! Maybe I'll see you at the next convention we end up at.



