If You're Programming a Cell Phone Like a Server, You're Doing it Wrong (highscalability.com)
178 points by consciousness 959 days ago | past | web | 75 comments

> Every decision you make should be based on minimizing the number of times the radio powers up.

This is lunacy. Ok, lunacy is a bit strong. But I disagree with this and am throwing a "premature optimization" flag.

Modern phone batteries last plenty long, and the radio being on is nothing compared to the big bright screen.

/edit Given the opportunity to choose, I know I would gladly sacrifice a few minutes per charge of battery life for a better user experience, especially since my phone never gets below 20%.

Your users will be better served by you fixing bugs or adding features. Really, unless you're a huge team with a huge budget, there's other stuff to worry about in your app experience before "maximizing battery life" should be a responsibility you want to help the OS/device maker with. (If you're one of the lucky ones who has the time and money to do both, by all means, go nuts.)

Being conservative with resource usage is sound advice. Making battery usage the prime concern for most apps is overkill.

Games and other applications that you know users will have open for extended periods of time, and/or that are already eating up battery, should give this issue some thought. Everyone else, really, don't worry about it.


The difference between DCH (the full-on high power state) and PCH (the lowest power, waiting-for-paging state) is about two orders of magnitude. Last time I measured, it was about ~100mA vs ~1mA (at ~3.7V) used by the radio. So it's not a couple of minutes of battery life, but rather _many_ hours of standby life. People usually aren't very happy when a fully charged phone dies overnight.
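The back-of-the-envelope impact of those two radio states is easy to check. A minimal sketch, assuming a ~1500 mAh battery (a typical figure for phones of that era, not stated in the comment) and ignoring every component other than the radio:

```python
def standby_hours(battery_mah, radio_ma):
    """Hours of standby given battery capacity and average radio draw."""
    return battery_mah / radio_ma

# Radio pinned in DCH (~100 mA) vs idling in PCH (~1 mA):
dch_hours = standby_hours(1500, 100)  # 15.0 hours: dead overnight
pch_hours = standby_hours(1500, 1)    # 1500.0 hours: other parts dominate first
```

The exact numbers are illustrative, but the two-orders-of-magnitude gap is why a chatty app turns days of standby into hours.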

I've seen apps do some crazy things, and it really has a significant effect on overall battery life. A popular Android weather clock widget woke the phone up every minute to update the minute number on the graphics and updated the weather information every ~15 minutes (gps + radio!), which single-handedly crushed the standby battery life from multiple days to less than 8 hours.

Yes, I don't think _every_ decision should be based on minimizing the wake ups... but on the other hand, all developers should at least try to have as much understanding of the platforms that they're working on so that they know what trade-offs they're making with each feature they're adding.

I'm glad that Google has these videos available and that they're being picked up in places like HN.


Anecdata: I turn my phone's wifi off when not required because it has a significant effect on the battery life - particularly when I'm not in range of a wifi signal. I've had a full charge die on an 8-hour country drive because wifi was on. On the return trip, wifi was turned off and it behaved as expected.

Go for a long walk in the park? All that time your phone's wifi is straining to find a wifi signal. It's not as big a power sap as the screen, however it's constantly, silently in use and the screen isn't.


Wifi scanning is also pretty terrible for power consumption as you found out.

If you are on Android, my guess is that your phone was set to be connected to Wifi during sleep. You can control it by going to Wifi networks section -> menu -> Advanced -> Keep Wi-Fi on during sleep, then setting it to "Only when plugged in" or "Never".

If you're driving around with the phone set to leave the Wifi on during sleep, it'll be constantly going in and out of range of various Wifi connections. If the Wifi part is set for passive scan, the processor needs to be kept up as the Wifi chip collects the BSSIDs of all the networks it sees and the processor tries to figure out whether each network has been seen before. If it's set for active scan, the Wifi needs to power up the radios to transmit probe requests. Pretty bad news in either case, unfortunately.


I personally set up Llama to turn the wifi off when I'm not connected to a tower that I previously tagged as "wifi available". It's nifty because I never have to explicitly turn wifi off and back on during my travels. To add a new wifi zone, I simply add the cell tower to the list and let the app do the rest.

Llama by itself uses less battery than leaving the wifi chip on, even in passive scanning mode, since the phone is always tracking cell towers anyway.


That's a clever idea! Couldn't Android track the "WiFi available" cells itself when you connect to a WiFi network?


That kind of frequent-on behavior is why the iPhone 5S has the new M7 chip.


What the hell? The M7 is a marketing label for some COTS silicon they licensed from some IP company somewhere (probably ARM/Broadcom or one of their associates).

It doesn't make unicorns shit rainbows or defy the laws of physics.


Your comment isn't only disrespectful, but also misses the point. It doesn't matter if the M7 is a custom design or bought off the shelf. As long as it needs less power than the application processor to sample the sensors at some frequency, it fulfills its purpose.


Respecting known limitations and working within best practices is absolutely not premature optimization. If you know that using the mobile radio in a certain way is a source of excessive battery usage, it's silly to just disregard this information. It's not premature optimization if you know where the performance issues are from the outset, and these performance issues are so incredibly common that the Android team put out videos about them.

The mobile radio will eat your battery very quickly, and probably chews through as much power as your screen. If you want to see how expensive it actually is, disable fast dormancy†. A slightly less brutal demonstration can be had by opening an OpenVPN connection. The keep-alive packets will keep your mobile radio in a higher power state, pretty much in the same way some disrespectful apps do and you'll (quite unsurprisingly) see your battery drain faster.

A badly written app can drain hours of battery charge, not minutes.

> Your users will be better served by you fixing bugs or adding features.

Excessive power drain is undoubtedly a bug. I absolutely love apps that respect the fact I want to go as long as possible without charging my phone. I might often be in a position where I can't charge my phone too. (Carrying a second or third battery is useful, if you've got a phone that has a replaceable battery.)

> Really, unless you're a huge team with a huge budget...

It's quite trivial to think about performance and battery life. This is something every mobile developer should be thinking about from the get-go anyway, and if you can't afford to write an app that performs well, you can't afford to write an app.

> Making battery usage the prime concern for most apps is overkill.

I'll uninstall apps that I identify as too power hungry without a second thought, and I'll write negative reviews too. I'm pretty sure that when put in this light, power consumption suddenly becomes important.


† Don't do this. Some phone networks are misconfigured and may fail to ever recognize that the phone is FD-capable again.


I just want to add in from a network carrier perspective.

I work for a mobile operator, and have had to work on several network issues related to poorly designed apps. I agree with kintamanimatt on this: it's not premature optimization, you need to know the cost of what you're doing. There is an incredible resource cost to all the naive developers making apps these days that treat the wireless interface as an always-on connection.

1. Server polling every minute

We actually sent one customer an invoice for over $100,000 because they had a particularly bad application that polled a server every minute. This was only an internal app used by a few hundred people, but they used it at their office, and it caused constant blocking on the cell site covering their office. This actually broke cellular coverage near their office and caused issues for all the other customers in the same area. Ultimately, we said: we can help you fix your app, you can pay the invoice to upgrade cell coverage to your office, or you can get off our network. In this instance, we helped them fix their app so it only interacted with the server when something needed to be changed (the server pushed changes), which was actually quite easy to do. This simply isn't the same thing as writing an inner loop in assembly to shave off a few microseconds of runtime, which would be premature optimization; it's making robust software that works well wherever it is used.

2. Synchronized network access

This one has been the bane of us on a few occasions. If you write a network access like a cron entry that runs not only every 15 minutes, but does so at exactly the same time across all devices, we get to hunt you down and scream as our wireless network crumbles. When every device wakes up and asks for a higher power state at exactly the same time, the network will only process so many. The funny thing is, you'll test this yourself and not see anything, but then when you release your app, you might find 30% of those checks are failing for some reason and struggle to figure out why. For anyone who is thinking "well, the operator should add more capacity", the answer is we do, but ultimately the customer pays for the network build.
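The standard fix for this herd behavior is to randomize the polling interval so devices drift apart instead of waking in lockstep. A minimal sketch (the 15-minute base matches the example above; the 20% jitter fraction is an arbitrary illustrative choice):

```python
import random

def next_poll_delay(base_seconds=15 * 60, jitter_fraction=0.2):
    """Delay until the next poll, randomized per device so a fleet of
    phones doesn't request a high-power radio state at the same instant."""
    jitter = base_seconds * jitter_fraction
    return base_seconds + random.uniform(-jitter, jitter)
```

Each device now polls somewhere in a 720 to 1080 second window rather than exactly on the quarter-hour, spreading the signaling load across the cell.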

3. High usage apps

Your device vendor is in the interest of protecting its customers as well. We once had an app on our network that would treat all reject codes as a cause to retry. Every time it was run, it would begin uploading data over and over again, leading to thousands of dollars in bills to the customers for excessive data usage. The device vendor pulled the app from their stores.

Now, how do you feel about having to plead the case for why you should be allowed to continue to sell your product, not only to the device vendor but to the carrier who found the problem; prove that you fixed it; have our bureaucracy test that you actually fixed it; and maybe, if we feel like it, be allowed to sell again? It's not as simple as fixing the bugs; you might actually lose your ability to make money off your software for months while this gets sorted out.

What do you think your store rating will be when all your customers get sent a $5000 bill for using your app?

Now the question is, should there be a better way? I'd like to see more from mobile vendors in making it stupid easy to be smart about this. In the phone APIs, if I need to do a background update, I should be able to register with the OS and say: I have an update to do within the next 15 minutes, let me know when you have an active radio connection. Your phone goes into a high-powered radio connection, and bam, all your background updates go out together. Need to send notifications? Here is our push API; only interact with the mobile when an update needs to be done. If the APIs you're using are designed for mobile and take a lot of this network stuff that you shouldn't need to know into account, it's a lot easier for naive developers to do better by blind luck.
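The "register and wait for an active radio" pattern described above can be sketched in a few lines. This is a toy model, not a real platform API: `on_radio_active` stands in for the hypothetical OS callback, and real platforms expose similar behavior through scheduler-style APIs (e.g. Android's SyncAdapter, mentioned later in the thread):

```python
class DeferredUploader:
    """Queue background updates and flush them all in one burst when the
    OS reports the radio is already in a high-power state."""

    def __init__(self):
        self.pending = []

    def schedule(self, update):
        # Don't power the radio up now; just remember the work.
        self.pending.append(update)

    def on_radio_active(self, send):
        # Radio is already up for some other reason: piggyback everything
        # on this one power-up, then let the radio drop back to idle.
        for update in self.pending:
            send(update)
        self.pending.clear()
```

One radio power-up per batch of updates, instead of one per update, is exactly the trade the parent comment is asking app developers to make.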

-- * my views in this post are my own, and do not reflect those of my employer.


Thank you for this insider's insight. It's not something you come across every day!

Aside from these things, is there anything else as a mobile developer that I can do to make your lives happier? I'm not guilty of any of those things you've mentioned and try to write well-behaving apps, but I'm always on the look-out to find out about stuff I don't even know I don't know!

Also, perhaps you could lobby your higher-ups to put together some kind of best practice guide from a network operator's perspective. I know the Android team does touch on this in their guides and in the Google I/O conference talks, but I'm guessing there's additional information that would be beneficial.


kintamanimatt, there are definitely a few other things to look out for. One thing to be careful of is that my perspective is based mostly on an escalation standpoint, so I don't necessarily get a complete view of the ecosystem. Also, without knowing what sort of apps you develop, it's hard to be too specific, since some items are more relevant to voip than they are to web, etc.

1. Power States

On UMTS, if the radio is "idle", it won't lose packets, but a small amount of data isn't enough to trigger a transition to a higher power state. These packets generally won't be lost, but can take 2 seconds or more to transit the RF network. If a transition from the lowest power state is required, that alone can take 500-800ms. The important lesson from this is that when just starting out a TCP/IP connection, it could be very slow for the first little bit, until the phone/network realize there is more data passing.

On LTE, this is less of an issue, as there are only two power states: idle and active. As such, you always have to transition to an active power state to do any data transfer, and the transitions have been significantly optimized. However, the transition can still take a few hundred ms.

This mostly becomes an issue when you want to do something in real time. We see lots of reports and have many investigations on things like call setup time on voip connections, etc. From an app perspective, this can be pretty hard to account for, but with the move towards LTE, this will steadily get better for you, since it's both faster and simpler.

2. Packet Loss

This point is hard to get across internally sometimes, but packet loss will occur. It's a radio network; there are interference sources and situations where we can't deliver every packet within a guaranteed time. You'll also get temporary losses of connectivity (user goes into a tunnel). It can get even weirder: we see cases where the uplink is working but the downlink has very high loss.

We have seen many DPI, IDS and firewall vendors have problems with this packet loss; it may cause inspection to fail or the connection to get delayed. One thing that often happens is these vendors don't use a large enough reassembly buffer for a mobile network. With the higher latencies we get on wireless networks, there tends to be more in-flight data at any one time. On CDMA, when a retransmission was required, more than 100 packets of in-flight data could pass before the ack/sack indicated the packet was to be retransmitted.

3. Initial Window

On the server side, it should be fine to set the initial window to 10 packets as per the IETF draft: http://tools.ietf.org/html/draft-ietf-tcpm-initcwnd-00 I haven't had an opportunity to test this out, but my gut feeling is this should actually help, as the extra pending traffic will help indicate what's happening from a power state standpoint. There are guides for many different OSes about how to change this setting, as it won't be applied by default.

4. Exponential Backoff

This probably isn't so important at your level, but we've had issues at a device stack level: if some piece of the network is lost, we get two problems. Fixing the piece of the network, and fixing the "mass calling event", which is all the devices trying at the same time to reconnect to that resource. This is mostly out of your control, but perhaps keep this in mind for your own server's benefit. If your server goes down, don't have all your clients go nuts trying to re-establish connections as fast as they possibly can.
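A common client-side shape for this is exponential backoff with jitter: double the wait after each failed attempt, cap it, and randomize it so recovering clients don't all reconnect in the same instant. A minimal sketch (the 1-second base and 5-minute cap are illustrative choices, not values from the comment):

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0):
    """Seconds to wait before retry number `attempt` (0-based).

    The window doubles each attempt (1s, 2s, 4s, ...) up to `cap`, and the
    actual delay is drawn uniformly from it ("full jitter"), which spreads
    a mass reconnection event out over time instead of hammering the
    recovering resource in synchronized waves."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

A client would call this in its retry loop, sleeping for `backoff_delay(n)` after the nth failure.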

5. SMS

Just in case you use SMS in any way, perhaps as part of push: one property of SMS is that if it's not successfully delivered on the first try, when it gets queued for redelivery, it could be delivered several hours later. Also, at least on our network with the newest technology, we've had some bugs we've had to look into with the same SMS being delivered multiple times. One property of SMS you can take advantage of is that if the device is out of coverage (powered off, tunnel, etc.), the pending messages should be delivered when it comes back into coverage.

6. Reject Codes

HTTP and many other services usually have the concept of transient failures and permanent failures. Make sure not to retry on permanent failures. I got at this in my previous topic, but we've seen more than once where something like a large email gets stuck in an outbox because the mail relay has a maximum size limit on it. However, the rejection occurs after the upload, so the device sits there constantly uploading the same email over and over again. I'd even be careful with transient failures: if a transient failure occurs for more than 3 or 4 tries, or for an hour, give up.
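In HTTP terms, that policy boils down to a small predicate: retry server-side (5xx) failures a bounded number of times, and never retry client-side (4xx) rejections, which will fail identically on every attempt. A minimal sketch with an illustrative 4-try limit:

```python
def should_retry(status_code, attempt, max_attempts=4):
    """Decide whether a failed request is worth retrying.

    4xx codes are permanent rejections (e.g. 413 for an email over the
    relay's size limit): retrying just re-uploads the same doomed payload.
    5xx codes are treated as transient, but only up to `max_attempts`."""
    if attempt >= max_attempts:
        return False
    return 500 <= status_code < 600
```

Real services can be sloppier than this (some return transient conditions as 4xx), so in practice you'd special-case codes like 429, but the retry/no-retry split is the important part.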

7. Compression

If you can, you may as well use compression to the best of your ability to keep resources smaller and faster. Even though the network is faster, it'll still deliver a smaller payload faster than a bigger payload.
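The win is especially large for the repetitive JSON most apps sync. A quick illustration with Python's standard gzip module (the payload shape is invented for the example):

```python
import gzip
import json

# A typical sync payload: many records with identical structure.
payload = json.dumps(
    [{"id": i, "status": "ok"} for i in range(200)]
).encode("utf-8")

compressed = gzip.compress(payload)
# Repetitive JSON compresses dramatically; fewer bytes on air means less
# time in the high-power radio state as well as less data billed.
```

Over HTTP this usually costs nothing to adopt: send `Accept-Encoding: gzip` and let the server and client libraries handle it.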

8. Packet Size

This one is often missed, but can be hugely important. What happens in a mobile network is that you have IP traffic destined for the mobile. However, the mobile's position and pathway aren't fixed, so our network has to track the user and update the path. The way this is done is that network equipment will encapsulate the IP traffic with additional IP headers for delivery internally within our network. However, our network has the same limitations on maximum packet size. So when we do our encapsulation, we have to chop the packet into pieces and then put it back together on the air interface. For a while, we also used to just IP-fragment your packet into smaller pieces, but unfortunately this is also problematic, as devices don't necessarily do the best job joining the fragments either, especially if they get delivered out of order.

So on my network, we use something called MSS clamping to limit the maximum size of TCP payload data in a single frame. We do this so that when we encapsulate the packet, it will fit into one packet with our headers. However, MSS is a negotiation that only happens on TCP, so it cannot happen on UDP traffic. I also know for a fact that not all carriers will do this, and I have talked to one or two developers about why their app works on our network and not on others. This is something you can adjust server-side to be consistent across carriers. As such, on the public interface of the server, I would recommend something like an MSS of 1350 bytes, which leaves enough room for our internal headers and IPSEC, but doesn't require the packet to be chopped and then reassembled by the network.

As for lobbying the higher-ups, I'll see what I can do. I think it's a great idea, and carriers worldwide aren't doing enough in this space. However, this is a really tough one, since as a large company, communications and branding are greatly metered, and it doesn't help to do all this work and have no one read it anyway, because app stores today seem to be about volume, not quality. Really, what I'm seeing on my side of the network (the Packet Core) is that a lot of the technology development is about allowing the network to be more flexible and robust towards the way it is used, rather than attempting to control the ecosystem. Ultimately, what happens more and more is we get just straight PCs connecting to our network, and we want to offer a superior experience in this space. Also, the most efficient devices are losing: the order-of-magnitude efficiency advantage BlackBerry used to have over everybody else has largely been eroded by a faster network, and competing devices, although less efficient, deliver a superior experience.

However, a little bit of searching did turn up some initiatives. Not so much best practices, but in Canada the major carriers put together a consistent API access for location, SMS, and billing. http://canada.oneapi.gsmworld.com/

Also, it looks like AT&T has some information, but it seems more geared at enterprise, and most of the content is locked: http://developer.att.com/developer/forward.jsp?passedItemId=...

Hopefully we will continue to see more partnerships among carriers, more standardization from the 3GPP and GSMA, and better APIs to get the ecosystem more mature; it'll be better for everyone. I'm also hoping to see more work on SCTP or multipath TCP, so we can start to see connection-level handoffs between different access technologies, i.e. wifi offload when you're at home. There are some technologies for call continuity from your wifi today, but they're amazingly complicated and only work in very specific scenarios.

-- * my views in this post are my own, and do not reflect those of my employer.


Thank you! Seriously, this was very insightful and will be helpful! I hope more app developers see this comment too.


The thing is that recognizing something as premature optimization isn't the same as disregarding it. Developers that spend all of their resources optimizing for minimal battery life impact or other minutia are just as bad as developers who refuse to publish their SaaS app until they're confident that the infrastructure will support one million concurrent users.

Release early, release often. If users love your app after release, focus on fixing the littler things like battery usage. If users hate it, try to fix the things that make them hate it, and then focus on the battery life.

Caveat: As in all optimization discussions, there are reasonable minimum thresholds for acceptability even in the "unoptimized" state. Desktop software can't take thirty minutes to boot no matter what stage of life it's in, and mobile software can't kill your battery in an hour or crash the nearby cellular network at any point either.

Don't make your software egregiously bad; just make it good enough.


This is what I was hoping to get at, thank you.

"reasonable minimum thresholds for acceptability" is a great phrase that I will certainly be stealing.


>Respecting known limitations and working within best practices is absolutely not premature optimization.


I'm not saying throw caution to the wind and poll a remote host constantly and indefinitely; as I said originally, I think it's right to be concerned about resource usage. This is just a baseline for decent software. Be a "good citizen" and all that. (In most cases) if you can make one query for 100 items rather than 100 queries, or 10 queries, do that. But that's just common sense.
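That "one query for 100 items" habit is easy to bake into a client. A minimal sketch, where `fetch_many` stands in for a hypothetical API endpoint that accepts a list of ids (the batch size of 100 mirrors the example above):

```python
def fetch_items_batched(ids, fetch_many, batch_size=100):
    """Fetch many items with one request per `batch_size` ids, instead of
    one request (and potentially one radio power-up) per id."""
    results = []
    for i in range(0, len(ids), batch_size):
        results.extend(fetch_many(ids[i:i + batch_size]))
    return results
```

For 250 ids this issues three requests instead of 250, which is the difference between one brief radio burst and keeping the radio in its high-power state for the whole sequence.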

>Excessive power drain is undoubtedly a bug. I absolutely love apps that respect the fact I want to go as long as possible without charging my phone.

Again, I think we are in agreement. Excessive drain is a bug, but my experience is that if you're already following best practices and not doing anything wild, you won't be a battery hog.

The article makes a HUGE list of do's and don'ts

I think it can be a lot shorter:

* just because you can, doesn't mean you should

* learn and follow best practices

* be mindful and conservative in your usage of all resources on the system

> I might often be in a position where I can't charge my phone too. (Carrying a second or third battery is useful, if you've got a phone that has a replaceable battery.)

(gadget aside) Even if your phone doesn't have a replaceable battery, there are a lot of slim, rechargeable battery packs you can buy for around $20, many with short little USB cords that fold up into them, and others where you can use your own cord as well. The nice part is it will still be usable even if you change phones, and can be shared with other devices and people if needed. Some of them will even let you charge them inline with the phone so you're only using one USB port.

What are you doing on your phone/what phone do you have that you need 3 batteries?


It's premature optimization if you're caring about this in your prototype. It's not at all premature optimization to look at battery life before you ship.


There's no such thing as a prototype.

The design decisions you make early on will stay with you. How you treat "occasionally connected" users is likely to have some architectural impact. Keep their requirements in mind as you sketch out your application.


I can understand what dllthomas is talking about. I'd never write a full-blown prototype, but whenever I'm writing a non-trivial app for myself it can be quite productive to write 10% of the app, throw that away, and start again. This strategy always gives me a better understanding of the problem, the app's architecture and so on, as well as a ton of new ideas.


Right, that's pretty much what I was talking about. Not so much something "officially blessed as a prototype" but "does this make sense at all?" experimentation shy of an MVP. I've heard "write it, throw it away, rewrite it" attributed to Knuth, and while that's overstating it a little you definitely learn things in your first attempt.


Fred Brooks, maybe?

> The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. […] Hence plan to throw one away; you will, anyhow.

But as someone on Ward's Wiki said http://c2.com/cgi/wiki?PlanToThrowOneAway ,

> The corollary to this is that if you plan to throw one away, you will end up throwing away two.


Yeah, I've seen that one too. Could very well be memetically related (through Knuth or not).


Lunacy is too strong!

I think the average mobile app developer has no business thinking about radio optimization.

They should, however, be spending lots of time thinking about caching and minimizing interaction with servers. Developers in general have gotten too comfortable with unnecessary traffic on desktop machines. Too many devs have been focused on speed as the limiting factor for mobile apps, when in fact it should be cost. The end user is paying not only to acquire your app, but also to use the mobile provider's transport.

We should take every opportunity to reinforce the importance of minimizing mobile traffic to only what is necessary. Just because you can use a 4G connection to move a ton of data doesn't mean you should.


It seems like lunacy to me to use a smartphone on a connection where you're charged using usage-based billing. My provider throttles me when I'm over my "cap"...


I disagree because there are fundamentals, highlighted by the author, that a lot of experienced mobile developers know already through trial and error that should be acknowledged up front.

For instance, doing single calls to APIs for multiple pieces of information instead of a bunch of calls. If you design for these up front, you can head the problem off before it even becomes a problem. It's not premature because users expect it. They do notice network lag, they do notice battery usage, and they will give you one star on the App Store or Play Store. I've literally seen this play out in real life. It's no different from making sure your website loads quickly, IMHO. You still have to do all those other things you listed, but this isn't to be glossed over and only addressed when you realize people are noticing.


Indeed. This isn't about premature optimization, but about preventing the equivalent of what I've seen happen when you give desktop developers GWT and get a public facing website with a 3 MB JavaScript app that polls a hard to generate 5k document every second just to check if a posted job has completed or not.


Prefetching is often an element of a pleasant user experience. It also can bring difficult UI problems with it (when is information too stale to display, etc), but that just means it's not easy.

It feels like so many apps are designed by people who never have experienced a bad connection, and just assume that their app can download everything it needs quickly as soon as it starts up. What you end up with is bad experiences when your connection is slow or has high latency.


Modern phone batteries last a fraction of the time they used to - you could get phones with 2 weeks life in the days before smartphones. We've got used to short lives but devs should aim to keep power demands down.

Radio power consumption is very significant. It could well be more than the screen takes - it's not just the RF amplification, but also the computation required to generate the modulated RF in the first place.


Not only does prefetching a lot of data help the battery, it's also a good way to structure your app. It leads to a better user experience if implemented correctly. If you rely less on on-demand connections in your app, you can separate UI code and network code much more easily, which speeds up the UI and enables/improves offline functionality. Doing it in an IntentService on Android also allows for implementing very specific scheduling strategies.


Maybe a good mobile app can determine its optimization/ux strategy depending on what level the battery is currently at. I am not sure whether apps can query for battery levels though.


They can. A number of well-built apps pause background data sync if battery is too low.

However, changing behaviour based on battery level is trickier than it seems.

* The point of reducing energy usage is to leave the user more juice for the actions that are most important to them. It could be that your app is the important one.

* Stopping expensive operations at 20% or 30% battery level means that 70-80% was already spent. If there are good ways of optimising energy use, they should be applied at all times, prolonging the time until you reach 20%.

* The appropriate threshold depends on what the user imagines doing for the rest of the day. If I'm travelling to another city, I disable background data in the morning to make sure I still have battery left for phone calls in the evening after heavy use of maps during the day. If I'm travelling to the office, it doesn't matter, since a charger is nearby. Predicting "time to next charging" and "use of other apps before charging" automatically for context-aware app behaviour is non-trivial.


At least on iOS you can indeed. It's available from UIDevice.

And if you're doing something that sucks up battery, you should know about it and act accordingly.

If you're not doing heavy lifting, it's just not something you should have to think about as an app developer; there are so many other areas to focus on to keep users happy.

Apple has a few WWDC sessions that talk about it I think.

Here's what they have to say in their "performance tuning" section:



The dev bytes cited in the article are specifically focused on Android development. Android provides some interesting framework classes that help with making requests efficiently to the network, both from a power and speed standpoint. A lot of these APIs are really underused (e.g. SyncAdapter) in apps right now, so Google clearly wants more devs to use it since it requires less thinking.


i would like to place a bet that these measurements are made in an environment where the only app running is your own.

that said, this is hardly new knowledge. or is it? i thought there had been articles about it a couple of years back, and even though i would consider this advanced common sense there is a high chance that i would take this optimization into account right away.


The last thing I'd want is for my apps to each be downloading several MB of data they think I might need in the next few minutes. I can easily control how much battery life is remaining on my phone by plugging it in. I can't control how much of my data plan an app is using, except by uninstalling the app.

Besides, in my experience the biggest drains on battery life aren't data transfers, they're (a) having the screen on, especially when bright, and (b) being slightly out of range of a cell tower, and constantly dropping and reacquiring a 3G connection. The latter turns my phone into a hand-warmer and chews up my power.


> The last thing I'd want is for my apps to each be downloading several MB of data they think I might need in the next few minutes. I can easily control how much battery life is remaining on my phone by plugging it in. I can't control how much of my data plan an app is using, except by uninstalling the app.

Prefetching isn't applicable to every situation, but in some circumstances it makes apps more enjoyable to use. The trick is to analyze and understand the high-volume usage patterns for your app and to target those for improvement. If you see that the vast majority of your users will take a next step to view content that must be loaded from the web, it may make sense to prefetch while the wi-fi or cell radio is already on, so you can piggyback on the high-power state and avoid waking the hardware.
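A minimal sketch of that piggybacking heuristic, in Python for brevity (the 10-second tail window and the whole class are illustrative assumptions, not a real platform API):

```python
import time

class PrefetchPolicy:
    """Sketch: on cellular, prefetch only while the radio is probably still
    in its high-power state, so the fetch piggybacks on an existing wake-up
    instead of causing a new one. The tail length is an assumption; real
    values vary by carrier and network type."""

    RADIO_TAIL_S = 10.0  # assumed time the radio stays hot after a transfer

    def __init__(self):
        self.last_transfer = None

    def record_transfer(self):
        # Call whenever a user-initiated request actually hits the network.
        self.last_transfer = time.monotonic()

    def radio_probably_hot(self):
        if self.last_transfer is None:
            return False
        return time.monotonic() - self.last_transfer < self.RADIO_TAIL_S

    def should_prefetch(self, on_wifi):
        # Wi-fi is cheap; on cellular, only ride an already-hot radio.
        return on_wifi or self.radio_probably_hot()

policy = PrefetchPolicy()
policy.record_transfer()                      # user just loaded a story list
print(policy.should_prefetch(on_wifi=False))  # True: radio is still hot
```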

> Besides, in my experience the biggest drains on battery life aren't data transfers, they're (a) having the screen on, especially when bright, and (b) being slightly out of range of a cell tower, and constantly dropping and reacquiring a 3G connection.

As someone who has encountered apps with bugs that constantly make or maintain network connections, these kinds of bugs are a much more severe battery drain than the screen. We're talking about bugs that keep your wifi or cell radio powered 90% of the time. This will kill your battery in no time, screen on or off.


Foreground prefetching aside, there are controls that let users disable background data prefetching/syncing. In fact, it's even one of the buttons on the standard "power control" widget.

While the screen is definitely a huge power drain, it's easily controlled by the user. Poorly behaved apps will drain battery regardless of what the user does, and definitely _can_ be a huge drain. These used to be a lot more common when Android first came out -- many of these apps got better when users got better visibility of battery usage and started complaining to app developers.

And in general, we're talking kilobytes of data here. Not megabytes. Prefetching is good for metadata and text content; image content and other large assets should be an opt-in feature. (Android's "News and Weather" app was a good example of how to do this right.)


That control, like the permissions approval dialog, is a blunt tool. Sure, I can turn background data completely off, or I can let my apps run wild with it. There's no middle ground and no per-app control (unless the app developer provides one.)

KB of data could be reasonable, depending on how often it's being fetched. But the original article specifically mentioned retrieving several MB of data for use "during the next few minutes." If that's an app that's doing some kind of news feed for an always-running widget, and it's grabbing images and stories in case I might want to read them, it can really add up.


It's funny -- I came here to mention how the handshaking from constant connection drops and cell location changes should be accounted for. I've had to consolidate multiple calls into single calls in order to increase the network performance of mobile applications. It wasn't until I actually sniffed what was happening that I noticed the handshake issue as I walked around, went down into the subway, hit alleys, etc.

As for prefetching, I think that should be a user setting. I've had to pre-fetch and cache data because the content was enormous and the backend was out of my control. But I do agree that a user should be able to decide themselves how the device uses resources and I try to push this into every app. Give the user a choice if they feel inclined.


> I can easily control how much battery life is remaining on my phone by plugging it in.

Except when you can't. Are you really always near a power outlet? There have been plenty of cases where I just haven't been able to charge my phone and have had to turn it off rather than use it for what I want.

> I can't control how much of my data plan an app is using, except by uninstalling the app.

Depending on where you are, get an unlimited data plan. I know not every country has these, but most do.


I'm in the US. My data plan is capped when I'm on my carrier's network, and unlimited (and much more expensive) when I'm roaming. Typical US carrier.

This highlights a good point: it's not just the number of bytes of data, it's the cost per byte. Many data plans have an explicit cost per byte, but even ones that don't will have an opportunity cost: if your app is sucking up a portion of my bandwidth, you're preventing my other apps from retrieving data. Even if my data was free, my bandwidth is limited.


> (b) being slightly out of range of a cell tower, and constantly dropping and reacquiring a 3G connection. The latter turns my phone into a hand-warmer and chews up my power.

For this reason I'm pretty happy that my Android has an option to force 2G network for battery savings. Gives me 50% better battery life with no noticeable speed difference.


Yeah, most of the time my Android is on a WiFi network (office or home) and the cell data connection is turned off. My battery can last a pretty long time if I'm not away from my WiFis much and I don't have the screen on full brightness.


Actual title "If You're Programming A Cell Phone Like A Server You're Doing It Wrong" is a much more accurate statement, and a different topic.

On topic, this is why I prefer native (or at least non-webview) apps. The control over the connection and app lifecycle provides a better experience and better analytics.


Great article, made me explicitly aware of something I had at the back of mind as a web developer.

Another point that struck me is that when I briefly used Android more than a year ago, there was a nifty tool that showed which app drained what percentage of the battery. If I am having trouble with my battery life on Android, I am likely to uninstall (or not use) an app that consumes more than its fair share of the battery.

I haven't seen an equivalent statistics pane in iOS, I wonder whether Apple will introduce it at some point to encourage app developers to author more efficient apps.


The problem with the battery statistics is that even when an app is causing battery drain, the drain might not be attributed to that app. Imagine an app that sends a 1 KB probe message every 15 seconds in the background, thus keeping the radio alive the entire time (i.e. never allowing it to idle). I believe the battery drain caused by the radio usage would count towards "Android System" and not the app, when in reality the app would be shortening your phone's battery life significantly.

Assigning battery usage to apps is a hard problem, as there are so many grey areas. OTOH, you will notice apps that are stuck in CPU-expensive loops that never terminate; however, this is not that common in my experience.

The Android battery statistics are nice, but they usually don't provide me with a useful answer to the question: which app is really draining my battery?


Since Android is an open system, would it be possible to write an app that runs on a phone and forces applications to batch transfers?

I'm thinking of a "limiter" app that runs locally on the phone and disables TCP/IP transmissions for 18 minutes out of every 20. That way you can still get things like email relatively quickly, but the radio's only on 10% of the time.

Developers and users both being aware of the problem and trying to fix it might mean their solutions collide. E.g. a badly behaved app that tries to access the network once every 30 seconds will always get through during the 2-minute window, but a well-behaved app that only hits the internet every 10 minutes might miss the window entirely.

So maybe the limiter should intercept outgoing TCP/IP connections, lie to the initiating app and tell it the connection happened but the remote end isn't sending any data, then connect the outgoing app to the real remote end when the radio switches on. Of course if there's an application-controlled timeout interval on the connection, it'll trigger and the 10-minute-interval app would miss the window.

You might get around that problem by using whatever UNIX signal Ctrl+Z generates (SIGTSTP) to lock up an app entirely when it tries to open an outgoing connection, then resume it when the radio's on. Of course, this'll probably lock the app's UI too, so it'll become totally unusable. Although if the limiter acts like a screensaver and goes away whenever the user's recently given any touch or keyboard input, that doesn't really matter, does it?

Or maybe Google should make an API where an app can say, "I want to make a connection, and I'm willing to wait if turning the radio on right now wouldn't be good for the user."
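That hypothetical API could be as simple as a coalescing queue: apps declare how long they're willing to wait, and everything pending gets flushed the next time the radio is powered up for any other reason. A sketch (this is a made-up design, not an actual Android interface):

```python
import heapq

class DeferredNetworkQueue:
    """Sketch of the proposed "I'm willing to wait" API (a hypothetical
    design, not a real Android interface). Apps enqueue work with a tolerable
    delay; everything pending is flushed together the next time the radio
    powers up for any reason, or when the earliest deadline expires."""

    def __init__(self):
        self._pending = []  # heap of (deadline, seq, callback)
        self._seq = 0

    def submit(self, callback, max_delay_s, now):
        heapq.heappush(self._pending, (now + max_delay_s, self._seq, callback))
        self._seq += 1  # tie-breaker so callbacks are never compared

    def on_radio_powered(self):
        # The radio is up anyway (e.g. a foreground request): flush everything.
        flushed = [cb for _, _, cb in self._pending]
        self._pending.clear()
        return flushed

    def next_deadline(self):
        # When the OS must power the radio even if nothing else needs it.
        return self._pending[0][0] if self._pending else None

q = DeferredNetworkQueue()
q.submit(lambda: "sync mail", max_delay_s=600, now=0.0)
q.submit(lambda: "refresh feed", max_delay_s=300, now=0.0)
print(q.next_deadline())           # 300.0: the earliest deadline wins
print(len(q.on_radio_powered()))   # 2: both transfers ride one wake-up
```

The OS would only need to power the radio on its own when the earliest deadline expires; everything else rides along with foreground traffic.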


You can do something similar to what you want right now: schedule your event through AlarmManager with the ELAPSED_REALTIME flag (rather than ELAPSED_REALTIME_WAKEUP), and it won't be delivered until the device wakes up for some other reason.


My phone (Android 2.3) shipped with something similar. I don't know how to find it on more modern phones, but it's probably there.


Shameless plug for Couchbase Lite, our embedded database for iOS and Android, with built-in sync. Let us optimize the network traffic so you can write features.

Dev info: http://mobile.couchbase.com


The author talks about Google Cloud Messaging to avoid polling. I've been researching the topic and gave it a try with websockets without much success. TCP connections tend to hang on cell tower switching and I suspect TCP keepalives keep turning the radio on.

I really wonder how to implement push to client on the html5 platform. I even wonder if this is possible at all for now.


Cell tower switching is transparent to TCP except for delayed packets. TCP keepalives are off by default, and the default interval is 2 hours.
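The keepalive default is easy to check with a plain socket: keepalive is off per socket until the app opts in, and even then the kernel's probe timer (net.ipv4.tcp_keepalive_time on Linux) defaults to 7200 seconds:

```python
import socket

# A freshly created TCP socket has SO_KEEPALIVE disabled; keepalives are
# strictly opt-in per socket. Even once enabled, the kernel's default probe
# timer (net.ipv4.tcp_keepalive_time on Linux) is 7200 seconds -- two hours.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # 0: off by default

# An app that wants faster dead-peer detection must turn it on explicitly:
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero: enabled
s.close()
```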


> Every decision you make should be based on minimizing the number of times the radio powers up.

It doesn't have to be "every decision" if you architect the app to not do CRUD-over-mobile-data and do sync instead.

This not only improves battery life, it has a strong positive effect on perceived performance and interactivity.

Fortunately, there will soon be a book all about that (ISBN 1118183495).


The article is specific for native apps. Is there something I can do to bundle requests together on browser-based sites? Does it matter there? I asked stackoverflow but nothing came of it yet: http://stackoverflow.com/q/18909185/1253312


Given the "preload any data the user might need in the next 2-5 mins" statement, how do people suggest we handle live streaming of updates (eg commentary on a sports event)? Presumably websockets are ok as it's more of a "push" model for the network than a "poll" model?


GCM is the recommended solution for anything that's trying to "push" data to Android: http://developer.android.com/google/gcm/index.html

We (Android) usually don't recommend developers try to implement push themselves. Using GCM allows the Android servers to schedule and collate data transmissions, so push messages from different apps get sent as part of the same transmission. Anything you roll yourself won't be able to do this.


What about for a webapp, even an offline one? Is there a hook into GCM from Javascript or something like that?


I suppose you could use the Java-to-JS bridge, if you're using WebView inside a native app.

However, there's not really any "mobile-friendly" push solution available as part of HTML5 right now. I wish there was. There's "GCM for Chrome" (http://developer.chrome.com/apps/cloudMessaging.html), but that's both Chrome-only and only available on the desktop.


> Presumably websockets are ok as it's more of a "push" model for the network than a "poll" model?

This seems incorrect -- as I understand it, a websocket holds open a TCP connection indefinitely, which would require the radio to stay in its active state as long as the websocket remained open.

See also: http://stackoverflow.com/questions/4456407/iphone-keep-webso...


I think the radios operate down at the IP level and have no concept of a TCP connection, so an open but idle connection won't keep the radio from shutting off.

For example, Apple recommends that VoIP apps on iPhones keep a TCP socket permanently open and use it to receive notifications of new calls and similar:



This is outside my area of expertise, but I've heard that there's a keepalive timeout for open sockets set by the cell carriers. (Again, I don't know the details.) So there is a small cost to keeping a TCP connection open, in order to avoid this timeout.

Also, as I mentioned in my previous comment: Even if keeping the socket open is free, every time you send a packet there's a cost -- both for the actual data transmission, as well as a medium-power state the radio enters for a bit before really going to sleep. And given the number of apps that use background data, background traffic on a phone can get quite chatty. This is why it's important to synchronize background activity across apps. If you implement your own push channel, there's no way for the OS to do any sort of synchronization.


I can't speak for websockets, but holding a TCP connection open indefinitely does not require that the radio stay in its active state. On iOS, setting the kCFStreamNetworkServiceTypeVoIP property on the connection allows the radio to return to the idle state and wake again when data arrives on the socket. Android may have similar APIs.


Wouldn't the radio have to wake on a timer to scan? How else will you notice there is data waiting on the other side of the connection?


The radio is in a low power idle state, the same state used to wait for incoming phone calls and text messages.


That's true, although I guess it's mostly listening as opposed to sending, which is possibly better? Although I realise there's probably some kind of "are you still there?" message from the cell tower.


How does the power usage of the wifi radio compare to the cell radio? Does it use more or less power?


Vastly less. I can save a few percent of battery life an hour just by using wifi instead of the mobile radio for data.


WiFi is supposed to be around the power of 2G, while 3G consumes a lot more power than either.


So what happens in a situation like long polling?

Suppose no data is being transferred for 60 seconds ... is the radio kept on at maximum power? Or does it power down in between? Or does it turn off and break the poll (abort the TCP connection)?


Oh, shit.

    apt-get remove apache2 php5 ...
And here I thought running Debian (for ARM cpus) on your phone was cool. Apparently I'm Doing it Wrong.

Puns aside, good article. If everyone did this (for example caching for 2-5 minutes ahead) things would run a lot better! Next time I'm going to code an app, I'll read through everything on this page first.


Just another bugle call for developers and users to forfeit more of their rights to our mighty big data and cloud land owners. Is your app ready for the "Internet of Things"?


A few months ago I'd have been happily hitting the downvote button along with everyone else, mumbling to myself about how crazy you sound. Today, however, I wish you weren't being downvoted, because you're halfway to being right.

I strongly doubt the intent of things like Google Cloud Messaging is to intercept all your data and siphon it off to the NSA, GCHQ, and others. GCM is an incredibly useful service that has been designed to make it easier to write apps that use less power by eliminating polling. When you don't have to poll, you can keep the mobile radio in a low power state. But the trouble is, you are indeed passing data (possibly even sensitive data!) through Google data centers when you're using GCM and its equivalents.

The tech industry seems to trust Google less today than it did this time last year in light of the Snowden leaks, and quite rightly so.


I'm not even considering that perspective. I'm just wary of the neofeudalist movement of technology in the last ten years or so, fueled by the mobile platforms being subject to regulatory capture.

The fact that data is already being captured by intelligence communities shows you who owns this "land" and considers its "natural resources" theirs for the pillaging; this is a great example of the kind of neofeudalism this proposed platform is building. We just rent the internet; the data it generates is for the kings and gods.

The fact that these so-called mobile platforms leave so much complexity to be managed by the developer just goes to show that they are simply land grabs that don't actually offer anything to the developer except his software locked down to a single platform. They don't need to offer any service, just a rent-collecting system for their captured land.


Of course, if you program for anything like you do for "real man hardware" -- by which I mean targeting hardware and not a software platform, e.g. a games console or a proper native app, rather than part of the "web stack" etc. -- you never have a problem like this. :P

Eventually you realise that C isn't fast enough and you can make it faster, but nobody else cares, because they spunk clock cycles and bytes everywhere like they have billions of them (which of course they do... :D)

...and suddenly every language and platform is solving problems you never had, because you just weren't a bad programmer to start with; instead of solving your problems, they are just tying your hands to prevent other people from shooting themselves in the feet.

