It's not just the CPU... it's the screen, and the antenna. A screen needs to produce a certain number of lumens to be readable; that sets a theoretical floor. An antenna needs to transmit a clean signal over a certain range, and that takes power too. And the size of the device limits the size of the battery: how many joules of energy can we store in that physical space?
We may be hitting limits for mobile devices much harder and sooner than we think. IoT devices can of course do better by dropping the screen, but communication is still a problem.
Computation, on the other hand, is still nowhere near its theoretical limit. We literally spend more than 10,000,000x the theoretical minimum energy to erase a bit of data.
Edit: I am aware that claim sounds hilarious when expressed in this simplified way. Don't fret, it's hard science.
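To put a number on that claim: the Landauer limit gives the minimum energy to erase one bit as E = k_B * T * ln 2. A quick sketch below; note the "modern logic" per-bit figure of 1e-14 J is an illustrative assumption for comparison, not a measured value:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_kelvin=300.0):
    """Minimum energy (joules) to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

e_min = landauer_limit()  # ~2.87e-21 J at room temperature
e_actual = 1e-14          # assumed per-bit energy of modern logic (rough, illustrative)
ratio = e_actual / e_min  # millions of times the theoretical minimum
```

Under that assumed figure the ratio comes out in the low millions, which is the ballpark the comment is gesturing at.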
Yes, energy/cycle is the ultimate limit, and it imposes bounds on what we can do with computers. There are other factors too, including dissipation of waste heat. Both concepts are joined in Gene Amdahl's observations. He noted that with parallelisation (though you could extend this to any efficiency improvement), your efficiency bound is set by the unaddressable component of the task. For parallelisation that's the nonparallel component; for energy efficiency, it's the non-CPU power draws. He also noted that everything ultimately comes down to a matter of plumbing (including heat dissipation).
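Amdahl's observation reduces to a one-line formula: if only a fraction p of the work benefits from an n-way improvement, the overall speedup is 1 / ((1 - p) + p/n). A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work gets an n-way
    improvement; the remaining (1 - p) is untouched and bounds the result."""
    return 1.0 / ((1.0 - p) + p / n)

# A 5% serial component caps speedup at 1/0.05 = 20x, no matter how large n gets.
capped = amdahl_speedup(0.95, 10**9)
```

Substitute "fraction of total power drawn by the CPU" for p and the same bound applies to energy savings: improving only the CPU can never save more than the CPU's share.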
Screen tech has benefited greatly from the transitions from paper => CRT => LCD (fluorescent) => LCD (LED), and increasingly e-ink devices. Applications and devices which can utilise largely-static screens, where only state transitions require energising (as with e-ink and textual interfaces), have far less power draw than any emissive screen tech.
Other peripherals -- input/output, including keyboards, touch devices, antennas, ports, etc. -- also play a role. Physical keyboards have advantages over screens, though audio input/output might address size constraints (and impose numerous others...). The possibility of passive antennas (see the Soviet "Great Seal" passive audio bug found in the US Embassy in Moscow in 1952) could offer other options, assuming that a fixed station would have a power budget mobile devices wouldn't.
There's been a lot made of small and adhesive solar PV innovations. I see the real use there as letting very small computational devices power and/or recharge themselves opportunistically, when and as ambient light is available. Incorporating both battery storage and PV into designs as layers of PV, battery, compute, and antenna -- say, an adhesive data-tracking and reporting label attached to shipping packages -- would allow very cheap monitoring of delivery conditions (location, shock, temperature, humidity) at a cost of pennies, and without extensive engineering considerations.
(The hackability of such devices might ... pose interesting considerations.)
We're approaching a state where, theoretically, any device of more than a few dollars' value could have some integrated computational capacity, whether you like it or not.
The only thing to revolutionize mobile battery life will be a better battery.
No, not all of today's features were available, but you're moving the goalposts. The basic concepts existed. I used them. They worked. Often better than their equivalents today, though not always.
You can run a notebook on much less power today. And I don't know how much energy it takes to host one Google Sheets session, but it seems a search doesn't consume that much (http://techland.time.com/2011/09/09/6-things-youd-never-gues...). Even the typical user consumes just a little every month (http://inhabitat.com/infographic-how-much-energy-does-google...)
If network syncing is interfering with user text input, something is wrong with that design.
Edit: The historical analogy would be like if you were entering data (offline) into your Lotus cells, but your computer would keep freezing up so the modem could reconnect. Somehow they kept those decoupled back then!
It's also a huge computational inefficiency. Suddenly, to enter text, we involve network calls (hence the network lag), drawing in the context of a browser, the DOM, a JS engine, and other crap.
I can generally avoid this by dropping to a dedicated editor app (by preference, now that I've got Termux, Termux-API, and clipboard access: vim from a bash shell).
But one thing I can assure you this isn't is network lag.
Jevons says that as the cost of a thing falls, induced demand rises. Computationally, this means both that we see computers in devices that didn't previously have them (washing machines, dishwashers, toasters, light bulbs), and that the devices which already run software -- most particularly personal computers (take your pick: Windows, Mac, Linux, Android, whatever) -- stuff in additional features "because they're cool" (Gresham's Law), consuming the additional cycles.
Given various pathological performance failures (swapping, iowait, network lag, etc.), the performance issues can actually get markedly worse.
Using old software on new hardware is often a dream. That's among the reasons I'm so fond of the WindowMaker desktop -- 1989 software (at heart) on 2017 hardware.
And then there is the fact that life encodes and persists information. There was a recent HN thread that got me interested:
I did read Gleick's "The Information" but was disappointed it didn't dig very deep into the concept. I got further following links on Wikipedia.
That said, there are a lot of subtle assumptions wrapped up in the Landauer limit (related to the 2nd law of thermodynamics and the notion of 'erasure'), and my personal opinion is that the limit is easily overstated until those subtleties are appreciated.
It seems that at some point the unit for powering computers would then have to be matter itself, since there's such an incredible amount of energy stored per kilogram.
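For scale, the mass-energy relation E = mc^2 puts that figure at roughly 9e16 joules per kilogram of matter:

```python
C = 299_792_458.0  # speed of light, m/s

def mass_energy(mass_kg):
    """Rest-mass energy in joules: E = m * c^2."""
    return mass_kg * C ** 2

e_per_kg = mass_energy(1.0)  # ~8.99e16 J per kilogram
```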
I'm not sure what you're trying to say but that sentence doesn't make sense.
What would a reversible computer look like? Could I run Firefox on it?
The Feynman lectures on computation have some good sections on reversible computation that I would recommend.
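On what a reversible computer would look like: its gates have as many outputs as inputs and lose no information, so every step can in principle be run backwards. A toy sketch of the Toffoli (CCNOT) gate, which is reversible and universal for classical logic (with the third input fixed at 1, it computes NAND):

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c when both controls a and b are 1.
    Three bits in, three bits out -- no information is destroyed."""
    return a, b, c ^ (a & b)

# Applying the gate twice restores the inputs: it is its own inverse.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)
```

Since NAND is universal, any boolean circuit -- a browser included, in principle -- could be compiled into such gates; the catch is that you must keep (or later uncompute) all the intermediate bits to avoid paying the erasure cost.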