Clearly, "thin client" was hyped and fizzled out, because it isn't quite what people want. They don't want a "thin client" to the cloud. A pure thin client is pretty dead when there's no network or the server's down. A "Fat Cache" would retain all of its functionality and the user's most used data. It would also have most of the advantages of thin client, namely invisible centralized administration. It would also ameliorate all of the disadvantages.
iCloud, Chromebook, and Kindle Fire/Silk Browser are all moves in this direction.
The focus of a computer user should be their data. This includes apps. The apps should also live in the cloud and just appear on all the user's devices. In fact, the user's application "session" should be in the cloud too. I should be able to start a session on a recipe app on my smartphone in the living room, then walk to the kitchen and just seamlessly continue my session on my tablet. No reopening the tablet version of the app, no repeating a search for the same recipe. At most, just one user operation to "open in tablet."
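A minimal sketch of how that handoff might work, assuming each device reads and writes the user's current session state from a shared cloud store. All names here (`CloudSessionStore`, the session fields) are hypothetical, not any real product's API:

```python
class CloudSessionStore:
    """In-memory stand-in for a cloud-hosted session service."""

    def __init__(self):
        self._sessions = {}  # user_id -> session state dict

    def save(self, user_id, state):
        # The device of the moment pushes its current session state up.
        self._sessions[user_id] = dict(state)

    def resume(self, user_id):
        # A different device picks up exactly where the last one left off.
        return dict(self._sessions.get(user_id, {}))


store = CloudSessionStore()

# Phone in the living room: user searches for a recipe.
store.save("alice", {"app": "recipes", "query": "pad thai", "scroll": 12})

# Tablet in the kitchen: one "open in tablet" action restores the session.
session = store.resume("alice")
assert session["query"] == "pad thai"  # no repeated search on the tablet
```

The point of the sketch is that the session is a piece of cloud data like any other; the devices are interchangeable views onto it.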
All these form factors should just be seen as interfaces to our data, just instances of a "fat cache."
The idea of migrating an app session between devices is going to be pretty difficult in most cases because people use different apps on different devices to access the same data; Twitter is an extreme example but there are plenty of others.
They do right now because they have to. Different apps/methods are best on different devices, and there's not enough thought to moving interfaces between devices. But if users didn't have to, and the mechanism worked well, they'd love it.
What do you do when the storm takes down your cable provider?
Or when you are beyond your cell phone carrier's signal?
And you have a project to get out the door.
Frankly, I've only had about 10 minutes of total, unexpected and unavoidable network downtime in the last few years of my life. With a home internet connection and a smartphone in my pocket, if my power goes out I can work away online for hours and hours. Power outages and network outages happen so infrequently that if we have reasonably good offline apps and Fat Caches, we're good to go. And not only that, but we're approaching a place where we can continue working almost uninterrupted even if both the main power and the network go down.
If my session and my app are in the cloud and the cloud is unavailable, my session and app are not available.
And what happens to my apps if I stop subscribing to the cloud for a few months? I still run Quickbooks 2004, ADT 2004, and VW 2008 for my business - will there be that kind of application persistence? [PS: I also use Sketchup 6 because of .dwg support and the ability to import into VW 2008]
In other words, my business processes work, and moving crap to the cloud is an unnecessary expense. I don't need some jackass MBA deciding how I should work based on a survey of iPad users.
Actually, that transparency is exactly what will make or break this scheme. If you know you can't make it transparent, then you shouldn't try. But the companies that succeed (probably Apple and Google) would make billions.
At some point, old data is going to have to be pushed out of that cache, and if you don't make it clear to users they're never going to know what they can and can't run.
That's why it's a >fat< cache, as opposed to a thin client. If you have a big enough cache, then it won't happen too often. It doesn't work absolutely all the time, but if it works almost all of the time, customers will be happy.
And if it doesn't work, and your company also makes the hardware, then when the customer comes to the store, you tell them they need to buy a machine with more memory. Cha-ching.
Bullshit. When stuff doesn't work, people are often unhappy as hell. And a computer that just deletes old stuff will really piss them off.
You've just demonstrated you don't understand what's being discussed here.
I don't think so. You seem pretty attached to the idea of the behavior of CPU caches, while steadfastly refusing to extend your own metaphor to persistent storage. Odd.
And if one has a Terabyte of data, how is putting it in the cloud and accessing it at web speeds better than a hard disk at bus speeds?
With a processor cache, populating the cache predictively is far easier due to the limited domain of alternatives, the logical structure of instructions, and the trillions of cycles per core per hour available for testing alternative predictive schemes.
On the other hand, a user may ask for a rarely requested file once every ten to thirty fortnights, or never. And predicting that request would require parsing a joke told on WJMZ's morning show fourteen minutes ago.
The point is that a cache, even if it is a fat cache, will remove files eventually. Files the user may be expecting to find.
Of course, if someone offered a service where the ultimate storage backup lost data, it would be idiotic. However, that was certainly not my suggestion. Rather, you selectively applied the functionality of hardware caches to the "Fat Cache" idea, while fabricating ridiculous semantics that go against your own hardware analogy. (Where the cloud would be the hard drive.)
This, as the comment to which I responded proposes, creates an issue because the state of the device is unknown to the user and any particular piece of data or particular program may or may not be available at any time.
Perhaps I am just dense, but I do not see how this is advantageous compared to local storage. In other words, just because you feed it a case of Krispy Kremes, a thin client is still a thin client.
For a given amount of storage, a Fat Cache may beat a thin client. But it's turtles all the way down if the amount of digital information people store keeps increasing; e.g., my first box of ten 720k floppies lasted me the better part of a year.
Reorient your thinking. The central thing is no longer the "device"; it's the user's account and sessions in the cloud. The user's data never goes away; it's just that access to it is sometimes good and sometimes not so good. Mostly, it's good.
Nowadays, if a user's mobile broadband is broken, they just think of going somewhere with better reception. In a few years, when a seldom used piece of data has fallen out of the cache, they'll just think, oh well, I'll get back to it in a bit. (Or, if it's really important, they can select "Keep Available" so it'll be there next time.)
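The "Keep Available" behavior described above amounts to an LRU cache with user-pinned entries: pinned data never falls out, and everything else is evicted least-recently-used first. A minimal sketch, assuming a simple in-memory cache; names like `FatCache` and `keep_available` are illustrative, not from any real product:

```python
from collections import OrderedDict


class FatCache:
    """LRU cache of cloud objects with user-pinned ("Keep Available") entries.

    Pinned entries are never evicted; unpinned entries fall out in
    least-recently-used order once the cache exceeds its capacity.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # key -> value, kept in LRU order
        self._pinned = set()

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)  # mark as most recently used
        self._evict()

    def get(self, key):
        if key not in self._items:
            return None  # cache miss: would fall through to the cloud
        self._items.move_to_end(key)
        return self._items[key]

    def keep_available(self, key):
        """User explicitly pins data so it survives eviction."""
        self._pinned.add(key)

    def _evict(self):
        # Drop least-recently-used unpinned entries until we fit.
        unpinned = [k for k in self._items if k not in self._pinned]
        while len(self._items) > self.capacity and unpinned:
            self._items.pop(unpinned.pop(0))


cache = FatCache(capacity=2)
cache.put("recipe", "pad thai")
cache.put("tax-return-2004", "...")
cache.keep_available("recipe")       # "Keep Available"
cache.put("photo", "...")            # evicts tax-return-2004, not recipe
assert cache.get("recipe") == "pad thai"
assert cache.get("tax-return-2004") is None  # fell out; refetch from cloud
```

A miss here isn't data loss, just a slower fetch from the cloud copy, which is the distinction the thread keeps arguing over.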
However, if you go back to a time before mobile broadband was commonplace, and you tried to introduce someone to it in a bad reception zone, they would've been very "meh." That seems to be where you are.
A strong case can be made that having access to mobile broadband intermittently is an improvement for the user over the way things worked before.
I haven't seen anyone even attempt to make a case that intermittent access to one's data and apps is an improvement over the current systems.
In other words, why would anyone willingly switch to this model? And among the unwilling, how does it better meet the needs of business than a thin client or other conventional solutions?
Having to select "Keep Available" requires the user to accurately predict the future value of access, consumes cache space, and adds another step to the process of saving stuff for later use.
It's not availability for next time that's the issue. It's availability now, when I need it, that matters.
A better analogy is an Allen wrench sitting in a toolbox out in the garage. It's not at hand when the disposal is stuck, but access is predictable. What you are proposing is that someone comes by once a week and swaps out the tools in the toolbox based on which ones you have been using most recently.
Again, where is the advantage to the user?
If the "fat cache" still lets you work in a disconnected state (such as iCloud would let you), I don't see this as a problem.