Also note the sharp contrast between the concept of "scrap computers" that have no identity whatsoever and the reality of today, where everyone's mobile computers (phones, tablets, e-readers, laptops, etc.) are intensely personalized. A lot of these errors come from naively mapping pre-digital behaviors and use cases onto a world of ubiquitous computing, rather than imagining entirely new methods of interaction and use. Why bother duplicating hardware to keep two separate pieces of information locally when you can simply have two files open in different tabs in an editor, or two Google Docs, or emails, or what-have-you?
"need" sets a very low bar for criticism. You can argue that you don't need pretty much anything.
The question is really whether many device-instances can offer a sufficiently better cost-benefit ratio than a single device-instance.
For example, I find it plausible that multiple tablets could be quite useful if there were a good way to coordinate the displays and interactions between them (e.g. for proofreading and arranging material). Such benefits would probably require changes in the OS UI.
Software vendors have no incentive to make it easy for people to exchange data between devices and software they don't control. "Ubiquitous computing" offers them at best little to no business value, nowhere near enough to justify the effort of supporting it in their applications, and they see anything that makes it easier to extract data from under their control as a business threat.
These days, big companies like to create cloud platforms that enable a limited "ubiquitous computing" for themselves - that is, you can work on something across multiple devices, as long as you're using their specific platform and have an always-on Internet connection.
The technical building blocks need to happen at the OS level, and they could, but OS vendors won't bother either, knowing that applications won't make proper use of them. Commercial applications will try to maximize the amount of data they suck in and minimize the amount they let out. It's fundamentally the same reason we don't have universal APIs for websites, and why websites fight so hard against people who try to make them interoperable (see also the Google Duplex HN thread).
I really want to see ubiquitous computing happen, but I can't see how it will, given that the software industry would reject it even if OSS people handed it to them ready and working, on a golden platter.
Cross-vendor interoperability isn't needed to get the multi-device UIs where "multiple tablets could be quite useful if there was a good way to coordinate the displays and interactions between them"; the devices could all be from the same manufacturer. For example, games where not all players get the same information (e.g. Scrabble, Cluedo, many card games) could have the shared UI on a tablet, with players using their phones to look at their cards/tiles.
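The shared-board/private-hand split is really just a per-device projection of one piece of shared game state. Here's a minimal sketch in Python of that idea (all names are hypothetical, and the networking/rendering layers are left out; each "view" is just the data a given device would render):

```python
import random

class TableGame:
    """Toy model of a multi-device game: one shared view for the
    communal tablet, plus a private view per player's phone."""

    def __init__(self, players, seed=42):
        rng = random.Random(seed)
        deck = [f"{r}{s}" for s in "SHDC" for r in "A23456789TJQK"]
        rng.shuffle(deck)
        # Deal five cards to each player; hands are private state.
        self.hands = {p: sorted(deck[i * 5:(i + 1) * 5])
                      for i, p in enumerate(players)}
        self.table = []  # cards played face-up, visible to everyone

    def shared_view(self):
        # What the communal tablet renders: public information only
        # (face-up cards and hand sizes, never the hands themselves).
        return {"table": list(self.table),
                "hand_sizes": {p: len(h) for p, h in self.hands.items()}}

    def private_view(self, player):
        # What one player's phone renders: that player's own hand.
        return {"hand": list(self.hands[player])}

    def play(self, player, card):
        # Moving a card from private to public state updates both views.
        self.hands[player].remove(card)
        self.table.append(card)
```

The point of the sketch is that "coordinating displays" is an access-control problem over one state object: the tablet only ever receives `shared_view()`, each phone only its own `private_view()`, so no device has to trust the others' screens.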
- Technology should create calm.
It's as inspiringly true as it is inherently rare.
Calm, IMO, is most often a byproduct of a long learning process and craft, which bring know-how. In these days of impatience and ever-shifting ground, you only get stress.