
The Computer for the 21st Century (1991) [pdf] - doener
https://www.lri.fr/~mbl/Stanford/CS477/papers/Weiser-SciAm.pdf
======
InclinedPlane
I remember reading this article in Scientific American at the time. It seemed
like it got a lot of things right even then, although it also seemed to
contain some hopelessly off-base ideas. One of the interesting big-but-close
misses is the idea of "tabs". You see the same idea in the Star Trek: The
Next Generation (and related) TV series from the same time frame: the notion
that ubiquitous computing would mean lots of computing devices, each of them
not very sophisticated. The reality is that it's shockingly easy to make very
capable computing devices (e.g. smartphones, tablets, etc.), and for the most
part you only need one of each variant (e.g. an ebook-optimized one, a
tablet, a smartphone, etc.) instead of many.

Also note the sharp contrast between the concept of "scrap computers" that
have no identity whatsoever and the reality of today, where everyone's mobile
computers (phones, tablets, e-readers, laptops, etc.) are intensely
personalized. A lot of these errors come from naively mapping pre-digital
behaviors and use cases onto a world of ubiquitous computing, rather than
imagining entirely new methods of interaction and use. Why bother duplicating
hardware to maintain two separate pieces of information locally when you can
simply have two files open in different tabs in an editor, or two Google
Docs, or emails, or what-have-you?

~~~
jamesrcole
> _for the most part you only need one of each variant (e.g. an ebook-
> optimized one, a tablet, a smartphone, etc.) instead of many._

"need" sets a very low bar for criticism. You can argue that you don't _need_
pretty much anything.

The question is really whether there is a way for many device-instances to
offer a sufficiently better benefit-to-cost ratio than a single
device-instance.

For example, I find it plausible that multiple tablets could be quite useful
if there were a good way to coordinate the displays and interactions between
them (e.g. for proofreading and arranging material). Such benefits would
probably require changes at the OS/UI level, though.

~~~
TeMPOraL
This is totally doable and could have been done 10+ years ago. Except it
didn't happen, because unfortunately the problem isn't technology. It's
business.

Software vendors have _no_ incentive for making it easy for people to exchange
data between devices and software they have no control over. "Ubiquitous
computing" gives them at best little to no business value, nowhere near enough
to justify the effort of making their applications support it, and they see
anything that makes it easier to extract data from under their control as a
business threat.

These days, big companies like to create cloud platforms to enable a limited
"ubiquitous computing" for themselves - that is, you can work on something
across multiple devices, as long as you're using their specific platform and
have an always-on Internet connection.

The technical building blocks need to happen at the OS level, and they could,
but OS vendors won't bother either, knowing that applications won't make
proper use of them. Commercial applications will try to maximize the amount of
data they suck in and minimize the amount they let out. It's fundamentally
the same reason we don't have universal APIs for websites, and why websites
fight so hard _against_ people who try to make them interoperable (see also
the Google Duplex HN thread).

I really want to see ubiquitous computing happen, but I can't see how it's
going to, given that the software industry would reject it even if it were
handed to them by OSS people, ready and working, on a golden platter.

~~~
Someone
_“Software vendors have no incentive for making it easy for people to exchange
data between devices and software they have no control over”_

That isn’t needed to get the multi-device UIs for _“multiple tablets could be
quite useful if there were a good way to coordinate the displays and
interactions between them”_; the devices could all be from the same
manufacturer. For example, games where not all players get the same
information (e.g. Scrabble, Cluedo, many card games) could have the shared UI
on a tablet, with players using their phones to look at their cards/tiles.
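
As a rough illustration of how little machinery that needs, here's a minimal
sketch of the "shared board on a tablet, private hands on phones" split,
assuming Node.js/TypeScript and the `ws` WebSocket package (the player names,
card strings, and message shape are all invented for illustration):

    // Hypothetical "one shared board, private hands per phone" server.
    // Assumes Node.js with the `ws` package installed (npm install ws).
    import { WebSocketServer, WebSocket } from 'ws';

    const wss = new WebSocketServer({ port: 8080 });
    const boards = new Set<WebSocket>();         // tablets: public view only
    const hands = new Map<string, WebSocket>();  // phones: one per player

    // Toy game state: a public pile plus a private hand per player.
    const publicState = { pile: ['Q♠'] };
    const privateHands = new Map<string, string[]>([
      ['alice', ['A♥', '7♦']],
      ['bob', ['K♣', '2♠']],
    ]);

    wss.on('connection', (ws, req) => {
      // Phones connect as ws://host:8080/?player=alice; tablets use no query.
      const params = new URL(req.url ?? '/', 'http://localhost').searchParams;
      const player = params.get('player');

      if (player) {
        hands.set(player, ws);
        // A phone only ever receives its own cards.
        ws.send(JSON.stringify({ hand: privateHands.get(player) ?? [] }));
      } else {
        boards.add(ws);
        // A tablet only ever receives the shared, public view.
        ws.send(JSON.stringify({ board: publicState }));
      }

      ws.on('message', (raw) => {
        // A phone plays a card: move it from that player's hand to the pile,
        // then push the new board to tablets and the new hand to that phone.
        const msg = JSON.parse(raw.toString()) as { from: string; play: string };
        const hand = privateHands.get(msg.from) ?? [];
        const i = hand.indexOf(msg.play);
        if (i >= 0) {
          hand.splice(i, 1);
          publicState.pile.push(msg.play);
          for (const b of boards) b.send(JSON.stringify({ board: publicState }));
          hands.get(msg.from)?.send(JSON.stringify({ hand }));
        }
      });

      ws.on('close', () => {
        boards.delete(ws);
        if (player) hands.delete(player);
      });
    });

The asymmetric information is just a server-side routing decision: public
state is broadcast to every board client, and private state only ever goes to
the socket that owns it.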

~~~
TeMPOraL
In that case there's exactly nothing preventing such a UI from happening. But
that's not "ubiquitous computing", that's just "a multiplayer game".

------
bugs_bunny
This is a prescient article given the state of computing at the time it was
written. In reading it, I also wondered if this explains Steve Jobs's
insistence on using the name iPad (he was pressured to change the name because
the default reference of "pad" was to something rather different). But the
author really blows it at the end (the last two paragraphs) when he claims
that ubiquitous computing will mean the decline of the computer addict and of
information overload.

~~~
thought_alarm
It's probably how the ThinkPad got its name.

~~~
kalleboo
Before the ThinkPad there was the THINK pad
[https://www.youtube.com/watch?v=9A9GBTFrFTQ](https://www.youtube.com/watch?v=9A9GBTFrFTQ)

------
godelmachine
I was reading the paper published on Project Jacquard, and this article was
cited as one of its inspirations. It also said that Project Jacquard fulfils
some of the predictions made in the paper.

------
joshmarinacci
I had the great fortune to meet Mark Weiser when I was an intern at PARC in
the late 90s. He was an amazing visionary.

------
agumonkey
from his Wikipedia page:

- Technology should create calm.

It's as inspiringly true as it is inherently rare.

Calm, IMO, is most often a byproduct of a long learning process and craft;
that's what brings know-how. In these days of impatience and ever-shifting
ground, you only get stress.

------
tambourine_man
typo in title: Cenury

------
miobrien
Fascinating article.

