

The Second Coming: A Manifesto - dejan
http://www.edge.org/3rd_culture/gelernter/gelernter_index.html

======
tl
_42\. To send email, you put a document on someone else's stream. To add a
note to your calendar, you put a document in the future of your own stream. To
continue work on an old document, put a copy at the head of your stream.
Sending email, updating the calendar, opening a document are three instances
of the same operation (put a document on a stream)._
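Read literally, point 42 collapses three applications into one primitive. Here is a toy in-memory sketch of that idea (my own illustration, not Gelernter's design; the `Lifestream` class and its method names are invented):

```python
import bisect
import time

class Lifestream:
    """Toy time-ordered stream: every operation is 'put a document at a time'."""

    def __init__(self):
        self.docs = []  # (timestamp, document) pairs, kept sorted by timestamp

    def put(self, document, at=None):
        """Email, calendar entry, and document edit are all this one call."""
        at = time.time() if at is None else at
        bisect.insort(self.docs, (at, document))
        return at

    def head(self, now=None):
        """Most recent document that is not in the future."""
        now = time.time() if now is None else now
        past = [d for t, d in self.docs if t <= now]
        return past[-1] if past else None

    def future(self, now=None):
        """Calendar view: documents whose time hasn't arrived yet."""
        now = time.time() if now is None else now
        return [d for t, d in self.docs if t > now]

stream = Lifestream()
stream.put("note from a coworker")             # "email"
stream.put("dentist", at=time.time() + 86400)  # "calendar"
stream.put("report draft, latest copy")        # "continue work on a document"
```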

Sounds like a gap in Unix's file metaphor. Would it be reasonable to copy a
file to /proc/mail/coworker@company.com/ and have the right thing happen?

 _57\. Nowadays we use a scanner to transfer a document's electronic image
into a computer. Soon, the scanner will become a Cybersphere port of entry, an
all-purpose in-box. Put any object in the in-box and the system develops an
accurate 3D physical transcription, and drops the transcription into the cool
dark well of cyberspace. So the Cybersphere starts to take on just a hint of
the textural richness of real life._

I think mobile phone cameras will be the "scanner" here.

~~~
eru
> Sounds like a gap in Unix's file metaphor. Would it be reasonable to copy a
> file to /proc/mail/coworker@company.com/ and have the right thing happen?

Sounds reasonable. Though you would probably have to set write-new-files-only
permissions in that directory, to capture the semantics of email: reading and
modifying files, and even listing directory contents, should not be allowed.
Perhaps it would be better to make /proc/mail/coworker@company.com a special
file that you can write to, instead of a directory?

~~~
tl
It's an interesting argument, but if you make it a file, will the system know
that:

cat file1 > /proc/mail/coworker@company.com

cat file2 > /proc/mail/coworker@company.com

are two separate messages?

~~~
eru
Good question. I don't know. That's why I put the question mark in there.
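For what it's worth, this is roughly the problem qmail's Maildir format solves for real mail spools: each message is its own file, written into `tmp/` and then renamed into `new/`, so message boundaries come for free. A rough sketch of that delivery step (the file-naming scheme here is simplified from real Maildir):

```python
import os
import time

def deliver(maildir, message_bytes, _seq=[0]):
    """One message per file, Maildir-style: write into tmp/, rename into new/.
    Because each delivery creates its own file, message boundaries are free."""
    for sub in ("tmp", "new", "cur"):
        os.makedirs(os.path.join(maildir, sub), exist_ok=True)
    _seq[0] += 1
    name = "%d.%d.%d" % (time.time(), os.getpid(), _seq[0])  # simplified name
    tmp_path = os.path.join(maildir, "tmp", name)
    with open(tmp_path, "wb") as f:
        f.write(message_bytes)
    os.rename(tmp_path, os.path.join(maildir, "new", name))  # atomic on POSIX
    return name
```

Under that scheme, the two `cat` commands above would each trigger one delivery (one open/write/close cycle, one file), so the system would see two separate messages.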

------
jcromartie
I'm having a heated argument with a person who _refuses_ to read past point
30, because he understands files and hierarchies and thinks that anybody who
can't organize their computer files is just hopeless and shouldn't bother.
He's the kind of person holding this whole thing back.

The future of user interfaces is one without named files and directories; I'm
100% positive about that. Real-life organization of information is more
organic than named things belonging to one category. Ideas are not grouped so
rigidly, and neither should last year's tax return be (it's financial, and it's
personal, and it's governmental, and it's a form, and you created it about a
year ago, etc. etc. etc.).

Pieces of information should be known for what they are, not what they are
named. Naming them is tedious.

~~~
RyanMcGreal
I know it's not a new idea, but I wonder if the time has come for the
filesystem model to be replaced with the database model. I realize that a
relational database simply trades one hierarchical structure for another, but
it seems that the schema-free, arbitrary structure of a "NoSQL" model might be
a better fit for the globs of data that fill a modern storage drive. The key
is having a search algorithm powerful and intuitive enough that you can
actually find what you need.
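One way to picture the schema-free, search-driven model: documents as bags of attributes, retrieved by query rather than by path. A toy sketch (the attribute names and documents are invented for illustration):

```python
# Toy "schema-free" store: documents are bags of attributes, found by
# query rather than by a path in a hierarchy.
documents = [
    {"kind": "tax-return", "year": 2009, "tags": {"financial", "personal"}},
    {"kind": "photo", "year": 2010, "tags": {"personal", "vacation"}},
]

def find(docs, **criteria):
    """Return every document whose attributes match all criteria."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in criteria.items())]

tax_2009 = find(documents, kind="tax-return", year=2009)
# tax_2009 == [documents[0]]
```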

~~~
bpyne
Relational databases are not limited to hierarchies. You can create complex
graphs if you want; it's all in how you model your information. They were
created precisely because of the limitations of hierarchical and network
database systems.

There was an interesting discussion on Artima ("Software Development Has
Stalled") last week that touched on the schema-free idea. One of the
participants suggested that computers could use inference to determine data
relationships. (In his words, the computer is hypothetical and would need to
be infinitely fast.) Presumably by computer he meant some agent doing machine
learning on top of the "infinitely fast" hardware.

If your idea is along the inference line, then it would be interesting to see
how far inference could go. However, if you mean simply to encode
relationships into each program (or into a library), then it's just a
reinvention of a schema.

~~~
RyanMcGreal
Maybe a solution will lie in adaptive neural networks.

------
pashields
This is all a little bit out there for me, but Gelernter is a great mind. I
still think Linda will take over the world. The most interesting part of this,
for me, was this quote: "The computer mouse was a brilliant invention, but we
can see today that it is a bad design. Like any device that must be moved and
placed precisely, it ought to provide tactile feedback; it doesn't."

It's funny to read that just as hype builds for the iPad and mobile internet
devices, almost all based on touch screens. Even before that, trackpads and
membrane keyboards were slow and steady movements away from strong tactile
feedback. The "Minority Report" interface that people love so much is even
further along: there is no contact at all. None of this is to say Gelernter is
wrong, but if he is right, I wonder when we will see a movement toward
increased and/or more intelligent tactile feedback.

~~~
jcromartie
At least touch interfaces remove a level of abstraction that currently exists
between moving a mouse on a desk and moving a pointer on the screen. A touch
screen with tactile feedback (the iPhone already has vibration... more apps
should make use of this) would be nearly ideal.

~~~
stcredzero
Prediction:

Touch interfaces will comprise the new end-user interface vocabulary. As
happened with mouse/keyboard/overlapping windows, a de-facto standard set of
UI conventions will be established as a new mainstream.

Apple will blaze the trail with the iPad, but they will be too greedy with
their IP, and their closed corporate policies will drive the creation of a
different but similar standard that they do not directly control. Hopefully,
it won't be Microsoft that jumps in and does it this time.

~~~
eru
Interesting.

Could you put any money, some odds and a timeline on your prediction?

------
dejan
The most interesting part, imho, is the lifestream he talks about, as well as
the implied "eternity service" for information management. I wonder why, Linda
aside, more of these ideas didn't see implementation. I am assuming the
Internet and open source collaboration came too late.

Linda for sure will see reincarnation, but I can't stop thinking how funny it
is that old technologies are coming back in a big way, as "innovations". PG
might be right: all languages seem to be evolving into Lisp. :D

~~~
dpezely
Regarding a return of Linda, I couldn't help but note similar themes when
first learning about Amazon's Dynamo and, later, the Apache Cassandra project.
I'm not saying there's a perfect match, but when one's imagination has
academic roots in the era of Linda, it is easy to see a connection through
that lens.

Anyone else see the potential there?

At the very least, it influences how I might use something like Cassandra (or
dbm files for that matter), and I've been planting the meme for others to
approach their designs that way too.

------
troystribling
19\. The power of desktop machines is a magnet that will reverse today's
"everything onto the Web!" trend. Desktop power will inevitably drag
information out of remote servers onto desktops.

20\. If a million people use a Web site simultaneously, doesn't that mean that
we must have a heavy-duty remote server to keep them all happy? No; we could
move the site onto a million desktops and use the internet for coordination.
The "site" is like a military unit in the field, the general moving with his
troops (or like a hockey team in constant swarming motion). (We used
essentially this technique to build the first tuple space implementations.
They seemed to depend on a shared server, but the server was an illusion;
there was no server, just a swarm of clients.) Could Amazon.com be an
itinerant horde instead of a fixed Central Command Post? Yes.
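For readers who haven't met Linda: the tuple-space model behind point 20 boils down to a few coordination operations. A minimal single-process sketch (real Linda adds `eval` and distribution across machines; this in-memory version is only meant to show the shape of the API):

```python
import threading

class TupleSpace:
    """Minimal in-process sketch of Linda's coordination primitives."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        """Publish a tuple into the space."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None in a pattern position acts as a wildcard.
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

    def rd(self, *pattern):
        """Read a matching tuple without removing it; block until one exists."""
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

    def take(self, *pattern):
        """Linda's 'in': read and remove a matching tuple; block until found."""
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t
```

The "swarm of clients, no server" trick follows from the model: any client can `out` work and any other client can `take` it, with the space itself doing the coordination.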

Today the network "edge" is the carrier-owned physical location nearest the
end user. This suggests the edge will move to the end user's own physical
location. Financially this makes sense, since it would shift some of the
carrier's hardware and compute costs to the customer, lowering service costs
while also decreasing latency. It could be the next step in the development of
the compute cloud.

~~~
dejan
I believe he is talking about large-scale, generalized p2p: finally merging
the benefits of web apps with the supercomputer-like power of our desktops,
coordinating that power across the net, and having information "fly around"
in cyberspace.

sexy.

~~~
troystribling
Yes, I agree he is talking about p2p. ISPs could start deploying devices on
customer premises to provide caching, storage, compute, and wireless network
services. In densely populated areas this would essentially amount to a local
CDN.

------
stcredzero
_1\. No matter how certain its eventual coming, an event whose exact time and
form of arrival are unknown vanishes when we picture the future. We tend not
to believe in the next big war or economic swing; we certainly don't believe
in the next big software revolution.

2\. Because we don't believe in technological change (we only say we do), we
accept bad computer products with a shrug; we work around them, make the best
of them and (like fatalistic sixteenth-century French peasants) barely even
notice their defects — instead of demanding that they be fixed and changed._

Corollary: if you aim for something world-changing, you need to aim high
enough that most Slashdotters won't immediately get it and say "meh."

I think this level may be out of reach for most startups now. (Google,
Facebook, and Twitter are notable exceptions. But I think the low-hanging
fruit has largely been picked from the current set of user interface kits.)

------
wglb
A fascinating vision, though it isn't clear when it was written. Also, for
some odd reason, number 15 is missing.

~~~
ableal
June 2000, deducing from the dateline of the quoted/linked NYTimes piece,
which mentions "last week".

Amusing typo in the collected reactions: "Feeman".

------
skmurphy
I was struck by this paragraph in John McCarthy's detailed critique that
followed the manifesto and thought it pointed out a real issue for many
systems:

    
    
    "Unfortunately, the making of computer systems and software is dominated
    by the ideology of the omnipotent programmer (or web site designer) who
    knows how the user (regarded as a child) should think and reduces the
    user's control to pointing and clicking. This ideology has left even the
    most sophisticated users in a helpless position compared to where they
    were 40 years ago in the late 1950s."

------
niels_olson
57: Compiz actually has a small hint of part of this with its rain effect. A
"butterfly effect" would be awesome, references to chaos theory included.

------
ableal
I appreciate #30-34, on file names, which open with

 _30\. If you have three pet dogs, give them names. If you have 10,000 head of
cattle, don't bother._

The count of "eight possibilities" on #34 puzzles me a bit.

I also sympathize with the (time) streams idea further down. Archeological
strata are a very efficient concept for storage, and not the worst possible
one for retrieval ...

~~~
danparsonson
Actually, I thought the cattle analogy missed the point - surely the reason we
don't give names to each of our 10,000 cattle is that we don't care which one
is which? I don't find that to be true of data. Also, file names are often
used to describe the data to which they relate; in that sense, we actually do
give names to each of our 10,000 cattle - we call them 'cows' - it just
happens that we use the same name for each individual because they are
identical at the abstraction level we care about.

Or maybe it's me who missed the point? :-)

~~~
ThomPete
Actually, it's not true of cattle either.

We care about the individual animal when we process it: when it goes to the
slaughterhouse, we want to make sure it's not contagious, that it's not a
calf, etc.

We might even group them into males and females.

The answer is context, not categorization.

~~~
RyanMcGreal
Processing cattle is a kind of filter/map operation: walk the list of cows,
assess each one against a set of criteria, and then process the cows that
meet them.
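That filter/map reading is easy to make concrete; the herd records and attribute names below are invented for illustration:

```python
# Filter/map over a herd: no cow needs a name, only testable attributes.
herd = [
    {"id": 1, "sex": "f", "age_months": 30, "contagious": False},
    {"id": 2, "sex": "m", "age_months": 4,  "contagious": False},
    {"id": 3, "sex": "f", "age_months": 48, "contagious": True},
]

def ready_for_processing(cow):
    """The criteria from the thread: not contagious, not a calf."""
    return not cow["contagious"] and cow["age_months"] >= 12

selected = [cow["id"] for cow in herd if ready_for_processing(cow)]  # filter + map
# selected == [1]
```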

