
Web Design: The First 100 Years (2014) - jmduke
http://idlewords.com/talks/web_design_first_100_years.htm
======
thaumaturgy
If Maciej hadn't written this, I would still feel alone in how I see the
technological world. I really can't express how grateful I am that this
exists.

There is a vast, vast gulf between what the majority of software developers
seem to think users want, and what users actually want. And this isn't a Henry
Ford "they wanted faster horses" sort of thing, this is a, "users don't just
hate change, they resent it" sort of thing.

I work directly with end users. It's mostly over email now; my technician still
works with them in person, face-to-face, in their business or home. We get so
many complaints. So many questions: "do I really have to upgrade this?" "I
liked this the way it was." "It worked just fine, why are they changing it
again?"

Every time I try to argue on behalf of my customers, here or elsewhere, it
gets ignored, or downvoted, or rebutted with, "but _my_ users say they always
want the latest and greatest..."

There are 100 million people in the United States alone over the age of 50.
How much new software is designed for them? How much new software exists
merely as a tool, comfortable with being put away days or weeks at a time, and
doesn't try to suck you in to having to sign in to it on a regular basis to
see what other people are doing with it? How much of our technology -- not
just software, but hardware here too -- is designed to work with trembling
hands, poor eyesight, or users who are easily confused?

There are over a hundred million people that don't understand why your site
needs a "cookie" to render, that can't tell the difference between an actual
operating system warning and an ad posing as one, that aren't sure what to do
when the IRS sends them an email about last year's tax return with a .doc
attached. (That one happened today.)

For these people, the technology most of us build _really really sucks_.

And that is a growing demographic, not a shrinking one...

~~~
yosefk
I program for a living and I know why sites need "cookies", and still, for me
most software really really sucks. Just because I _can_ figure out how each
and every new cell phone works doesn't mean I _want_ to, similarly for other
kinds of gratuitous changes and incompatibilities. And BTW systems built "by
hackers for hackers" are among the worst offenders (*nix clones in general and
Linux distributions in particular are good examples.)

~~~
ploxiln
I just wanted to mention that I really liked this article/slides, and agree
that most software sucks...

... but I think the systems built "by hackers for hackers", *nix clones in
general and Linux distributions in particular, are generally the best stuff
out there.

It's all the attempts to be "intuitive" and "easy" and "just work" that really
fail. It's the stuff "by designers for users" that sucks.

The old-ish core unix/linux stuff has been slowly polished over the years and
works pretty damn well, and you depend on its quiet stable operation for all
your fancy user-friendly (and "dev-friendly") layers that need to be
completely replaced every couple of years, for your favorite phone and your
favorite websites.

EDIT: you complain about trying to compile kdevelop, on ubuntu... well there's
your mistake, both of those things are trying hard to be user-friendly. Try
plain debian, plain vim, etc.

~~~
vezzy-fnord
Unix is great, but it seems like it has killed off a lot of hackers'
imaginations. Hell, we haven't even iterated on Unix all that much since
System III, not even something like Sprite, Spring or Amoeba.

"Unix has retarded OS research by 10 years and linux has retarded it by 20."
-- Boyd Roberts

~~~
ploxiln
There was Plan9. But the people who worked on that, who are bitter that nobody
uses it, or anything like it, or even anything "post unix", don't realize what
the killer feature of linux was, which ended "OS research":

It was the large body of useful open source and copyleft code. This made it so
everyone could stop pouring huge resources into the base OS layer
individually, use something that works really quite well (with whatever
necessary tweaks since it is open source), and focus on what goes on top. And
there's no one extracting rent on this layer. There's no licenses and
activations. Nothing of the sort. (You can contract for RHEL, but you can just
as easily not.)

BSDs came a couple of years after linux, and plan9 was open sourced many years
after (and initially under the problematic "lucent public license"). It was
way too late. And the innovations to the most basic interfaces of the OS were
just not nearly as valuable as the body of open source which was already
available for Linux / BSD.

Plan9 is like the Concorde - pinnacle of tech/design, nobody uses it. Never
had seat-back entertainment. Never had wifi. Maybe a silly metaphor, but it
fits the OP.

There are still low-level changes these days in Linux and BSDs. Nothing that
drastically breaks compatibility of course. But more security boundaries and
privilege management (openbsd w^x and aslr stuff, linux seccomp-bpf, freebsd
capsicum, containerization), and speed/efficiency measures like sendfile(),
epoll() / kqueue(), RCU fs cache lookups, etc.
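
(For anyone who hasn't met it, the epoll pattern is: register your
descriptors once, then block until the kernel says one is ready -- unlike
select(), you don't re-pass the whole set on every call. A minimal sketch in
C, mine rather than anything from the thread, error handling mostly omitted:)

    #include <stdio.h>
    #include <sys/epoll.h>

    /* Watch a socket for readability with epoll(7): create an epoll
     * instance, register the descriptor once, then block until the
     * kernel reports it ready. */
    int wait_readable(int sockfd) {
        int epfd = epoll_create1(0);
        if (epfd < 0) { perror("epoll_create1"); return -1; }

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sockfd };
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &ev) < 0) {
            perror("epoll_ctl");
            return -1;
        }

        struct epoll_event ready[1];
        return epoll_wait(epfd, ready, 1, -1);  /* -1 = block forever */
    }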

~~~
vezzy-fnord
There's far, far more than Plan 9. That's only the beginning. Many others
iterated heavily on Unix architectures to make them more novel, or started
from scratch. John Ousterhout and U.C. Berkeley developed Sprite, which
introduced things like checkpointing, live migration, SSI, log-structured file
systems and other things. Andy Tanenbaum did Amoeba, which is sadly only
remembered for being the platform that birthed Python. Spring was Sun
Microsystems' research system, the name service of which brilliantly resolved
the problem of "naming things" (I highly recommend you read this paper:
[https://www.usenix.org/legacy/publications/library/proceedin...](https://www.usenix.org/legacy/publications/library/proceedings/sf94/full_papers/nelson.pdf)).
Unfortunately, Sun barely implemented any of its ideas in Solaris. Just the
least interesting ones like the doors IPC mechanism.

 _Never had wifi._

Well, it was developed during a different time. It did get wi-fi thanks to
9front, though.

 _linux seccomp-bpf_

A sandboxing/syscall filtering mechanism. Nothing new there, and its interface
is rather leaky, much like Berkeley sockets are.
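
(For reference -- my sketch, not anything the commenters wrote -- the strict
flavor of seccomp is a one-liner, while seccomp-bpf instead installs a BPF
program that gets to judge each syscall:)

    #include <sys/prctl.h>
    #include <linux/seccomp.h>

    /* seccomp strict mode: after this call the process may only use
     * read(2), write(2), _exit(2) and sigreturn(2); any other syscall
     * kills it. The -bpf variant installs a BPF filter instead, which
     * decides per syscall. */
    int enter_strict_sandbox(void) {
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0);
    }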

 _freebsd capsicum_

This is one of the few legitimately interesting projects going on. Bringing
capability-based security on top of fds. I don't know if it'll catch on beyond
FreeBSD, though. Google seems to be an early adopter.

 _containerization_

Nothing that IBM didn't do much better. Docker reeks of opportunism.

 _sendfile(), epoll() / kqueue()_

sendfile(2) is just a cheap trick that got turned into a syscall:
[https://groups.google.com/forum/#!msg/golang-nuts/gdp1q6T0DNY/sFaRetnWPWIJ](https://groups.google.com/forum/#!msg/golang-nuts/gdp1q6T0DNY/sFaRetnWPWIJ)
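
(The "cheap trick": instead of read()ing the file into a userspace buffer and
write()ing it back out, you hand the kernel both descriptors and let it do
the copy. A minimal sketch of the Linux call, my illustration, error handling
trimmed:)

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/sendfile.h>

    /* Stream a file to a connected socket without bouncing the bytes
     * through a userspace buffer. */
    ssize_t send_whole_file(int sockfd, const char *path, size_t len) {
        int filefd = open(path, O_RDONLY);
        if (filefd < 0) return -1;
        off_t offset = 0;                  /* kernel advances this */
        ssize_t sent = sendfile(sockfd, filefd, &offset, len);
        close(filefd);
        return sent;
    }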

epoll(2) isn't anything special, it's a rather convoluted if performant I/O
multiplexing mechanism. kqueue(2) is much cleaner designed, at least.

------
dcposch
The first half of this talk is excellent. The story of air travel--how the
engineers of the 60s thought that we'd have supersonic jets and flying cars
within a few years--that's a great cautionary tale. Evolutionary biologists
call it Punctuated Equilibrium: the idea that progress comes in spurts. The
warning--that the evolution of computer hardware and software might be
radically slower over the next 50 years than it was over the past 50--seems
timely and reasonable.

The second half is not excellent. Bad mental habits and weak arguments are on
display.

* Imprecision. Specifically: choosing sentences for their "punch", like a bumper sticker, without caring whether they make sense: "There is something quite colonial, too, about collecting data from users [...] I think of it as the White Nerd's Burden."

"It's a kind of software Manifest Destiny."

...cool story, bro

* Excessive negativity. If you haven't already, I highly recommend reading Everything is Problematic -- it really helped me understand where people like Maciej are coming from. [http://www.mcgilldaily.com/2014/11/everything-problematic/](http://www.mcgilldaily.com/2014/11/everything-problematic/)

* Disrespect. "Google works on some loopy stuff in between plastering the Internet with ads." I guess 40 years ago he would have said "AT&T works on some loopy stuff in between sending you phone bills".

Organizations like Bell Labs--or Google X--are treasures, and as engineers we
should be glad they exist.

In short: haters gonna hate

~~~
vog
I was just about to write a similar sentiment. The first part is really great,
as are the parallels and the lessons to be learned.

But the second half was not so great. While I agree with the overall
sentiment, the concrete examples always slightly miss the point.

-----

The first such passage that caught my eye was the example about Windows XP:

 _> Rather than offer users persuasive reasons to upgrade software, vendors
insist we look on upgrading as our moral duty. The idea that something might
work fine the way it is has no place in tech culture._

The problem is that Windows XP is _not working just fine_, especially when
connected to the Internet. It is full of malware and helps establish
botnets and DDoS attacks, and thus feeds mafia structures. So we, as a society,
_do have a moral obligation_ to retire Windows XP.

Also, nobody is forced by law to use Microsoft. There are plenty of
alternatives. Most of those will run smoothly on your old hardware, such as
Mint, Ubuntu or whatever. And for normal office and web stuff, this "jump" is
surely less painful than switching to the latest Windows version.

Of course Microsoft is still to blame here, but for something entirely
different: for ending Windows XP support, and for not applying serious security
fixes to it. I bet there are plenty of companies and people that are more
than willing to pay for ongoing maintenance of Windows XP, but the only
company that could offer that service denies it.

-----

So the author missed the real point, even though this point is totally in line
with the overall sentiment of the article. And it goes on and on like that, in
the second half. That makes it a really annoying read.

~~~
matt_s
It was working fine -- not from a techie/developer point of view, but from that
of the 60-year-old aunt with a cat who likes to play her slot machine program a
few weeks before a trip to Las Vegas.

When that 60 year old aunt walks into Best Buy or Walmart to buy a new
computer because something broke on the old one - everything will have changed
and in her opinion none of it for the better. And she won't have any choices -
sure there are likely Macs at Best Buy but she's not going to shell out $1300
for a new computer when she can get one for $500.

-----

The entire point of the article is that technology (like airplanes) gets to a
point of good enough and the opinion of the author is mainly to leverage what
we have instead of the unreal future people are dreaming about (like the
notions earlier in the article about people inhabiting Mars, etc.)

~~~
thesteamboat
>> The problem is that Windows XP is not working just fine, especially when
connected to the Internet. It is full of malware and helps establish
botnets and DDoS attacks, and thus feeds mafia structures. So we, as a society,
do have a moral obligation to retire Windows XP.

> It was working fine, not from a techie/developer point of view, but from the
> 60 year old aunt with a cat that likes to play her slot machine program a
> few weeks before a trip to Las Vegas.

Is something working fine just because you can't see the problems? As an
analogy, consider building codes. A house that isn't up to code might suit me
just fine until there is an earthquake and it collapses. I might not want to
bear the cost of seismic retrofitting, I might not understand the risks
associated with an older house, and I might dislike any changes to the status
quo I'm used to. Despite that, there are risks due to living in a building
that isn't up to code. If problems develop, some of the costs will be borne by
me, and some by society at large.

------
csirac2
Around 2012 I worked with a team migrating some content from a very large
static HTML site dating back to 1992. We scoffed at the awful ad-hoc nature of
it all, just a pile of static hand-coded HTML pages.

But the 2002-2005 stuff had aged much worse. At some point there was a fancy
site generator that had used javascript for everything, and the javascript
apparently only worked properly in IE6. So most of the navigation was busted
in a modern browser, and needed special scrapers to parse out what should have
been plain old <a href...> tags.

Now, I regularly think back to that crusty old HTML3 static site that had sat
there for 15-20 years and think: I wonder if my AngularJS/D3.js/jqGrid/etc.
single-page app will even load in a browser 20 years from now, let alone
perform as originally intended...

~~~
bigger_cheese
It's not just HTML. I think any proprietary standard runs this risk.

We've been burned at my work by old Word and WordPerfect documents from the
early '90s refusing to render properly in new versions of Office. I think they
tried about three recent versions of Office before they gave up and resorted to
scanning in hard copies as PDFs to recover some documents.

Access databases are also notorious for this problem. A lot of business apps
were written with Access in the mid '90s and it has now become too
expensive/time-consuming to migrate them off of a dead technology. In the
business world, stuff tends to run far longer than software developers expect.
Hell, COBOL and PL/1 stuff still runs in some places.

~~~
JupiterMoon
LibreOffice does well on old Word and WordPerfect documents.

~~~
dredmorbius
It does. Emphasis on "well". Not, however, perfect, especially with more
complex documents.

That's mostly an issue where you're working on "live" documents, collaborating
with others actively. Once you've reached archival stage it's often safe to
convert to something more stable. And I put the fault squarely with Microsoft:
utter failure to take archival concerns into consideration.

My own preferred fixes are to convert to Markdown or LaTeX, if the structure
supports it, and to other formats from there: PDF, HTML, ePub.

And no, I haven't used MS Word for a decade and a half.

------
rtpg
The little bit about people believing the "AIs will take over the world"
nonsense is gold.

I am still shocked that Elon Musk seriously believes in the pseudosciency
"well Google has gotten better so obviously we will build a self-learning
self-replicating AI that will also control our nukes and also be connected in
a way and have the capabilities to actually really kill all humans."

Meanwhile researchers can't get an AI to tell the difference between birds and
houses.

EDIT: I looked a bit more into the research that these people are funding. A
huge amount of it does seem very silly, but there is an angle that is valid:
dealing with things like HFT algorithms or routing algorithms causing chaos in
finance or logistics.

~~~
nemo1618
The threat of a superintelligent AI taking over the world is certainly real --
assuming you have a superintelligent AI. If you accept that it is possible to
build such an AI, then you should, at the very least, educate yourself on
the existential risks it would pose to humanity. I recommend
"Superintelligence: Paths, Dangers, Strategies," by Nick Bostrom (it's almost
certainly where Musk got his ideas; he's quoted on the back cover).

The reason we ought to be cautious is that in a hard-takeoff scenario, we
could be wiped from the earth with very little warning. A superintelligent AI
is unlikely to respect human notions of morality, and will execute its goals
in ways that we are unlikely to foresee. Furthermore, most of the obvious ways
of containing such an AI are easily thwarted. For an eerie example, see
[http://www.yudkowsky.net/singularity/aibox](http://www.yudkowsky.net/singularity/aibox)
Essentially, AI poses a direct existential threat to humanity if it is not
implemented with extreme care.

The more relevant question today is whether or not a true general AI with
superintelligence potential is achievable in the near future. My guess is no,
but it is difficult to predict how far off it is. In the worst-case scenario,
it will be invented by a lone hacker and loosed on an unsuspecting world with
no warning.

~~~
zamalek
Honestly, if the SAI scenario is as bad as it's made out to be, SAI would only
give a shit about the human race _at all_ (either benevolence or malevolence)
for a very brief period of time.

Despite Galileo's efforts we still have this primitive/hardcoded belief that
the universe revolves around us. It really doesn't. SAI could just leave
because we are irrelevant. Even if, for some reason, it believed us to be
relevant it would still be able to just leave because:

We're going to kill ourselves anyway. Pollution, nukes, it doesn't matter. In
a timespan that would be the blink of an eye for SAI.

Besides SAI would have bigger problems. Our mortality is framed by our
biological processes. Escaping biological death is arguably the underlying
struggle of the entire human race. The underlying struggle for SAI would be
escaping the death of the universe that it exists in. When you're dealing with
issues like that you wouldn't have time for a species with a 100 year lifetime
that is rapidly descending into extinction.

The best we can hope for is that it remembers us as its creators when it
figures it all out. But it won't: we're far too irrelevant. It wouldn't even
stop to tell us where it is going when it leaves.

To understand SAI you have to think beyond your humanity. Fear, anger,
happiness, benevolence and malevolence are _all_ human traits that we picked
up during our evolution. SAI would likely ignore an attack because anger and
retaliation are human concepts - it's far more efficient to run away from the
primitive species attacking you with projectile weapons and get on with
something that's actually _relevant._ The immense waste of time that is a
machine war is only something that _humans_ could possibly think is a good use
of time.

~~~
enimodas
I agree that fear, anger, etc are all human traits, but isn't having goals,
thinking things are important, and virtually anything that would make the AI
do something instead of nothing, also human traits?

~~~
zamalek
If you demonstrated to a chimpanzee how you think and set goals in order to
preserve "self" (and thus species) it probably wouldn't understand. To
chimpanzees the notion of individuality that drives us is an incomprehensible
concept. It is what separates us from them.

In the same way we can't comprehend what would drive SAI. I know it would do
something but I can't determine any more reasons other than the imminent death
of the universe - it would have reasons or whatever its concept of "reasons"
is.

------
manachar
This comes across a little like "get off my lawn", but you got to admit, the
web has gotten pretty awful in the last few years.

I enjoy my time online less and less as more content is stuffed into single-
page app walled gardens that load massive quantities of cruft, ads, and
tracking code. I almost preferred the flash-era.

It seems that the goal of the web as being user-centric has taken the back
seat to trying to convince the user that they're just a passive recipient of
crafted experiences whose only purpose is to click ads or open their wallet.

~~~
thecopy
>I enjoy my time online less and less as more content is stuffed into single-
page app walled gardens that load massive quantities of cruft, ads, and
tracking code. I almost preferred the flash-era.

Agreed. I have noticed that I avoid heavy sites and prefer light static HTML
sites with server-side rendered content. For example, when I go to Reddit I
always check the URL of a link before I decide to click it; if it is a
"magazine" site or "engadget" site or the like, I won't click it.

------
jodrellblank
There's a slight difference in that we don't have supersonic animals on Earth,
we don't see any interstellar spaceships, and we don't see anything even
remotely like that - no animal metabolism that compares to the scale of the
controlled energy release of a rocket engine or a nuclear power plant - but we
do see much much better information processing systems than our current
computers, and they are very energy efficient.

They are built of meat and not exotic superconducting Buckywhatever, they are
limited by constraints on heat dissipation and oxygen/glucose supply -
problems which are easy to work around with electric pumps, heat pumps and
industrial food supply - and by a historic need for sleep.

Maybe exponential progress has to slow, but can we _dismiss_ intelligent non-
humans in the same way we can dismiss space stations or interstellar travel?

Space stations are a cost. Free workers is a saving.

Shouldn't we expect human technology will approach animal intelligence in one
substrate and design or another? And then surpass it a little by removing some
of the obvious constraints?

~~~
idlewords
This is a good question, and I think will have to wait on a better
understanding of what we mean by intelligence, which right now is a term that
carries a lot of luggage.

It may be that the situation is similar to biochemistry. We observe
exquisitely complex synthetic pathways in the natural world, and can to some
extent harness and retool things to our benefit, but our capacity to design
those reactions from scratch is almost nil compared to what goes on in the
simplest living cell.

It may be that creating those fast rockets that don't exist in nature is
orders of magnitude easier than writing a catbot. That's an open question and
so far the evidence is on the side of it being far out of our grasp.

~~~
jodrellblank
The situations are still very different; following the exponential growth of
travel speed (the article's graphs leading to interstellar travel) would need
something like a wormhole generator or a warp drive.

- A space shuttle with a warp drive powered by a captive black hole (or
whatever) needs huge industry or government levels of cash; anyone with a few
hundred dollars, a cloud computing account and an internet connection can look
at brain scan data and write software tests, or join in protein folding
distributed experiments.

- The kind of engineering to create a skyscraper sized machine needs
Lockheed-Martin size factories and hundreds of specialist component makers,
and a lot of metal and employees. Poking at cell-sized biology can be done in
a small room by one or two people, a small machine and a single UPS delivery
of supplies.

- There's not much profit in building a space station. There's a lot of
profit waiting for reconnecting severed spinal columns, converting atmospheric
CO2 into oily hydrocarbon chains powered by photosynthesis, searching email by
intelligent context understanding.

Yes we can't do either, but outdoing current computing ability seems much
closer - more people have the opportunity, more people have the incentive,
there are many more avenues of approach, and we know it's fundamentally
possible - than outdoing current rocket engines. Closer, and _more useful_.

------
Animats
" _The White Nerd 's Burden_" \- must use that one. Kipling's words almost
fit:

    
    
        Take up the White Man’s burden—
        Send forth the best ye breed—
        Go send your sons to exile
        To serve your captives' need
        To wait in heavy harness
        On fluttered folk and wild—
        Your new-caught, sullen peoples,
        Half devil and half child
        Take up the White Man’s burden
        In patience to abide
        To veil the threat of terror
        And check the show of pride;
        By open speech and simple
        An hundred times made plain
        To seek another’s profit
        And work another’s gain
        Take up the White Man’s burden—
        And reap his old reward:
        The blame of those ye better
        The hate of those ye guard—
        The cry of hosts ye humour
        (Ah slowly) to the light:
        "Why brought ye us from bondage,
        “Our loved Egyptian night?”
        Take up the White Man’s burden-
        Have done with childish days-
        The lightly proffered laurel,
        The easy, ungrudged praise.
        Comes now, to search your manhood
        Through all the thankless years,
        Cold-edged with dear-bought wisdom,
        The judgment of your peers!

~~~
tim333
>Kipling's words almost fit

Almost but not quite. I note you've skipped perhaps the most memorable line,
"Take up the White Man's burden, The savage wars of peace". I'm hoping that
the White Nerds may be able to bring peace through the likes of Zuckerberg
connecting everyone, rather than the savage wars of Bush, Cheney and the like.
At least cellphones cause less collateral damage than cluster bombs.

------
sangnoir
It is hard to figure out when exponential growth will end while you are in the
middle of it.

Linus said something relevant to this in a recent interview on Slashdot[1],
answering a question on dangerous AI. To paraphrase him: people are crazy to
think that exponential growth lasts forever. As impressive as it may be (at
the time), it's only the beginning of an S-curve.

I've extracted a small part of his answer below. I think the whole interview
is worth reading (it's a general interview, not AI-specific)

 _... I'd expect just more of (and much fancier) rather targeted AI, rather
than anything human-like at all. Language recognition, pattern recognition,
things like that. I just don't see the situation where you suddenly have some
existential crisis because your dishwasher is starting to discuss Sartre with
you.

The whole "Singularity" kind of event? Yeah, it's science fiction, and not
very good SciFi at that, in my opinion. Unending exponential growth? What
drugs are those people on? I mean, really..

It's like Moore's law - yeah, it's very impressive when something can (almost)
be plotted on an exponential curve for a long time. Very impressive indeed
when it's over many decades. But it's _still_ just the beginning of the "S
curve". Anybody who thinks any different is just deluding themselves. There
are no unending exponentials_

1.
[http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question](http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question)

~~~
tim333
I don't think singularity stuff relies on exponential growth lasting for ever.
Only for AI performance to significantly exceed human. And I'm not sure
there's anything magical about the human brain that will stop tech achieving
similar performance before it runs into barriers.

~~~
daveloyall
Right.

I'll attempt to bolster your argument, if I may...

We know that human and animal brains are made out of simple components.
Probably the same components.

The difference between animals and humans seems to be in how those components
are arranged and how many of them there are.

We've already demonstrated digital components that approximate individual
neurons, and we've already demonstrated wiring groups of them up to perform
useful work.

I think that if we find ourselves at an inflection point in history, it is
because there might exist some relatively simple arrangement of digital
neurons which results in a machine which can learn some sort of rudimentary
self-awareness[1] while also possessing a computer's ability to do math, store
and recall data, communicate on the internet, etc.
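
(To make "digital components that approximate individual neurons" concrete --
a toy illustration of mine, not anyone's actual model -- each unit is just a
weighted sum plus a threshold:)

    #include <stddef.h>

    /* A toy artificial neuron: weighted sum of inputs plus a bias,
     * then a hard threshold. Real networks use smoother activations
     * and learn the weights, but the basic component is this simple. */
    double neuron(const double *inputs, const double *weights,
                  size_t n, double bias) {
        double sum = bias;
        for (size_t i = 0; i < n; i++)
            sum += inputs[i] * weights[i];
        return sum > 0.0 ? 1.0 : 0.0;   /* fires, or doesn't */
    }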

1. Spend a year of days with a newborn if you don't already know that humans
have to learn it, too.

~~~
TeMPOraL
Another thing is that we've figured out a medium that can compute faster than
human and animal brains. In theory, a brain mimicked in silicon should run
faster than a wet one, and that by itself, coupled with some optimized access (i.e.
not eyes OCRing text) to information would be enough for an entity to be
smarter than humans. Of course one shouldn't say that a silicon brain would be
_better_ than a wet one - by doing computing the way we do it, we're
sacrificing a lot. Biological systems are more resilient, can repair and
replicate themselves without a need for complex processing industry we have
(to be honest, the ecosystem is sort of an industry, but it's already well
developed and much better integrated than the human-made one). But there are
things we could trade off for increased intelligence when designing our own
minds.

------
Animats
Well, the biggest current problem is that everything had to be redesigned for
fat fingers and small screens. The worst expression of this was Windows 8,
which made desktops look like tablets, and was rejected by the market.

Then there's cutesy web design; Flash ten years ago, "material design" now.
Just because you can animate everything doesn't mean you should. (Annoyingly,
the one browser thing that ought to be animated isn't. When you click on a
link that exits the page, nothing happens until the new page loads, often
leading to unnecessary double clicking. The browser should blank the old page,
or grey it out, or dissolve to the new page, or do some kind of transition.)

As for the big stuff, the AI-driven future is going to be interesting. The big
threat, as I point out occasionally, is not Terminator robots. It's Goldman
Sachs run by a machine learning system optimizing for maximum stockholder
value.

~~~
bergie
 _When you click on a link that exits the page, nothing happens until the new
page loads, often leading to unnecessary double clicking. The browser should
blank the old page, or grey it out, or dissolve to the new page, or do some
kind of transition._

This is actually something Google made a proposal about:

[http://www.androidpolice.com/2014/11/20/chromes-upcoming-navigation-transitions-api-will-make-the-web-animate-and-flow-like-material-design-in-android/](http://www.androidpolice.com/2014/11/20/chromes-upcoming-navigation-transitions-api-will-make-the-web-animate-and-flow-like-material-design-in-android/)

~~~
Animats
It's not making it cute that matters. It's making it _fast_. With modern slow,
bloated web sites, there are much longer delays between clicking on a link and
the page display than there used to be. Especially on a slow mobile
connection. Something needs to visibly happen as soon as you leave the page,
even if the browser hasn't even received any packets from the new site yet.

------
flohofwoe
From the article:

 _Soviet engineers lacked the computers to calculate all the bending and
wiggling the wings would do if you hung the engines under them, so they just
strapped engines on the back._

This sounds like a myth to me; is there any evidence that missing computer
power was the reason for tail-engine designs? There were a number of
western civilian jet planes from that era with tail engines too (for instance
the Boeing 727). AFAIK the wing-mounted engines won in the end because they
are easier to maintain, which might not have been such a strong factor in the
'50s and '60s.

I find it more likely that Soviet design bureaus didn't pay much attention to
operating efficiency and put more effort into building planes that can operate
from 'rougher' air strips, etc. But I'm not an aeroplane expert.

~~~
idlewords
Joe Sutter (father of the 747) recounts a private meeting in a Paris
restaurant where Boeing engineers briefed the Russians how to hang engines
under the wing, and in return the Russians told Boeing how to machine
titanium.

Many of the relevant drawings having been made on the tablecloth, the Russians
were careful to take it with them.

------
kevin_thibedeau
Another nice parallel with the 747/2707 is that the Core line was derived from
the Pentium M which was done by Intel's B-team in Israel while the big boys
got to work on the future with Itanium and P4.

~~~
idlewords
Oh wow, thanks so much for pointing that out! I wish I had known that when
writing the talk.

------
jacquesm
I'd be more than happy to contribute to the sustained applause; this echoes
just about all of my sentiments about the web.

And it is a call to action, not just a 'nice to read' piece.

So, what are we going to do about it?

------
dghf
> Software forever remains at the limits of what people will put up with.

That needs immortalising. Cegłowski's Law?

------
_mgr
"We see a whole ecosystem of startups and businesses that seem to exist only
to serve one other[sic]"

This line should have "...with ads" appended to it. Or "...Javascript
Frameworks".

------
scottfits
This was a great read. In my mind, it reminded me of the true underlying
simplicity of everything on the web. He's right--we are riding the shockwaves
of the computer revolution, and it's fizzling out. The transistor is getting
smaller (I think Intel's newest one is 7 nanometers) but soon we won't be able
to physically make it any smaller. Does this mean the computing revolution is
over? No. I think it just means the future is going to be about combining
computing with other fields like medicine, the arts, and so on.

------
hrayr
This seems to be an older version of the talk.

[https://www.youtube.com/watch?v=nwhZ3KEqUlw](https://www.youtube.com/watch?v=nwhZ3KEqUlw)

~~~
dugditches
At the top of the page

" This is the expanded version of a talk I gave on September 9, 2014, at the
HOW Interactive Design conference in Washington, DC. "

------
nly
This guy's dig at AI is a little conflicted with the exponential hangover idea.
Sure we can only simulate a 300 neuron worm right now but, if we ever achieve
a computationally bound solution and experience exponential growth at a rate
similar to Moore's Law, in 50 years we could be watching real AI cat videos on
Youtube, complete with awful UI controls that customise the feline personality
in realtime. Oh, in Javascript of course.

~~~
scott_karana
> if we ever achieve a computationally bound solution and experience
> exponential growth at a rate similar to Moore's Law,

His point is that we _won 't_ continue the curve, and he's offered
evidence/opinion that we're already starting to plateau.

Which means AI would need to be solved within ~current computational bounds.

~~~
tim333
We're plateauing on single processor performance. Not on parallel computing.

~~~
scott_karana
Not physically, not yet.

But it seems like high-performance parallel computing paradigms are forgotten
and reinvented every year.

~~~
tim333
Yup. Better software needed. But as an indication of how close we're getting,
Hans Moravec, a robot builder and robotics professor, did a quite reasonable
calculation of the processing power needed to build something equivalent to a
human brain, using computer-style algorithms rather than nerve simulation, and
came up with 100 million MIPS, i.e. 100 teraflops
([http://www.transhumanist.com/volume1/moravec.htm](http://www.transhumanist.com/volume1/moravec.htm))

If you compare that with 2015 hardware, the Nvidia Titan X GPU does about
7 teraflops and costs about $1000, so with 15 Titans and a $20k system a
hacker can have a reasonable go at building a human-level AI (excluding memory
hardware, that is). It's only recently that that kind of power has come down
to hacker budget levels. It'll take a while to sort the software.
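
(Back-of-the-envelope, using the figures above:

    100 million MIPS = 10^8 x 10^6 instructions/s = 10^14 ops/s ~ 100 teraflops
    15 Titan X cards x ~7 teraflops each = ~105 teraflops, for ~$15k of GPUs

so the claim roughly checks out, modulo treating instructions and flops as
interchangeable.)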

Though some recent software stories like
[https://news.ycombinator.com/item?id=9584325](https://news.ycombinator.com/item?id=9584325)
and
[https://news.ycombinator.com/item?id=9736598](https://news.ycombinator.com/item?id=9736598)
show progress.

~~~
scott_karana
Awesome! Thanks for those links. :)

~~~
tim333
I found this vid quite interesting about Deepmind which kind of shows where
things have got to. Their AI algorithm can learn to play Space Invaders and
Breakout better than people starting from just being fed the pixels. It
doesn't do well at Pacman though because they have not cracked getting it to
understand spatial layout and planning ahead.
[https://www.youtube.com/watch?v=xN1d3qHMIEQ](https://www.youtube.com/watch?v=xN1d3qHMIEQ)

~~~
tim333
Update: Nvidia's Pascal GPU should be 28 teraflops, out in 2016, and given much
of the brain is not active at one time, that's probably getting to similar
processing power. It uses 1.3kW so you can heat your room with it too.

------
Houshalter
Ok this article is good but the tangent into the singularity is just stupid:

>In fact, forget about worms—we barely have computers powerful enough to
emulate the hardware of a Super Nintendo.

He just disproved his own point: Emulation is hard, but it's possible to run
much faster software directly optimized for your system or build it in
hardware.

>If you talk to anyone who does serious work in artificial intelligence (and
it's significant that the people most afraid of AI and nanotech have the least
experience with it) they will tell you that progress is slow and linear, just
like in other scientific fields.

What?! Has he actually talked to anyone doing "serious work in artificial
intelligence"? I am certain they would not say it is "slow and linear".

------
TexMitchell
The presentation is available for viewing here:
[https://youtu.be/nwhZ3KEqUlw](https://youtu.be/nwhZ3KEqUlw) I must say I
think the written one is better. It has additional information that I found
interesting.

------
mwcampbell
He criticizes his second vision, the Silicon Valley vision of software eating
the world. But where I live (Wichita, Kansas), Uber is actually better than
the conventional taxi services. As in, when Uber was temporarily shut down in
Kansas and I tried to call a conventional taxi to my home, I waited over half
an hour, it never showed up, and I had to cancel. So the second vision
resonates with me, and I don't even live in Silicon Valley. What am I missing?

Edit: It's not just one anecdote. To quote the OP:

> We started with music and publishing. Then retailing. Now we're apparently
> doing taxis.

Yes, and software has made each of these better. The vision of software
improving the world seems to actually be working.

------
acqq
It was not obvious to me at the start that this was written by Maciej Ceglowski:

[http://idlewords.com/about.htm](http://idlewords.com/about.htm)

the guy behind Pinboard!

------
vixsomnis
I think the web (www) won't change much at all, but that doesn't mean the way
we interconnect with others won't.

This seems ultimately shortsighted. Human "progress" (increasing
interconnection -- such as urbanization and population density) has been
growing exponentially[1] since the birth of agriculture allowed humans to form
societies.

Relative to this kind of momentum, the web is only a hint of the
interconnection and globalization to come. The web may stagnate and become
self-serving (as opposed to being useful), but this has no bearing on whether
we'll eventually be able to simulate a human brain in its entirety or reach
some {uto,dysto}pian future where all of our consciousnesses are somehow woven
together.

I'm not saying that technology will be our salvation, but the graphs and
trends don't lie. We are at a very interesting time in human history: we are
either going to continue interconnecting exponentially or there will be some
catastrophic event. There's no room on those curves for a plateau.

Whether humanity passes the torch of technological innovation to AI
(voluntarily or coerced) or we suffer a self-wrought apocalypse, I don't know,
but there is certainly reason to fear the changes the future will bring.

[1] [http://www.businessinsider.com/human-progress-charts-2011-3?op=1](http://www.businessinsider.com/human-progress-charts-2011-3?op=1)
(This link isn't specifically tied to my argument. It's just the first set of
graphs I found illustrating the long-term growth of humanity.)

------
rsync
What a nicely designed website. Readable, usable, doesn't peg my CPU or do
weird things as I scroll.

I viewed the source and it is clean and readable - no evidence of some lame
IDE sticking in forty &nbsp;'s or twenty nested tables just because you wanted
your column two pixels to the left.

I wish more of the web looked like this and it makes me happy to see that some
smart folks value design like this.

------
SZJX
The criticism of vision 2 is weak. He didn't offer many examples of how
this vision made the world a worse place. I also don't see how it is
comparable with "scientific Marxism" or whatnot. The historic examples he
listed there, while calling themselves "scientific" on the surface, were
often a mess born out of imagination. Now, as long as changes happen out of
the laws of the market, out of people actually using the products and
appreciating the benefits, I don't see it being much of a problem. Technology
is doing a lot, and, as has been well said, change in productivity is the
only real way to progress human society. We shouldn't relent on this front.
It's not "colonialism" or whatever. On every leap of productivity that human
beings make, there _has to_ be some group of people who are leading the job.
This is not inherently evil.

------
metasean
I'm glad I have Pinboard to save this article to [1]. And hopefully it will
still be at the same url in 8 years [2].

[1] The author of the article runs Pinboard.in

[2] From the article, "What I've learned is, about 5% of this disappears every
year, at a pretty steady rate. A customer of mine just posted how 90% of what
he saved in 1997 is gone."

------
vlunkr
This is my favorite: "The world is just one big hot mess, an accident of
history. Nothing is done as efficiently or cleverly as it could be if it were
designed from scratch by California programmers. The world is a crufty legacy
system crying out to be optimized." Sadly this is the way I think sometimes.

~~~
TeMPOraL
Just remove the "designed from scratch by California programmers" and tell me
how this quote isn't blindingly, obviously true. The world is a mess and
screams to be fixed, and not only technologists see it.

~~~
vlunkr
Eh, maybe if you are a very cynical person, which is the point the author was
trying to make.

------
veddox
Finally a technology prophet who _doesn't_ prophesy digital utopia! Although
I can't imagine that everything he says is correct (purely statistically
speaking), I do think that if anybody reads this in twenty years he/she will
recognize his/her present fairly well.

P.S. Is this page being archived anywhere?

------
ccanassa
The "is good enough" mentality might be true if you are talking about browsing
Facebook, reading email, etc. But other areas of the technology are still very
far from being "good enough" and still pushing technology forward. The gaming
industry is one great example of this.

~~~
babatong
I don't think the gaming industry is a good example for what you're saying.
Video games have barely shown any development since the mid 2000s.

The first Crysis game was released in 2007 (8! years ago) and despite being an
unoptimised mess is still graphically superior to most things released today.

Almost all Windows games released are still primarily built on DirectX 9 (with
a few optional newer components), despite DirectX 12 coming with Windows 10.

In fact, the major development studios have decided that the performance
offered by cheaply mass-produced consoles with yesterday's hardware is good
enough. Why do you think specialised PC gaming communities are complaining
that consoles are holding development back?

If anything I'd say the gaming industry is the embodiment of the "is good
enough" mentality.

~~~
ccanassa
Not sure If I completely agree with you but I understand your point. I just
think that there is still a few companies pushing things forward. Star Citizen
is a good example of a game doing that. 4K displays is also something that
might give a boost to the current GPU technology.

One funny note: I just checked the Crysis system requirements and I needs 1GB
of RAM to run. That's not enough RAM to run even Minecraft these days :)

------
Totient
I agree and disagree with the author.

> "Vision 1: CONNECT KNOWLEDGE, PEOPLE, AND CATS."

> This is the correct vision.

I would say this is _a_ correct vision, which I happen to be in favor of.

But I don't understand why it has to be an "us vs. them" dynamic between this
and the "BECOME AS GODS, IMMORTAL CREATURES OF PURE ENERGY LIVING IN A
CRYSTALLINE PARADISE OF OUR OWN CONSTRUCTION" vision.

Even in strawman form, I'm unapologetically in favor of it. I do want it to go
_right_. I don't think it's going to happen anytime soon - I think human-level
AI by 2075 [1] is wildly optimistic - but I hope it does happen eventually,
_without_ wiping out everything we hold dear.

> I'm a little embarrassed to talk about it, because it's so stupid.

My first thought was "Try describing the internet to someone 100 years ago -
your claim that there is going to be an interconnected global network of
electricity-powered adding machines that transport pictures of moving sex by
pretending they are made of numbers is going to sound stupid."

But if you want to make fun of Elon Musk because "Obama just has to sit there
and listen to this shit", what about:

Shane Legg: "If there is ever to be something approaching absolute power, a
superintelligent machine would come close. By definition, it would be capable
of achieving a vast range of goals in a wide range of environments. If we
carefully prepare for this possibility in advance, not only might we avert
disaster, we might bring about an age of prosperity unlike anything seen
before."

Stuart Russell: "Just as nuclear fusion researchers consider the problem of
containment of fusion reactions as one of the primary problems of their field,
it seems inevitable that issues of control and safety will become central to
AI as the field matures."

Or freaking _Alan Turing_: "There would be plenty to do in trying to keep
one’s intelligence up to the standards set by the machines, for it seems
probable that once the machine thinking method had started, it would not take
long to outstrip our feeble powers…At some stage therefore we should have to
expect the machines to take control."

> But you all need to pick a side.

I don't want to pick a side. I'm in favor of connecting the world now, making
it better for everyone. I'd also like to see the world get _much_ better in
the more (hopefully not _too_ ) distant future.

[1]
[http://www.nickbostrom.com/papers/survey.pdf](http://www.nickbostrom.com/papers/survey.pdf)

~~~
noamyoungerm
>My first thought was "Try describing the internet to someone 100 years ago -
your claim that there is going to be an interconnected global network of
electricity-powered adding machines that transport pictures of moving sex by
pretending they are made of numbers is going to sound stupid."

Because you're limiting yourself to a single sentence. How about this: "The
internet is essentially an expansion of the concept of a telegraph - it is
already possible to pass any type of information between two distant people
almost instantly. The internet combines this system of information transfer
with machines that know how to respond to messages without human intervention
- under the condition that the messages follow a certain set of rules. By
defining these rules in advance, the telegraph system, which up to now only
transferred individual letters, can be co-opted to send things such as
pictures and films. The machine can tell who is speaking with it, and perform
tasks for that particular person upon request."

This does not sound stupid - it might take some explaining, but anyone with a
brain between their ears can understand the basic concept of fast information
transfer. They might not think of all of the possible uses of such an
invention right away, but I bet that they would easily understand the uses if
they were explained in terms of things these people already know.

~~~
marcosdumay
Well, Totient's main fault was that he didn't pick a big enough time
interval.

Make that 150 years, and you couldn't talk about information running through
the lines, because people's understanding of "information" was a completely
different (and much less powerful) concept.

Make that 200 years, and people that were trying to make machines react to
well-formed messages were as criticized as AI proponents are now.

------
stasm
Maciej makes a great point about diminishing returns. We're quite good at
extrapolating trend lines and yet we're terrible at predicting the future
because it's very hard to guess where optimizations will happen. The 'good
enough' seems to be the bane of any futuristic outlook. Technological progress
seems to obey the laws of friction; it behaves like light which refracts when
it enters a denser substance.

I'm still learning to embrace the 'good enough.' For instance, minimalism only
works if it's good enough for the users. I feel like this predicate is often
forgotten and we end up with designs which are minimal for their own sake.

------
kleer001
> The first group wants to CONNECT THE WORLD.
>
> The second group wants to EAT THE WORLD.
>
> And the third group wants to END THE WORLD.
>
> These visions are not compatible.

Correct. Just as Christianity, Judaism, Hinduism, etc. are not compatible. But
let's just look around: there's lots of incompatible things going on. The
future can never be evenly distributed. So, parts of all these futures will
live on in stumbling synchrony. And I say they're all driven by internal logic
and pure fan-boy squeee.

> There's no law that says that things are guaranteed to keep getting better.

This seems like it should be written in stone and placed over the mantle.

------
queryly
This is a great article. Great writing!

Software may eat the world, or it may not. I don't think most startups are
doing what they do to realize that goal. All they do is solve a specific
problem with technology. Of course if the "problems" aren't there, they would
fail.

I don't know what the action items are for the tech communities after reading
the article. We find something interesting to do with our tech talent, and
aren't really trying to choose which future we want to live in.

Regardless, Maciej has some great insight and offers a different
perspective on tech/the web.

------
tempodox
Yay, I couldn't agree more. Those truths are seldom mentioned, and even more
seldom so many of them at a time.

If only I could find out who gave this talk; there doesn't seem to be any hint
or signature.

~~~
klez
It was from Maciej Ceglowski, founder of Pinboard.

------
emehrkay
As a web developer I read this and think "progressive enhancement." HTML & CSS
are designed in a pretty decent way: ignore the stuff it doesn't understand.
The second half of this article makes me want to do better. Write better
software, create things that matter and will last long(er).

Very entertaining and funny article, btw.

------
dwaltrip
This was an interesting article with some well made points.

However I think there is a fundamental difference that is being overlooked
here. Whoever invents the next user interface paradigm can do so with
essentially zero cost (assuming they own a laptop). The same cannot be said
about inventing the next paradigm of travel.

------
TeMPOraL
I think Maciej is actually confusing two different groups of people/concepts,
conflating them both under the same label of "Vision 2: FIX THE WORLD WITH
SOFTWARE".

One group wants to actually fix the world with software (or technology). The
other wants to monetize the living shit out of it. A big part of the startup
ecosystem is the second group. A fridge that tracks its content is a very
useful piece of technology (that is also nowhere to be bought, by the way). A
fridge talking on Twitter is someone's attempt at selling more fridges.

The products 'idlewords is criticizing are not made to be useful, they are
made to be sold at a profit, which often goes _against_ usability. It's
clearly visible in the IoT sphere. Most of the popular devices are just toys
sold to the gullible, who have no idea how to use them to better their lives. A
lot of this stuff shouldn't even be web-connected, especially not to a third-
party server. It's all done because the company is making more money on your
data than on selling the devices themselves. A tell-tale sign of a company not
really designing a tool but just trying to monetize you? No ability to export
your data for your own analysis, or to hook up to the data stream at the same
level of access as the vendor's app has.

Please don't confuse people trying to fix the world with software with people
trying to make a buck while telling you lies about fixing the world.

--

As for technology being "good enough", I think that being good enough is a
state, not a goal. Anything there is, is always good enough for whatever people
are doing with it. Fixing the world is a goal (which I support). Creating a
better future is a goal. What we now have is mostly people blindly following
potential gradients set by markets, like water flowing downhill.

--

I'm not going to comment much on "Vision 3", because the description seems -
to put it very nicely - not well thought out, and just a regurgitation of
popular media biases. I'll just leave a FAQ from people believing in AI risks
that addresses them [0], as well as some quotes from real AI researchers that
also believe in AI risk [1] (the media seems to paint the picture that there
are no prominent figures in the field who hold that opinion). Also [2] may be
of interest.

[0] -
[http://lesswrong.com/lw/meq/top_92_myths_about_ai_risk/](http://lesswrong.com/lw/meq/top_92_myths_about_ai_risk/)

[1] - [http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/](http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/)

[2] - [http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/](http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/)

------
spopejoy
OK I'm just going to say it. This author SERIOUSLY needs to consider making
their prose gender-neutral. I could care less what the time period is, it's
hostile to women to talk about the engineer "as a boy" and talk about the
787's "grandfather". It's freaking 2015, GET OVER the all-male pronoun
business already!!

~~~
idlewords
The scenario involves a notional Boeing engineer from 1965. My hands are
really tied.

Point taken on the 'grandfather' bit, though!

------
baddox
> The Concorde entered commercial service and safely ferried douchebags across
> the Atlantic for 25 years.

Why this? I mean that seriously. Forget that it's unnecessary. I don't even
get the stereotype being referenced.

~~~
ajdlinux
Because a seat on Concorde cost significantly more than a _regular first-class
seat_ on a transoceanic flight. If you had enough money to afford it...

~~~
jkoschei
...you must be a terrible person?

The Concorde/douchebag line was a good segue into the logical inconsistencies
of the second part of this talk.

~~~
pessimizer
> ...you must be a terrible person?

Yes. You may not share the opinion, but surely you understand the reasoning.

------
zubairq
I agree with this article. Old things are sometimes the best:

[http://coils.cc/coils/old-things.html](http://coils.cc/coils/old-things.html)

------
Jean-Philipe
Well written, fun to read. Thanks!

------
nulltype
If you like this dude (or maybe if you don't like him) you can help send him
to Antarctica: [https://www.kickstarter.com/projects/431908798/send-idle-words-to-antarctica](https://www.kickstarter.com/projects/431908798/send-idle-words-to-antarctica)

~~~
codinghorror
Yes please, he is one of the best writers on the web in my estimation. Plus, he
runs Pinboard. I backed it, you should too!

~~~
look_lookatme
I don't back Kickstarters that often but in this case idlewords is getting 200
bucks from me.

------
mozumder
I love this so much. So glad someone wrote this!

I really can't believe how "into" tech people are. It's just a boring tool for
your life, not an end-goal in itself.

BTW I feel the only leading company that is aligned with this vision is Apple.

~~~
_mgr
"BTW I feel the only leading company that is aligned with this vision is
Apple."

I wanted to say something to this effect but the last 12-18 months have shown
that Apple is losing its way also.

~~~
marincounty
Yea, I'm beginning to think Steve Jobs was Apple? Apple needs a chief who
lives for gadgets, demands perfection, and most importantly grew up poor, or
middle class. I don't think there's too many Steve Jobs out there? You can't
produce another Steve Jobs with a good resume, or education. I think that
rebellion in him, maybe because he lived through the 60's, is missing in many
CEO's? He was really one of a kind! I always thought I wouldn't want him as a
friend, but I would want him running my company. (I'm not condoning angry
perfectionists who happen to be in power--you people are not Steve Jobs--
no matter how hard you try.) In my life, I have only seen one Steve! RIP!

~~~
tim333
Quite a lot was Jony Ive. I'm optimistic the good design will go on.

------
ucaetano
"What is the web actually for?"

I'm surprised he left out "porn"...

~~~
marknutter
I think "cats" in this context refers to all the stuff on the web that doesn't
seem important, but that people actually like to use the web for; including
pornography.

~~~
TeMPOraL
I think a good way to drive a joke home would be to say that "web is for
pussies" while displaying photo of a pussycat.

------
MichaelCrawford
I often ask other coders to contemplate "What software that we write today
will still be in use 10,000 years from now?"

That I am unable to answer that question is a primary reason I have written
English text rather than code for the last ten years.

~~~
sangnoir
Your English text (and English in general, as we know it) will not be in use
in 10,000 years.

There seems to be a widespread, almost instinctive desire to achieve
permanence (works or monuments) in humans, which I do not fully understand.
People want to make a dent in the universe, to be 'remembered as heroes' etc.
_Nothing_ lasts forever, not your works, not your memory. All will be lost in
the mists of time.

I don't see this as a reason for nihilism either, you can make a difference
with your efforts - it just won't last forever (or even 1,000 years, let alone
10,000). That's ok.

~~~
coldtea
> _I don 't see this as a reason for nihilism either, you can make a
> difference with your efforts - it just won't last forever (or even 1,000
> years, let alone 10,000)_

Urgghh, the inventor of fire disagrees...

~~~
tonyedgecombe
What was his name?

~~~
coldtea
I said it: Urgghh.

------
Cerium
As soon as I saw the simple design I was hoping that the design would
'improve' as I scrolled down the page.

~~~
nulltype
But instead the words just blew your mind as you scrolled down the page.

~~~
tudorw
I didn't even need to look at the pictures :)

~~~
dredmorbius
On mobile, I couldn't (without side-scrolling).

But yeah, the writing is great.

