
The Fifth Generation Computing Project - o_nate
https://scottlocklin.wordpress.com/2019/07/25/the-fifth-generation-computing-project/
======
gumby
This entertaining essay captures things well without getting distracted by
some of the really funny stories, many of which cannot be repeated as the
protagonists are still alive.

Programming in Japan is typically a low-status job, considered a blue-collar
occupation (Nintendo and SCEI are notable exceptions), despite Japan having
produced quite a few globally notable computer scientists. I remember visiting
a 5 Gen project in 1984. I was shocked by the open-plan office (how did anyone
get any work done?). Along the rows of desks there was one ASCII terminal for
every two desks (connected to a KL-20, I assume, but I can no longer
remember). I spoke with one of the developers, asking him what he thought of
the environment: "Great: I only have to share the terminal with one other
person".

The next year I myself was employed at MCC in their AI group (they had a
database and CAD group, and perhaps one other) working for Doug Lenat and
designing the Cyc system (when I got there they were working in Interlisp on
Dandelions, not surprisingly as Doug's office had been around the corner from
mine at PARC). The first thing I did was toss the Interlisp implementation and
redo it for a Symbolics 3600.

5Gen and MCC were actually the second and third of a wave of transformational
hype efforts; the first was the Centre Mondial Informatique et Ressource
Humaine (World Center for Computer Science and Human Resources), launched in
France in 1983 on the back of an influential book by a major French
intellectual, Jean-Jacques Servan-Schreiber. They had a KL-20, a Vax (with a
copy of BSD I carried over there in
my luggage) and a few CADR Lispms (two or four, I can't remember). With swanky
offices in central Paris around the corner from the presidential palace and a
lobby full of Apple 2s that anyone could walk in and try out they didn't do
much for the third world I'm afraid, idealistic goals notwithstanding, but
they did launch quite a few very good computer scientists, so were probably
worth the effort.

So I worked for CMIRH and MCC, and spent some time at 5Gen. Fun times.

~~~
kragen
A funny thing is that both MCC and the Fifth Generation Project were fairly
unsuccessful at actually launching the future. My hypothesis is that it's
because they were trying to plan research, which is to say, come up with a
plan up front for what they were going to do, and then carry it out, rather
than improvising based on what was learned in the process. I suspect the
massive amount of planning and investment required by modern drug approval is
a major cause of the drug industry's dramatic lack of progress in recent
decades. Or, as the Goblin Queen put it in her comment below, when industry
takes over science, we lose the science.

The 1980s computer systems research projects I think of as the most successful
were dramatic advances in semiconductor fabrication, TCP/IP, microcomputers,
RISC, Macintosh, Emacs, Postgres, HDLs, GCC, SGI, MOSIS, SUN, TeX, Perl, Plan
9, Cedar, generational GC, OOP, NASDAQ, Pixar/CGI movies, and wavelength
division multiplexing. (Maybe my vision is a bit limited and provincial, and
I'd be interested to hear what others think.) A lot of these were government-
funded, but not with top-down goals. (It might sound ridiculous, but I suspect
Perl may have been the most important advance to come out of JPL in the
1980s.) Others were industry-funded. Two were mostly funded by one stubborn
idealist, at least until you got involved.

As someone who's worked both on very successful research projects (BSD and
GCC) and unsuccessful ones (Cyc), as well as in biomedical research, what do
you think makes the difference? Was it predictable in 1984 that Japan would
fail?

~~~
gumby
It seemed likely to me that Japan, and in particular MITI, couldn't continue
on as they had been, but I was an outlier and so thought I was probably wrong.
Don't forget the US was just emerging from the stagflation of the '70s, and
while it has addressed that one macroeconomic phenomenon, it hasn't made any
progress on any of the things that led to it.

So having said that, 5Gen was a "moonshot" from a bunch of bureaucrats who
could see the writing on the wall, but mistakenly let it get hyped. Compare
that to ARPA who used to place a whole bunch of batshit crazy bets that were
clearly nonsensical though being done by legitimately smart people: that gave
us the Internet, among other things. They also funded infrastructure to get
there (e.g. MOSIS).

Since then (the now-named) DARPA and corporate labs have lost interest in much
blue-sky work. I don't agree that "when industry takes over science, we lose
the science": corporate research gave us the transistor, the semiconductor,
the SEM, the WIMP interface, etc. It could again. Pharma is one of the few
fields where fundamental research is still done.

(I wouldn't say regulatory approval has had much of an influence on the cost
or output of pharma research, BTW; getting meaningful results is _hard_, and
if you want drugs that are better than the standard of care you need to do
that work. I do think the regulatory framework around marketing and
reimbursement has had a much larger impact on which API candidates get
advanced into programs.)

I'm biased in what I consider fundamental; I'd agree with you on TCP/IP, MPUs,
RISC, the Mac, generational GC (another corporate research result, BTW) and
fab. (WDM is another corporate invention, about 100 years old.) Some of the
others I consider less fundamental or less influential (Cedar? CLU was far
more influential even if no real programs were written in it).

The fact is, research is hard, and the skills of basic research and the skills
of translational research are rarely present in the same person.

~~~
kragen
Thank you very much!

Do you think the ARPA approach is better, or better for certain cases? The
accounts I've read of the invention of WIMP and transistors resemble “bunch of
batshit bets by smart people”, if I have the story right, but I don't know
anything about the SEM and semiconductors. (Weren't semiconductors Bose in the
1890s?)

A lot of my list clearly was pretty applied; I didn't mean to suggest they
were all basic.

Cedar in particular I mentioned because it inspired Oberon, Java (né Oak), in
some sense Xi (and ropes in lots of other systems), and maybe sam and the
plumber, though maybe that was convergent Blit evolution. But certainly that
impact is small compared to the LSP. (Were Smalltalk iterator methods
convergent evolution or were they taken from CLU?)

I'm very surprised at your view on pharma. Sounds like my opinions need to
change.

~~~
gumby
Well, life is a huge parallel process that requires a diversity of inputs, so
you need blue sky, "stupid" stuff, boring spadework and directed development,
all at once. (And those definitions change: when fields go through their
"stamp collecting" phases, as taxonomy/biology did in the 18th and 19th
centuries, _that_ is the exciting work. High energy physics is stuck there at
the moment, again.)

So yes, we do need someone like the ARPA of old. Licklider, Bob Taylor, and
the like are heroes in my opinion and I don't really see their like today.

------
foobiekr
My first degree was in Japanese, mostly because I already knew how to program
and had, since I was a home computer kid, been doing so for years. I did a
Japanese major because Japan looked like it was going to take over the world
in 1989, especially in technology. It was a foolish, foolish decision.

However, it gave me a ton of exposure to the 5th generation project, and this
article is a wonderful reminder of how crazy it was, though it doesn't quite
capture the insane hype around the whole thing.

There's an excellent, if imperfect, book called "The Fifth Generation Fallacy"
by JM Unger (who, in a weird circumstance that makes me think the writers of
our narrative are just plain lazy, a few years later, became the head of the
Japanese department I was in) that talks about the project in terms that I
think make quite a lot of sense.

The book talks about it, but I go even further: the 5th gen project was about
Japanese beliefs about information technology, and specifically the belief
that Japanese was such a complex language that the only way to have working IT
was to have an intelligent machine.

Japan was still using pencils and Xerox machines in the aftermath of the PC
revolution in the US and Europe (in practice, this continued into the
mid-'90s), and had missed mainframes and minis entirely. The idea of a
statistically driven, convenient, modern input mechanism (what people use now
when they use an IME; these started to show up in the mid-'90s, when enough of
a corpus had been analyzed to make them work) was not there yet. Japanese word
processors of that era were incredibly cumbersome and terrible, just barely
usable, and not usable at all by large swaths of the workforce. Japan was
absolutely terrified of the technological progress in the rest of the world,
and a lot of that had (and has) to do with the simple inconvenience of the
writing systems and the country's linguistic isolation.

In the cultural context, especially of the bubble that formed at the same
time, and no small amount of vaguely racist beliefs about their superiority
(especially linguistic, which had been going on for a very long time and
played a role in Japanese relations and eventual behavior post-annexation of
Korea), the 5th gen project makes perfect sense.

~~~
wrp
Unger's book doesn't get the attention it deserves, maybe because he is a
humanities guy. I just wanted to add that response to his book was
(predictably) very hostile, much like with Hubert Dreyfus' _What Computers
Can't Do_. Unger offered well-reasoned criticism early in the hype cycle, but it
seems to have had no influence on how people responded to the FGCP.

------
YeGoblynQueenne
>> If you want to read about the actual tech they were harping as bringing MUH
SWINGULARITY, go read Norvig’s AIMA book.

This was still the main AI textbook in 2014 when I did my MSc in the subject.
It is what is rightly called a "seminal text" and there is not a single trace
of Singularitarianism in it. It's not just "Norvig's book". Its authors are
Peter Norvig and Stuart Russell. Is Stuart Russell ignored because he is not at
Google?

The article really needs to cool down on the hindsight triumphalism. A lot of
people were very optimistic about AI in the '80s (as earlier) because
significant and continued progress had been made: scientific progress, the
continuation of the work of the logicians in the '20s and of Church and Turing
in the '30s. Sure, people in the industry just wanted to make a quick
buck, as they always do. But interesting science was being done and it is a
big loss that the AI winter disrupted its course. Not because it could have
led to autonomous vehicles and conversational agents. What we lost in the
winter was the opportunity to discover new knowledge. That is a tragedy.

So I really don't understand where all the schadenfreude comes from, in the
article. A big commercial project failed and took with it a whole branch of
computer science. Who, exactly, benefited from this?

~~~
gumby
> So I really don't understand where all the schadenfreude comes from, in the
> article. A big commercial project failed and took with it a whole branch of
> computer science. Who, exactly, benefited from this?

It wasn't a big commercial system; it was a big _government_ system (you
probably can't imagine the degree of shock and awe that greeted MITI people,
or even the mention of the ministry, in the US). This was when Japan was going
to destroy America and, goddamnit, Something Must Be Done.

Of course, as happens with such projects, it all comes tumbling down, but
usually with a whimper. Apollo is another good example: they hit their goal,
and then what? 5Gen was the more common case: it just wasted away.

The schadenfreude comes not just from the central-planning hubris but from the
American freak-out response to what was really hype and a damp squib. The
author says the same thing is happening today, with the hype-o-meter on 11,
again in the AI field, and again with an "Asian menace" on the horizon, this
time China.

5Gen just rode a hype machine though; it didn't take the field out -- we could
all see that coming anyway.

~~~
YeGoblynQueenne
>> It wasn't a big commercial system; it was a big government system (you
probably can't imagine the degree of shock and awe that greeted MITI people,
or even the mention of the ministry, in the US).

Sloppy turn of phrase on my part.

I do understand the reaction of the US, and not just the US's. I did my CS
degree in 2007 and I made heavy use of my university's library, which had a
solid selection of Prolog textbooks, which I devoured. At some point I noticed
that a few of those textbooks (the more industry-oriented ones) had prefaces
stating that Prolog was a very important language to know because it was
poised for world domination thanks to the FGCP. Curious, I dug a bit further
and found that book, the one from the American journalist woman and the man
from the semiconductor business. If memory serves, it portended doom unless
the US spent a few billion overtaking the Japanese. Later, I found out what
happened after, but I got a pretty good idea of how people in industry in the
US saw the whole thing, and it has helped me understand why logic programming
was relegated to a mere niche in CS following the abandonment of the FGCP.

The similarities with the present hype cycle are not lost on me, either.
However, this is what happens when industry takes over science. We lose the
science. There is nothing here to celebrate, or laugh at. Like I say above,
missing an opportunity to discover new knowledge because some idiots want to
make a quick buck is a tragedy.

~~~
kragen
I wonder if the hangover from this is behind Will Byrd’s difficulties in
getting tenure despite his truly groundbreaking work on miniKanren. Maybe
people just identify miniKanren with Prolog and dismiss it?

(I know it's bad form to complain about downvotes, but this one is really
puzzling me. Does someone think Will Byrd doesn't deserve tenure? How could
anyone think that?)

~~~
nitrogen
Is there a good summary somewhere of what miniKanren is, and of the
difficulties in finding academic support, for those of us who work in industry
and don't know much about the tenure system?

(FWIW I tried to help correct the downvotes. I personally believe very very
few comments should be in the gray.)

~~~
kragen
As for what miniKanren is, there are a couple of books (one being Byrd's
dissertation) and a lot of papers. I think maybe the best summary I've read is
actually the interview transcript at [https://www.infoq.com/interviews/byrd-
relational-programming...](https://www.infoq.com/interviews/byrd-relational-
programming-minikanren/). But also, there's a summary on
[http://minikanren.org/](http://minikanren.org/), with links to the papers and
talks, and an older summary on
[https://kanren.sf.net/](https://kanren.sf.net/). Most of these are a bit hard
to understand the significance of if you aren't already familiar with logic
programming. The talk on
[https://www.youtube.com/watch?v=OyfBQmvr2Hc](https://www.youtube.com/watch?v=OyfBQmvr2Hc)
is a popular description of one of the most astonishing results; I haven't
watched it but people tell me it's good.
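
If a concrete taste helps before diving into those links: here is a tiny
example using the Python port of miniKanren (the `kanren` package; the real
system lives in Scheme, and these two goals are just its README-level basics).
The point is that `eq` and `membero` are relations, not functions:

    from kanren import run, var, eq, membero

    x = var()
    print(run(1, x, eq(x, 5)))        # one x such that x == 5    -> (5,)
    print(run(0, x, membero(x, (1, 2, 3)),
                    membero(x, (2, 3, 4))))  # all shared members -> (2, 3)

Because they are relations, the same goals can run with any argument unknown;
running programs "backwards" like that is what enables famous miniKanren
results such as generating quines, or synthesizing programs from examples of
their output.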

As for understanding academia, yeah, I don't know.

------
ggm
"I had friends on that deathstar" basically: I was working in the UK JANET
community, with people I knew doing Alvey project funded 5th Generation work.
The projects had a very strong "catch up or we will loose the war" feel to
them, because the Japanese were pumping money into things which made ICL,
Olivetti,Ci-Honeywell-Bull,Ferranti,GEC very nervous. Domestic-Strategic
outcomes vested in having strong world-comparable computing manufacturing:
What if they "out thought us"

The one I can remember (from 40 years' distance) is the GEC4000 project at
Newcastle University, which joined GEC-Plessey with the universities of
Newcastle, York and Glasgow in a joint computing project. The GEC4000 was
targeting (hah) submarine and AWACS contexts, and this was a huge activity to
get ahead of some issues in computing for synthetic radar. (Which, BTW, GEC
had some lead in: the work they did in the prior two generations of computing
made British radar work world-class, which is in part why British materiel
sells so well into many armed forces. It also explains why Schlumberger and
the like were into the same hardware: synthetic-aperture radar is the same
problem as oil shockwave exploration. Guess which British companies sold to
Oil? Ferranti (Argus) and GEC-Plessey.)

The standing joke about the GEC system was that it had a real "halt and catch
fire" instruction, because thermal management on the CPU was out of hand and
it needed constant monitoring to prevent fire risk in the machine room.

The UK also had _two_ rounds of "AI is the solution to every problem":
remember the Lighthill report of the 1970s, which presaged many of the same
pushbacks on ubiquitous AI in the 1980s and 1990s. The proponents made far too
sweeping claims and got a rap over the knuckles about cost-vs-benefit and
sweeping up the money.

Alvey made it very hard to fund non-Alvey projects. This kind of thing turns
collaborative science into competitive science, with toxic qualities. Overall,
I judge Alvey harshly.

BTW I was a very, very junior CompSci research programmer: if this stuff
leaked down to first-job-out-of-uni staff, imagine how loud it was around the
professorial table?

------
kragen
> _somehow they thought optical databases would allow for image search. It’s
> not clear what they had in mind here_

Oh, well, it turns out that focusing a spatially modulated light beam through
a lens gives you the Fourier transform of the original spatial modulation at
the focal plane. So if your lens has a focal length of 30 cm, you can
calculate the Fourier transform of an image of whatever size (1024x1024,
2048x2048, 4096x4096) in 1 ns, roughly the time light takes to cover the 30 cm
to the focal plane. This is about six or seven orders of magnitude faster than
you can do it on a modern CPU core, and still about three orders of magnitude
faster than you can do it on an NVIDIA Turing. It was about eleven orders of
magnitude faster than you could do it on digital hardware at the time of the
Fifth Generation Project.
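
To make that concrete (this is the standard Fourier-optics result, not
anything specific to the 5Gen proposal): for a field U_0(x, y) in the front
focal plane of a lens with focal length f, the amplitude in the back focal
plane is the 2-D Fourier transform of U_0, with spatial frequencies scaled by
the wavelength and the focal length:

    U_f(u, v) = \frac{1}{i \lambda f}
                \iint U_0(x, y)\, e^{-i \frac{2\pi}{\lambda f}(x u + y v)}\, dx\, dy

A detector in that plane records only the intensity |U_f|^2, which is exactly
where the phase information goes missing, as the next paragraph says.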

But you lose the phase information (as far as I know) so you can't reverse the
process. So what's it good for?

Correlation, that's what. If you have a photo of a tank from some angle and at
some scale, and at some set of phase shifts it has a significant correlation
with some other image, that probably means that tank is in that image. You can
compute the correlation (disregarding phase, so it's an upper bound rather
than the actual correlation) with an electrical SLM, which is slower than the
lens but still a lot faster than a VAX. Or an i7.
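
For intuition, here is a minimal NumPy sketch of the digital analogue of that
optical correlator -- a matched filter that finds a small template in a scene
by multiplying spectra. Everything in it (shapes, the planted patch, the
function name) is illustrative, not from any FGCP design:

    import numpy as np

    def correlate_fft(scene, template):
        # Cross-correlate the template with the scene via the FFT: the
        # O(N log N) version of what the lens does in constant time.
        padded = np.zeros_like(scene)
        padded[:template.shape[0], :template.shape[1]] = template
        # Correlation theorem: corr = IFFT(FFT(scene) * conj(FFT(template))).
        spectrum = np.fft.fft2(scene) * np.conj(np.fft.fft2(padded))
        return np.real(np.fft.ifft2(spectrum))

    rng = np.random.default_rng(0)
    scene = rng.normal(0.0, 1.0, (256, 256))      # noisy background
    scene[100:108, 40:48] += 1.0                  # plant an 8x8 "tank"
    corr = correlate_fft(scene, np.ones((8, 8)))
    print(np.unravel_index(np.argmax(corr), corr.shape))  # -> (100, 40)

The lens evaluates that product for every shift at once, in the time light
takes to cross the bench; but, as above, the optical version disregards phase,
so its peak is an upper bound rather than the exact correlation.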

So it was a plausible attack on the problem of image recognition using
massively parallel analog computation. It hasn't panned out, but we had to try
it to find that out.

------
nudpiedo
It is so easy to make fun of the failed ones and to try to predict the past,
but there were other government projects which ended up working and which we
now see as obviously right just because we know how history went; see for
example ARPANET.

If the fifth generation had worked to some degree, it would now have some
status, and if ARPANET had failed, it would have been forgotten. This is how
Anglo-Saxon culture often works: making fun and parodies of competitors'
failures, etc. The same rhetorical approach is often seen in other parts of
history.

------
omarhaneef
I love history as much as the next guy, and this is exciting writing, but the
issue I have with this is that we haven’t seen the end.

That is to say, AI is an unsolved problem, and we don’t really know what the
solution will look like. We are in a (the?) golden age for statistical
learning, but in the end symbolic processing and Prolog-like architectures
might turn out to be the key.

I will point to someone much smarter than myself — Jerry Fodor — to make the
arguments. But basically it amounts to this: intelligence is the ability to
reason and our best model for reasoning is linguistic.

Okay, he says it much better than I do.

------
mark_l_watson
Perhaps the author has only worked on simple, predictable-outcome projects,
but in my experience (personal, and from generally observing our industry)
failure and restarts are often the path forward to success.

I lived through the period of AI in the 1980s, have been a vendor of expert
system tools for Lisp machines and the Apple Macintosh, and when MCC was being
founded its founder, Admiral Bobby Inman, gave me a tour of the place.

I also joke about the outlandish claims and thought the AI winters were well
deserved but I also believe that the state of our field and business now is
very good and those rough times in the 1980s and 1990s are what helped get us
to where we are today.

------
sgt101
>Saying you’re going to build an intelligent talking computer in 1982 or 2019
is much like saying you’re going to fly to the moon or build a web browser in
1492.

Alexa and Siri do a pretty good job of talking, and are almost up to the task
of hearing. They are command-and-search interfaces only - the "intelligence"
in them is very limited (hardly there) - but it is clear that there is a jump
between 1982 and 2019. These devices do exist, and to deny their reality does
cast a lot of doubt on the other perspectives presented.

~~~
scottlocklin
Well, you should go read the original 5th gen specs; they're all on
file-sharing sites. They were not talking about Siri (which is a terrible
example, and something that was basically possible even way back in the '80s
using vector quantizers and HMMs). They were talking about Star Trek or HAL
9000-type computers that could understand meaning and do human-like abstract
design work, beyond "hey google spyware, please turn on the air conditioning."

------
uxp100
In Japanese pop culture of the '80s and '90s, you see references to 5th gen
computing now and then, just as part of the technobabble in some sci-fi show.
Watching sci-fi anime, once I had heard it a few times I started to suspect it
might be a real thing, and I looked it up.

If anyone has a recommendation for a book or long article that covers this,
some of the people involved, what they actually achieved, etc., I'd be
interested.

~~~
LargoLasskhyfv
Look at this
[http://xahlee.info/kbd/TRON_keyboard.html](http://xahlee.info/kbd/TRON_keyboard.html)

Then
[https://en.wikipedia.org/wiki/TRON_project](https://en.wikipedia.org/wiki/TRON_project)
and especially
[https://en.wikipedia.org/wiki/BTRON](https://en.wikipedia.org/wiki/BTRON)

If that is not enough, try [http://tronweb.super-nova.co.jp/homepage.html](http://tronweb.super-nova.co.jp/homepage.html),
which is the most comprehensive history of it I can find.

In my opinion that is the stuff which influenced sci-fi anime.

Imagine... a specification exactly describing a common platform with different
levels of capabilities, not down to the bits but more like an API, where
anybody is free to implement it in whatever way they see fit, as long as it
conforms to the API. Mandatory interoperability between products of different
vendors, be it hard- or software. FAST! All networked. All with code at least
an order of magnitude smaller than contemporary counterparts.

So no BLOAT.

That was the spirit of the time there, I guess.

But the powers of Babylon intervened again, I guess.

