'Fifth Generation' Became Japan's Lost Generation (1992) (nytimes.com)
106 points by luu on March 20, 2018 | hide | past | favorite | 47 comments


In 1984 I visited NTT's 5th generation effort (and later worked at MCC, featured prominently in the article).

At the time I had a bitmapped display on my desk which I connected to a dedicated Lisp machine (Dorado). At the 5Gen office there was a row of desks with shared character terminals (one for every two developers) connected to the workhorse PDP-10 (same machine we used for research in the USA). The guy I was visiting was glad he only had to share the terminal with one other person. It was my first introduction to the blue collar status of programmers in Japan -- but without the respect that, say, a machinist gets in the US. It seemed pretty unlikely they'd have the opportunity to innovate.

The MCC response wasn't much better (perhaps the nearby Sematech did better? I don't know). It was also stupid and bureaucratic, and although they did recruit a bunch of very smart folks from US and Mexican universities, I never saw much impact either (though I had two Symbolics 3600 terminals on my desk). I don't think it's a government issue -- these multi-company collaborations (Taligent, Itanium, even Power) rarely come to much.

Of course companies can work together -- standards organizations and joint development projects are common. But IMHO a diffuse "consortium" really has no reason to exist beyond its own existence.


"these multi-company collaborations (Taligent, Itanium, even Power) rarely come to much."

That is a good point. A good example that's happening right now is https://www.roomkey.com/

If you read this you will be amazed by the technology that RoomKey developed:

"Against the Grain: How We Built the Next Generation Online Travel Agency using Amazon, Clojure, and a Comically Small Team"

http://www.colinsteele.org/post/23103789647/against-the-grai...

(Hotelicopter eventually rebranded as RoomKey)

The tech is amazing, but this is a company owned by a consortium of big hotels (Marriott, Sheraton, etc.) and they have no interest in feeding a disruptive startup that threatens their business. Rather, they simply want RoomKey to exist so they can use it as leverage in negotiations with Expedia and Orbitz and maybe Kayak. In other words, they want to be able to say "Give us a better percentage of revenue or we will fully fund RoomKey and you will be sorry."

This truth is obvious to the programmers there, and they find it immensely frustrating.

It's the misaligned incentives that lead to the poisonous politics of these multi-company joint efforts.


Perhaps I miss something for cultural reasons (I am not a US citizen nor do I live in the USA):

> The tech is amazing, but this is a company owned by a consortium of big hotels (Marriott, Sheraton, etc.) and they have no interest in feeding a disruptive startup that threatens their business. Rather, they simply want RoomKey to exist so they can use it as leverage in negotiations with Expedia and Orbitz and maybe Kayak. In other words, they want to be able to say "Give us a better percentage of revenue or we will fully fund RoomKey and you will be sorry."

> This truth is obvious to the programmers there, and they find it immensely frustrating.

Why is this so frustrating to the programmers working there? They can work with interesting technology (e.g. Clojure). Yes, because of the incentives it will probably not become a really huge company, but again: where is the problem if they earn a good salary?


Because technology is the means, not the end. Many of us work with tech not because we fetishise the proximity or status of technology, but because we recognise that technology is a means to actually do something meaningful in the world.

If we have no meaning in the work that we do, then job-wise we'll be as happy working with tech as we would be flipping burgers at McDonald's (pay and conditions aside, obviously).


> Have no meaning in the work that we do

To create a website that serves as an important bulwark against other travel websites that endanger the company that pays you is not what I would call meaningless.


Because you want to achieve positive things in your life's work, not just pass time.


A boring, repetitive project in Clojure is still boring.


How is this not considered anti-competitive behavior?


Well, it would be competitive for Marriott or Sheraton to launch a new company to compete with Expedia or Orbitz. It is somewhat less competitive for them to buy a startup that is already doing that, but if they gave the startup a lot of money so it could better compete with the giants, then that would help increase competition. But to buy a startup only because you want some leverage against Expedia and Orbitz, and then to let that startup die, is unfortunate in several ways. It wastes the efforts of a lot of good people, and it allows the total amount of competition in the system to decrease.


I wonder whether MCC ever had a plan to move the things they developed into 'production'. On the Lisp side this was things like Cyc, the UIMS they developed, the CAD system, and the object database (Orion). The object database later appeared as a product.

https://www.uni-ulm.de/~sbauer/programming/OOinfo/FAQ/oo-faq...


I was just wondering: how does the 5th generation project compare to IBM Watson? (Both are expert systems/rule-based systems.) Was it possible to pull off a system like that on '80s minicomputers? I suspect that the PDP-10 was too limited for the amount of data processing required. Maybe one could have done that with rule-based systems on a mainframe of the day.


Back in about 1988, Bruce Porter, one of my AI professors, described the problem statement of the Fifth Generation project as, "a list of important research topics like natural language understanding, speech recognition, and so forth, and the statement, 'Therefore we will build a parallel Prolog machine.'" He wasn't a fan.

On the other hand, according to the tech press at the time, the Japanese were going to eat our lunch. Put us all out of work. Rule the world with an iron fist.


A parallel Prolog machine is a great idea. Just because it failed to take off doesn't mean the idea shouldn't be explored again. The difference between CPU design in '88 and today is amazing. Prolog is one of those languages that automatically lends itself to AND/OR parallelism without having to change any code.

I personally still see a future there; von Neumann has only taken us so far. We need alternate CPU designs.
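The OR-parallelism point can be sketched in a toy way: each clause for a goal is an independent alternative, so all clauses can be tried concurrently without rewriting the program. A minimal Python sketch (the goal, facts, and function names here are all invented for illustration, not from any real Prolog system):

```python
from concurrent.futures import ThreadPoolExecutor

# Each entry is one "clause" for the goal ancestor/1 -- an independent
# alternative, which is what makes OR-parallelism possible without
# changing the program itself. The facts are made up for this example.
CLAUSES = {
    "ancestor": [
        lambda x: x in {"tom", "ann"},   # ancestor(X) :- parent(X).
        lambda x: x in {"joe"},          # ancestor(X) :- grandparent(X).
    ],
}

def solve_or_parallel(goal, arg):
    """Try every clause for `goal` concurrently; succeed if any succeeds."""
    clauses = CLAUSES[goal]
    with ThreadPoolExecutor(max_workers=len(clauses)) as pool:
        return any(pool.map(lambda clause: clause(arg), clauses))
```

The hard part in real OR-parallel systems was managing variable bindings shared between alternatives; the toy version dodges that entirely by making each clause a pure test.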


The 5th generation project found that Prolog isn't very parallelizable and thus they created a language called KL1 which was more parallelizable than Prolog but not as good in other ways.


There were a few nearly identical parallelizations of Prolog. Besides KL1, there were Concurrent Prolog, GHC, Parlog, Strand, and Aldwych. The structure of Prolog-like concurrent systems is like the Actor model, based on message passing, and shares the problem of indeterminacy in the ordering of messages. These systems had to drop backtracking, because that requires access to a global stack.
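The dropped-backtracking point can be made concrete with a toy propositional solver (purely illustrative, not modeled on any real KL1 or Parlog implementation): the backtracking version keeps untried alternatives around so it can return to them, while the committed-choice version discards them at the first match.

```python
# Clauses map a goal to a list of alternative bodies (lists of subgoals).
# Toy program:  a :- b.   a :- c.   c.   (and no clause for b)
CLAUSES = {"a": [["b"], ["c"]], "c": [[]]}

def backtracking_solve(goals, clauses=CLAUSES):
    """Prolog-style depth-first search: a failed alternative is undone
    and the next clause is tried (this is what needs the global stack)."""
    if not goals:
        return True
    goal, rest = goals[0], goals[1:]
    for body in clauses.get(goal, []):
        if backtracking_solve(body + rest, clauses):
            return True   # this alternative led to a proof
        # otherwise: backtrack and try the next clause for `goal`
    return False

def committed_choice_solve(goals, clauses=CLAUSES):
    """Committed choice: take the first matching clause and never
    revisit the alternatives."""
    while goals:
        goal, goals = goals[0], goals[1:]
        alternatives = clauses.get(goal, [])
        if not alternatives:
            return False              # dead end, and no way back
        goals = alternatives[0] + goals   # commit; discard the rest
    return True
```

Here the backtracking solver proves `a` via the second clause after the first one fails, while the committed version commits to `a :- b` and gets stuck; the real committed-choice languages used clause guards to pick the right alternative up front instead.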

In the end, it proved difficult to implement clause invocation in these languages as efficiently as procedure invocation in object-oriented programming languages. The combination of efficient inheritance-based procedure invocation together with class libraries and browsers was better than the slower pattern-directed clause invocation of the FGCS programming languages, so concurrent object-oriented message-passing languages like Java and C# became the mainstream.

I would suggest that the relational database approach is simpler and about as powerful as the rule-based approach to knowledge processing. From a practical point of view, any row of a relational database table can be considered a rule, provided that at least one attribute has been selected as an output.
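The row-as-rule reading can be shown with a toy decision table (the table and column names are invented for illustration): with `discount` chosen as the output attribute, each row reads as "IF customer_type = T AND volume = V THEN discount = D".

```python
# A toy decision table; `discount` is the designated output attribute.
TABLE = [
    {"customer_type": "retail",    "volume": "low",  "discount": 0},
    {"customer_type": "retail",    "volume": "high", "discount": 5},
    {"customer_type": "wholesale", "volume": "high", "discount": 12},
]

def apply_rules(table, output, **inputs):
    """Fire the first row whose non-output attributes match the inputs."""
    for row in table:
        if all(row[k] == v for k, v in inputs.items()):
            return row[output]
    return None  # no rule matched
```

A rule engine would express the same three rules as pattern-action pairs; here the table plus one generic matching loop does the same job.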


I agree, along with a huge range of parallel machines.

Parallel machines weren't a great idea in the early 90s, because you could barely get one decent CPU on a chip and adding memory turned it into a large board per core. Now you can have satisfying numbers of CPUs plus memory on a single chip, and even a useful prototype on an FPGA.


> On the other hand, according to the tech press at the time, the Japanese were going to eat our lunch. Put us all out of work. Rule the world with an iron fist.

Sound familiar? Thankfully history doesn’t repeat itself with another Asian country going all in on AI with a scary huge real estate bubble.


Drawing trite parallels is very ha-ha, but we have to remember that while Japan has only half the population of the US, China's population is several times larger. It's a little suspicious to argue that one Japanese person is somehow destined to be more productive and inventive than two Americans; it is even more suspicious to argue the same for one American and five Chinese.


The parallels are downright freaky. Think about it:

* Sense that they will rule the world

* Incredibly bubbly real estate market

* Exporting this bubble to North America west coast

* Huge bets on AI

* Zombie companies (SOEs)

* Rapidly aging workforce

Yes, there are some differences. China is much larger than Japan, but it is also much poorer (on a per capita basis). The country as a whole isn't that well educated (compulsory education only goes to 9th grade, with extremely uneven resources applied to rural areas and different cities).

Who knows how this will play out. I'm guessing a China crash will be much worse in scale than Japan's coming down in the late '80s/early '90s, but China will also be able to recover much more quickly after it happens, given there is still room for development.


> one American and five Chinese

How many Chinese are meaningfully in the present? As I understood it, there was still a non-trivial population living in a less-than-modern way.


Interesting question! And how many in the US are in the present? And just _when_ in the world is Silicon Valley, or Tokyo?


I mean, straight up subsistence farming populations. Not "building Facebook for cats."


While I don't have solid economic numbers (good luck with that in China), if I had to guess, it would make that ratio still one American to two Chinese.

Here's a model that I'll use to back up that idea:

Top 5 US CSAs: NY, LA, Chicago, DC, SF Bay Area: 70.8 million people.

Top 5 Chinese Metro Areas: Guangzhou, Shanghai, Chongqing, Beijing, Hangzhou: 149 million.


Are you including the rural populations of each city? Chongqing, for example, is actually fairly rural, it’s basically what we would call a state.

Chinese municipalities are complicated. Even Beijing is about half rural.


Oh yeah I'm not saying the model is great, but if someone wants to discount the 1 to 4.3 ratio, that's the estimate I'd throw out there.

And if we take in the half rural estimate, that'd already put it at 1 to 1 today (with lots of room for growth).

Agreed that Chongqing is fairly different (since its forced "urbanization" is due to the Three Gorges Dam). But I've heard a lot of people say Beijing and Shanghai make NYC look "small" in comparison.

Even LA and NY have lots of parts that many Americans may not consider as living "meaningfully in the present" from the point of view being able to work on advanced projects like "Fifth Generation" computing.


> Even LA and NY have lots of parts that many Americans may not consider as living "meaningfully in the present" from the point of view being able to work on advanced projects like "Fifth Generation" computing.

I could have put it clearer, but my understanding was that China still has a non-trivial population that is not involved in the "advanced economy," in a way that would surprise most Americans. That is, subsistence farming, less-than high school education, etc.


It could also go the other way. There is no way to know. If you can learn one thing from history, it's that empires never last, however dominant they may seem at the time.


I confess that I too fell for the 5th gen hype and spent a couple of years diving deep into Prolog in the mid '80s. It wasn't a total waste, however; I still find my knowledge of Prolog coming in handy. Logic programming is an interesting concept, but not in the way they envisaged it in the '80s. In particular, I am fascinated by answer set programming. There is no need to shoehorn Turing completeness into logic programming.


>> He wasn't a fan.

Given the historical circumstances, I'll bet he would have preferred they were trying to build a parallel Lisp machine.


If I remember correctly, we did have a lab full of TI Lisp machines at the time. Never used them myself, though.

The interesting point, though, was that the specific goals were not clearly related to the fundamental reasons for doing it. And as it turned out, most of the research topics (but not all, IIRC) were really boosted by work in graphics co-processors.


My understanding was that, at the time, Lisp and Prolog were the two "languages of AI", with Lisp favoured in the US and Prolog in Europe and Japan. Knowing how language arguments go, I assume that an AI professor who didn't like the idea the Japanese were going for Prolog would be naturally preferring Lisp. I might be wrong :)

As to the 5th generation computer project, what little I've read so far has more or less convinced me that special-purpose processors will never be better than general-purpose ones, at least not for general-purpose computing (which, ultimately, was the aim of the project).

I'm saying that as a dyed-in-the-wool symbolist, btw (and a long-time Prolog user). Special-purpose processors to make my Prolog go faster are something I'd love to get my hands on, but for most people? I think it's not even number 1000 on their list of priorities to improve computing performance.


Basically most AI/Lisp vendors also offered a Prolog in Lisp. As a Prolog implementation the standalone Prolog compilers were faster and had more Prolog features, but since the Lisp tools were a bit more mature, even in Japan some projects used Prolog in Lisp.

There was a lot of research for AI hardware in the 80s and also for Prolog hardware. I once saw a Japanese Prolog Machine (a special purpose workstation with a Prolog environment), but I didn't have the chance to actually use it.

With faster/cheaper Intel or RISC processors and a lack of research funding, the topic of specialized hardware was mostly ending in the early 90s.


I was at Computer Corporation of America for a few years, until 1988, in the research group. It was quite illustrious in its day, producing very interesting research and prototypes (concurrency control, extended relational databases, the Adaplex project). We were aware of both the 5th Generation project and the US "competitor", MCC.

Remember, at the time, Japan was an economic and technical powerhouse. There were concerns about the Japanese buying up valuable real estate in New York. We were worried about the trade imbalance. So 5GP was quite threatening to some.

I remember meeting one of the 5GP researchers who credited MCC for scaring the Japanese into greatly increasing funding for computer science in his country. And one of the other comments here pointed out that the fear worked in the other direction too.

Another commenter here talked about Prolog. One of the lines of research at the time was on "expert systems", and if I remember correctly, Prolog was one of the major languages for research in that area. An expert system was a set of rules for solving a problem in some domain. It was yet another failed attempt at AI, which didn't really work beyond toy problems. (Of course, now, those rules come out of machine learning systems, but the rules are kind of inscrutable.)


At the time, there were a lot of voices pushing the idea that public-private partnerships in the Japanese MITI vein were the future. Economists like Lester Thurow vehemently argued that the US wasn't doing nearly enough in this vein and was going to lose out to Japan as a result.

The history of AI research is sort of messy but around the end of the 1980s you were entering one of the AI winters. As you say, a lot of money had been pumped into expert systems without a lot of commercial success (and some high-profile failures). The Lisp machine market was collapsing. It probably didn't help that a lot of the big computer companies at the time (the Route 128 minicomputer makers) were collectively struggling.


The research arm of CCA disappeared in 1988, at the same time that Symbolics (and other Lisp machine companies) was failing. So I, along with a few Symbolics people, was looking for the next thing to do, as OO databases were becoming a thing. We were brought together by an entrepreneur and formed Object Design, one of the batch of OO database companies of the late 80s/early 90s. Those Symbolics guys were absolutely brilliant.


I remember ObjectStore. Tried to use it, but it didn't work that well for our use case; we were better off doing serialization. Very interesting design, though.


It's also notable that Lucid's Energize C++ programming environment used ObjectStore. Lucid was known for its excellent Common Lisp implementation.



Moto-oka Tohru and Hideo Aiso often said privately that an unstated goal in the Japanese Fifth Generation project was the creation of a practical way to keyboard written Japanese and that with regard to that issue it was an unqualified success.


Linguist James Marshall Unger in his book, The Fifth Generation Fallacy, argued that the primary motivation for FGCS research was to develop NLP for the Japanese writing system. He claimed that the project was doomed to failure because the high level of ambiguity in the Japanese writing system required contextual understanding for even simple processing tasks. In that respect, things went as he predicted. Do Japanese IMEs like ATOK have any heritage in FGCS work?


This reminded me of the $860MM "AI Fund" South Korea touted after the AlphaGo-Sedol Go competition [0]. Looking for news now, it seems like they teamed up with Singapore and are taking a less centralized approach. From [1]:

Gwangju Metropolitan City (GMC) plans to set aside US$920 million (1 trillion won) to support the AI Town Project, which is expected to incubate 1,000 AI-oriented startups and educate and train 5,000 for the sector within the decade, starting from 2019.

The 3 sub-projects of the AI Town Project include: (1) Establishment of AI Research Institute, (2) Development of AI-oriented Campus, and (3) Creation of AI-oriented Start-up Ecosystem.

[0] https://www.nature.com/news/south-korea-trumpets-860-million... [1] https://www.opengovasia.com/articles/singapore-and-south-kor...


I've been intrigued lately by the notion of Japan's "Fifth Generation", and wonder what was the nature of the research being done, its actual results, etc. I'm also wondering if there are possible links that we could try to establish with the current AI boom. One interesting book I found about the subject is this one:

https://www.amazon.ca/Fifth-Generation-Artificial-Intelligen...

But as it's from 1983 (i.e. right in the actual period), I don't think the historical perspective would be very present. Does anyone have other interesting reading suggestions?


There was no technical link -- neural nets were still unpopular (largely, if unfairly, as a result of the '69 Perceptrons book) and most work was on symbolic systems and, in the later period, expert systems. But what passed for AI systems at the time was quite powerful, if limited, just as what are called AI systems today are quite powerful, if limited.

"Unfair" in that the book wasn't intended to destroy work on NNs, but it was used to justify abandoning the field until multilayer networks (discussed in Perceptrons) were re-"discovered" in the '90s.


There was another highly critical book from the same era that said that the Japanese were trying to use AI to adapt their more traditional writing system to computerization and it would be madness for other countries to throw money at AI in the 80's just because the Japanese were. I'm trying to find a reference now if anyone can help.

Edit: https://www.amazon.com/Fifth-Generation-Fallacy-Artificial-I...


I have never been able to find a PDF of Fifth Generation Computer Systems '92, the report of the last FGCS conference, held in 1992. https://books.google.com/books?id=hDeiTuxLU7YC

Here is a summary of what was expected from FGCS:

What is required of the 5th generation computer - social needs and its impact (1982) https://imgur.com/a/HPYCL


> these days, few people want specialized computers for artificial intelligence, preferring powerful general-purpose machines like those made by Sun Microsystems Inc., a fast-growing Silicon Valley company.

This is a stark example of how tech cycles change over time. The period in which the article was written fell during an AI winter (in fact, the Fifth Generation program is listed as a notable example[1] in the Wikipedia entry for "AI Winter"), and Sun Microsystems was still in its prime at the time (work on the yet-to-be-released Java language had started just 12 months[2] before).

[1] https://en.wikipedia.org/wiki/AI_winter#The_fizzle_of_the_fi...

[2] https://en.wikipedia.org/wiki/Java_(programming_language)#Hi...


Specialized hardware/generalized hardware is one of the classic 'wheels of reincarnation' in computer tech: http://www.cap-lore.com/Hardware/Wheel.html

But looking back over the last 20 years, at least for the silicon paradigm, I think we may finally be in the last turn of the wheel. The trend back towards specialized hardware (for audio, video, encryption, NN, servers vs smartphones) seems fairly inexorable. There's no CPU architecture remotely on the horizon which is general and blows away everything else for flexibility and speed. Even specialized hardware like GPUs are starting to see their lunch eaten by ultra-specialized ASICs and TPUs.



