
The dark ages of AI: A panel discussion at AAAI-84 (1985)
https://www.researchgate.net/publication/220604602_The_dark_ages_of_AI_A_panel_discussion_at_AAAI-84
======
Animats
_" To sketch a worst case scenario, suppose that five years from now (from
1985) the strategic computing initiative collapses miserably as autonomous
vehicles fail to roll. The fifth generation turns out not to go anywhere, and
the Japanese government immediately gets out of computing. Every startup
company fails. Texas Instruments and Schlumberger and all other companies lose
interest."_

All of which happened. That was the "AI Winter".

The "Fifth Generation" was an initiative by the Ministry of International
Trade and Industry in Japan to develop a new generation of computers intended
to run Prolog. Yes, really.[1]

The "Strategic Computing Initiative" was a DARPA-funded push on AI in the
1980s. DARPA pulled the plug in 1987.[2]

I got an MSCS from Stanford in 1985. Many of the AI faculty from that period
were in deep denial about this. I could see that expert systems were way
overrated. I'd done previous work with automatic theorem proving, and was
painfully aware of how brittle inference systems are.

Each round of AI has been like that. Good idea, claims that strong AI is just
around the corner, good idea hits its limit, field stuck. I've seen four
cycles of this in my lifetime.

At least this time around, machine learning has substantial commercial
applications and generates more than enough revenue to fund itself. It's a
broadly useful technology. Expert systems were a niche. There's enough money
and enough hardware now that if someone has the next good idea, it will be
implementable. But strong AI from improvements to machine learning? Probably
not.

[1]
[https://en.wikipedia.org/wiki/Fifth_generation_computer](https://en.wikipedia.org/wiki/Fifth_generation_computer)

[2]
[https://en.wikipedia.org/wiki/Strategic_Computing_Initiative](https://en.wikipedia.org/wiki/Strategic_Computing_Initiative)

~~~
mcswell
I worked on an AI program in the 80s that is perhaps the only program from
that era that's still being used.

I got hired in 1984 into the AI group at Boeing Computer Services (neither the
AI group nor BCS exists any more, and yes, it was part of that Boeing); I was
in the Natural Language Processing group (the expert systems group was a
different set of people). I left in 1987 to do something else. By that time, we had
built a syntactic parser of English that covered most everything in "standard"
English (but without most of the probabilistic apparatus that modern parsers
have). It was a solution in search of a problem.

After I left, the rest of the NLP team came up with the problem. When Boeing
builds an aircraft, they have hundreds, if not thousands, of manuals. (No
comment on the 737 MAX...) The planes are sold to airlines around the world,
many of whose employees don't understand English anywhere near as well as a
native speaker would. Boeing wanted its writers of manuals to write in
simplified English. There was (and is) a standard for that, but it was very
hard to ensure that writers conformed to it. The solution was to rip out all
the "interesting" constructions from my grammar, and retain only the
constructions and lexicon that conformed to the simplified (technical) English
standard. Then the manuals got pushed through the parser; anything that didn't
parse had to be re-written to conform to the grammar. And (I think, remember
this was after I left) if anything parsed too ambiguously, it was sent for a
rewrite as well.
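That workflow (parse each sentence against a restricted grammar, flag anything that fails to parse or parses ambiguously) can be sketched with a toy CYK chart parser. The grammar, lexicon, and messages below are invented for illustration and are not Boeing's actual system:

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form: binary rules A -> B C,
# plus a lexicon mapping words to categories. Both are made up here.
BINARY = [
    ("S",  "NP",  "VP"),
    ("NP", "Det", "N"),
    ("NP", "NP",  "PP"),   # noun phrase modified by a prepositional phrase
    ("VP", "V",   "NP"),
    ("VP", "VP",  "PP"),   # verb phrase modified by a prepositional phrase
    ("PP", "P",   "NP"),
]
LEXICON = {
    "the": ["Det"],
    "crew": ["N"], "valve": ["N"], "gauge": ["N"],
    "checked": ["V"],
    "with": ["P"],
}

def parse_count(words):
    """CYK chart that counts distinct parse trees rooted at S."""
    n = len(words)
    chart = [[defaultdict(int) for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        for cat in LEXICON.get(w, []):
            chart[i][i][cat] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for a, b, c in BINARY:
                    chart[i][j][a] += chart[i][k][b] * chart[k + 1][j][c]
    return chart[0][n - 1]["S"] if n else 0

def check(sentence):
    """Conformance check: reject sentences that don't parse or parse ambiguously."""
    count = parse_count(sentence.lower().split())
    if count == 0:
        return "rewrite: does not parse"
    if count > 1:
        return "rewrite: ambiguous (%d parses)" % count
    return "ok"
```

For example, "the crew checked the valve" passes with exactly one parse, "the crew checked the valve with the gauge" is flagged as ambiguous (PP attachment: did the crew use the gauge, or does the valve have one?), and a sentence using a word outside the approved lexicon fails to parse at all.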

And that's how a 1980s AI program is still in use today.

~~~
The_rationalist
Is it open source? Why keep it for themselves? Were you doing semantic
parsing? What parsing techniques were you using? Were you inspired by a
particular formal linguistic grammar?

------
ProfHewitt
Because of strategic challenges, Reusable Scalable Intelligent Systems will be
developed by 2025 with the following characteristics:

• Interactively acquire information from video, Web pages, hologlasses
(electronic glasses with holographic-like overlays), online data bases,
sensors, articles, human speech and gestures, etc.

• Real-time integration of massive pervasively inconsistent information

• Self-informative in the sense of knowing its own goals, plans, history,
provenance of its information and having relevant information about its own
strengths and weaknesses.

• Close human interaction using hologlasses for secure mobile communication.

• No closed-form algorithmic solution is possible to implement the above
capabilities

• Reusable so that advances in one area can readily be used elsewhere without
having to start over from scratch.

• Scalable in all important dimensions meaning that there are no hard barriers
to continual improvement in the above areas, i.e., system performance
continually significantly improves.

A large project (analogous to Manhattan and Apollo) is required to meet
strategic challenges for Intelligent Systems and S5G (Secure 5G).

See the following for an outline of technology involved:

[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114)

~~~
yters
Why do you think intelligent systems are even feasible? Why do you assume they
will be developed? And how is this any different from today's social networks?
Either it already exists, or there is a significant AI portion that is
completely unsubstantiated at this point.

~~~
ProfHewitt
Strategic competition will be crucial because once one side shows that it is
possible others will quickly follow.

Do you see any reason that the technology outlined in the article cannot be
implemented by 2025?

~~~
yters
All the AI technology in the article is promissory technology, i.e. it doesn't
exist except by analogy to human intelligence and by specious claims that it'll
be better than what has come before.

What if all AI tech that has come before sucks because there is a fundamental
limit to what algorithms can do, and that limit is much less (infinitely less)
than what human intelligence can do?

There is a fundamental assumption behind all AI research that the human mind
can be simulated by a Turing machine, and no one has verified that assumption.
AI research is just floating on the materialistic bias that the mind is
reducible to the matter in the brain. We could very well have a supernatural
soul, and thus the materialistic bias is completely wrong.

Thus, I don't see any reason the technology can be implemented by any date.

~~~
somewhereoutth
Agreed. A Turing machine operates in a discrete countable state space, whereas
the human brain requires real numbers for a complete state description. Cantor
showed (with the diagonal argument) that real numbers are uncountable - so
there are (infinitely many!) real numbers that are unreachable using a Turing
machine. My suspicion is that AGI, consciousness, perhaps even the
supernatural soul you allude to, can be found only in this unreachable state
space. There exists a Cardinality Barrier!
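Cantor's diagonal construction is short enough to write down directly: given any proposed enumeration of infinite bit sequences, it builds one the enumeration necessarily misses. A sketch (the example enumeration is chosen arbitrarily for the demo):

```python
def diagonal(enum):
    """Given an enumeration of infinite binary sequences -- enum(k) is the
    k-th sequence, itself a function from position n to a bit -- return a
    sequence that differs from the k-th sequence at position k, and hence
    appears nowhere in the enumeration."""
    return lambda k: 1 - enum(k)(k)

# Arbitrary example enumeration: sequence k is the binary expansion of k,
# read bit by bit from the least significant end.
enum = lambda k: (lambda n: (k >> n) & 1)
missing = diagonal(enum)  # differs from enum(k) at position k, for every k
```

Note this establishes only that the reals are uncountable; whether a brain's state actually requires a real-valued description is the separate premise debated downthread.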

~~~
yters
Why would the brain require real numbers for a complete state description?
Physical reality is discrete and finite, as far as we know.

~~~
arpa
Every single neuron has many levels of excitation with different ramp-up,
cool-down, and refractory periods, just to name a few non-discrete variables.
That's not accounting for the rest of the chemical soup and mayhem happening in
the nervous system. Thinking that the nervous system is discrete and finite is
very, very reductionist and wrong, perhaps like comparing a novel to an
alphabet ("why would you need an infinite set space to describe all the
possible novels? There are only 30 letters in the alphabet").

~~~
somewhereoutth
Exactly this. To be more precise with your analogy, given a finite alphabet
and a finite length for any novel, then in fact the set of all novels is
countable (and computable) - but when novels can have (countably) infinite
length (or are written using an infinite alphabet?), then the set of novels is
uncountable (and indeed by construction corresponds to the reals, if my
understanding is correct).

~~~
yters
If the set of all novels is an infinite subset of all finite symbol strings,
it may not be computable. This is because the set of all halting programs is
not computable, even though each halting program is a finite symbol string.
So, we could have a set of novels that enumerate the halting programs (best
sellers, those!), and since this subset of novels is not computable, then the
set of all novels is not computable.

~~~
somewhereoutth
Ah - but though a subset (bestsellers/halting programs) may not be computable
(recursively enumerable) as you say, this does not preclude _all_
novels/finite programs from being enumerated. Taking the alphabet as just the
uppercase letters, I can list all novels as A, B, C.. AA, AB, AC... ... and
hence eventually reach any novel you may supply. But this tells me nothing of
whether they are bestsellers (or halting programs)!
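That length-then-alphabetical listing is easy to make concrete; a sketch in Python (the function names are mine):

```python
from itertools import count, product
from string import ascii_uppercase

def all_strings():
    """Yield every finite string over A-Z: all of length 1 in alphabetical
    order, then all of length 2, and so on. Every finite string eventually
    appears exactly once."""
    for n in count(1):
        for letters in product(ascii_uppercase, repeat=n):
            yield "".join(letters)

def steps_to_reach(target):
    """Any given 'novel' is reached after finitely many steps."""
    for i, s in enumerate(all_strings()):
        if s == target:
            return i
```

So "A" appears at index 0, "AA" at index 26, and any finite string shows up at some finite index. What the listing cannot do is decide a property like "is a bestseller" (or "halts") for each entry as it goes by.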

~~~
yters
Yes, of course, enumerating all symbol strings is computable.

What I mean is the novels as a whole correctly label the halting machines, an
unreasonable assumption to be sure. That would prevent the novels from being
computable.

If real novels somehow exhibit an equivalent characteristic, then that would
prevent real novels from being computable. Perhaps the logical consistency of
good novels would be such a characteristic. We probably cannot depend on
novels being perfectly consistent, but still consistent to a greater degree
than we can achieve by randomly generating bitstrings.

The objection would be that a finite set of axioms can generate consistent
novels, so there is also some other criterion that is necessary to make the
novels uncomputable. I would say this second criterion is novelty, defined as
an increase in Kolmogorov complexity.

Using a single set of axioms to enumerate an infinite set of novels will hit a
novelty plateau, as per the proof of uncomputability of Kolmogorov complexity.

So, based on these two criteria, one can derive a hand wavy argument that
novels in general are uncomputable.

------
sysbin
I find it odd that people get most excited when thoughts of AI are aimed at
education in the classroom, as the research hinted. I've always thought the
most exciting thing about AI would be making robots that can cover the work
needed, so all human beings get to focus on what's meaningful for them. The
interest hinted at in the article makes me question where people's priorities
are when it comes to the next generation getting to live life.

~~~
whichquestion
What if the work needed is the work human beings want to focus on because it’s
meaningful to them?

When you have AIs that can do arbitrary work that humans can do, what prevents
other humans from simply cutting out the humans that want to do that work for
AI?

------
yters
What if the human mind is not computable? Why does no one test this hypothesis
instead of throwing billions of dollars and our brightest minds against an
unsubstantiated hypothesis? Why are we so unscientific in testing assumptions
when it comes to AI? It is not difficult. I've thought of tests myself. But,
the closest I've seen in the academic literature is Penrose's microtubules and
silly hypercomputation. Nothing with empirical tests. I blame materialistic
bias, since if materialism is true then the human mind must be a computation.
But, materialism does not need to be true in order to have empirical tests
whether the mind is computable.

~~~
dtwest
I don't follow your logic: you're saying that if the human mind isn't
computable, all our AI research is a waste?

What if weak AI systems are extremely useful? What if we don't need to mimic
the human mind to create intelligent systems? What if machine intelligence is
very different than human intelligence?

We may never build a replica of a human brain. I think it would be absurdly
lucky for the Turing model to be the correct one for understanding the human
mind since it was developed without much knowledge of how the human mind
works. But is that the only way for AI to be successful in your opinion? I
don't understand that perspective.

~~~
yters
The basic premise behind AI is to capture human intellectual capabilities with
computation. If that's not the goal, then it is just algorithmic research by
another name. And if we're talking spandrels that result from AI, why not look
for spandrels while researching something that is feasible?

------
gautamcgoel
One of the panelists (B. Chandrasekaran at Ohio State University) was my dad's
PhD advisor (my dad is Ashok Goel at Georgia Tech). Pretty cool to come across
his name on HN!

------
netwanderer3
AI is going to be huge, no doubt. However, in my opinion there will likely be
some costly mistakes made before humans can reap its full benefits. We have
been seeing a lot of AI developments, but in reality they haven't brought us
as many meaningful changes as we had expected. In general our daily lives
still remain pretty much the same as before. Our civilization has never
experienced significant AI impacts at a large scale, so mistakes may be hard to
avoid, and they will serve as lessons for later generations not to repeat the
same errors.

I have noticed that human emotions and intelligence seem to be at odds with
each other. Sometimes they are even a trade-off: the increase of one may lead
to the decrease of the other. If we look around, humans today have the most
advanced technologies in history, but are our lives really better compared to
people's in the past? Materially, certainly yes, because goods are products
directly produced by those technologies, but mentally and emotionally it could
arguably be worse.

AI and tech keep getting better every day, but humans have to work longer
hours with higher stress. We all thought the machines were supposed to help us
humans, but it's actually the other way around. We work tirelessly day and
night to keep making those machines better and more advanced, but in return
our lives have not seen many meaningful improvements, and are arguably even
worse than before in some areas. Individually, our personal abilities have
limits and naturally evolve very slowly, but the power of AI machines is
potentially unlimited and growing at an even faster rate than Moore's law. We
seem to be collectively working to make machines much better than us while we
remain relatively the same individually. Are technologies actually enslaving
us?

We keep buying things that don't really serve us much. We have a lot of stuff
now, but it doesn't mean much. If something breaks, meh, we will just get
another one. It's just another item, and it will get shipped here tomorrow. We
didn't have as much in the past, but every little thing carried much greater
value. Even the simplest thing could fascinate us and bring us joy.

We humans today already operate based on rules and algorithms dictated by the
machines. We still don't know how our brains function organically (memory,
consciousness, etc.), but in the quest to make AI human-like, we have created
neural networks to simulate our brains. The danger is that even though we
still don't know how our real brains function, we have now turned around and
claimed that the human brain works in a similar way, under the same principles
as an AI neural network. We are enforcing AI rules onto ourselves.

This is a dangerous assumption to make, simply because AI does not have
emotions. Once we begin to operate strictly under these rules and principles
dictated by AI, we will soon lose the attributes and characteristics of what
made us human. Our emotional spectrum may become increasingly narrow.

TV shows and movies are an example, as they are a form of storytelling with
the biggest influence on us at the emotional level. It's no coincidence that
"Seinfeld" and "Friends" are still the two best TV shows today. Many of the
movies considered the best were also made a while ago. Despite the most
advanced technologies, why is it that today we can't seem to tell stories that
bring out the same level of emotional response and intensity as before? They
all seem to lack the genuineness and inspiration that the previous generation
once had.

Is it because AI does not understand human emotions, so its algorithms cannot
accurately factor them in? One could say that today humans are the ones who
write those algorithms, so maybe we can add in components to account for that?
But just like the example above, if we don't even understand how our own brain
works, how can we make the machine accurately reflect us? In the future,
machines are supposed to learn and write all the code by themselves without
human intervention; what would likely happen then? Would we still retain the
ability to even understand that code? Would it be possible that humans may
slowly evolve into machines? In trying to make those machines become like us,
we may instead become like machines.

------
codingslave
The "dark ages of AI" is a meme, like the Dunning-Kruger effect.

------
account73466
This time it is different.™

More seriously, we are almost able to

i) Generate a good book using a short intro.

ii) Generate a meaningful video using a few photos and a basic text scenario.

This puts us closer to generating movies on demand (say, in 2025), and then
good luck to people claiming that the current progress in AI is a bubble.

~~~
azinman2
We’re nowhere near a “good book” (we can’t even do three good paragraphs
reliably), nor a “meaningful video.”

You’re confusing 1000 monkeys (transformers) with one human intelligence
(AGI). That giant leap hasn’t been made.

~~~
AlexCoventry
Have you seen the tables in Appendix E (starts p. 16) of the Transformer-XL
paper? I think they're pretty good.

[https://arxiv.org/pdf/1901.02860.pdf](https://arxiv.org/pdf/1901.02860.pdf)

~~~
username90
Quote from that paper:

> The Battle of Austerlitz was the decisive French victory against Napoleon

It didn't even catch that Napoleon was the leader of the French, as described
in the source snippet. And that was when it had only generated text of a
similar size to the input. Based on that paper alone, I highly doubt this
method will generalize to creating entire books.

~~~
account73466
"he was still in conflict with Emperor Napoleon, the French Republic’s king"

~~~
username90
My point was that the text lacked coherence, that quote only makes it worse.
If it can't even keep coherence for a single page how would it manage for a
hundred?

~~~
account73466
by improving metrics over time

see, e.g.,
[https://gluebenchmark.com/leaderboard/](https://gluebenchmark.com/leaderboard/)

~~~
account73466
Oh fuck, I am banned here from getting points. Time to avoid selling my own
profile to whoever sniffs on HN users.

~~~
dang
You're not banned here.

