The dark ages of AI: A panel discussion at AAAI-84 (1985) (researchgate.net)
57 points by 1e on Dec 24, 2019 | 67 comments



"To sketch a worst case scenario, suppose that five years from now (from 1985) the strategic computing initiative collapses miserably as autonomous vehicles fail to roll. The fifth generation turns out not to go anywhere, and the Japanese government immediately gets out of computing. Every startup company fails. Texas Instruments and Schlumberger and all other companies lose interest."

All of which happened. That was the "AI Winter".

The "Fifth Generation" was an initiative by the Ministry of International Trade and Industry in Japan to develop a new generation of computers intended to run Prolog. Yes, really.[1]

The "Strategic Computing Initiative" was a DARPA-funded push on AI in the 1980s. DARPA pulled the plug in 1987.

I got an MSCS from Stanford in 1985. Many of the AI faculty from that period were in deep denial about this. I could see that expert systems were way overrated. I'd done previous work with automatic theorem proving, and was painfully aware of how brittle inference systems are.

Each round of AI has been like that. Good idea, claims that strong AI is just around the corner, good idea hits its limit, field stuck. I've seen four cycles of this in my lifetime.

At least this time around, machine learning has substantial commercial applications and generates more than enough revenue to fund itself. It's a broadly useful technology. Expert systems were a niche. There's enough money and enough hardware now that if someone has the next good idea, it will be implementable. But strong AI from improvements to machine learning? Probably not.

[1] https://en.wikipedia.org/wiki/Fifth_generation_computer

[2] https://en.wikipedia.org/wiki/Strategic_Computing_Initiative


I worked on an AI program in the 80s that is perhaps the only program from that era that's still being used.

I got hired in 1984 into the AI group at Boeing Computer Services (neither the AI group nor BCS exists any more, and yes, it was part of that Boeing); I was in the Natural Language Processing group (the expert systems group was a different set of people). I left in 1987 to do something else. By that time, we had built a syntactic parser of English that covered most everything in "standard" English (but without most of the probabilistic apparatus that modern parsers have). It was a solution in search of a problem.

After I left, the rest of the NLP team came up with the problem. When Boeing builds an aircraft, they have hundreds, if not thousands, of manuals. (No comment on the 737 MAX...) The planes are sold to airlines around the world, many of whose employees don't understand English anywhere near as well as a native speaker would. Boeing wanted its writers of manuals to write in simplified English. There was (and is) a standard for that, but it was very hard to ensure that writers conformed to it. The solution was to rip out all the "interesting" constructions from my grammar, and retain only the constructions and lexicon that conformed to the simplified (technical) English standard. Then the manuals got pushed through the parser; anything that didn't parse had to be re-written to conform to the grammar. And (I think, remember this was after I left) if anything parsed too ambiguously, it was sent for a rewrite as well.
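To make the parse-or-rewrite check concrete, here is a toy sketch of the idea (mine, not Boeing's; the grammar, lexicon, and example sentences are invented, and it uses NLTK's chart parser rather than our original system). Anything that fails to parse against the restricted grammar, or parses too ambiguously, gets flagged for a rewrite:

    import nltk

    # Toy stand-in for a simplified-English grammar and lexicon (invented for illustration).
    SIMPLIFIED_GRAMMAR = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N | Det N PP
    VP -> V NP | V NP PP
    PP -> P NP
    Det -> 'the' | 'a'
    N  -> 'mechanic' | 'valve' | 'panel'
    V  -> 'closes' | 'removes'
    P  -> 'on'
    """)
    PARSER = nltk.ChartParser(SIMPLIFIED_GRAMMAR)

    def check_sentence(sentence, max_parses=1):
        """Return 'ok', or flag the sentence for a rewrite."""
        tokens = sentence.lower().split()
        try:
            trees = list(PARSER.parse(tokens))
        except ValueError:              # a word outside the restricted lexicon
            return "rewrite: does not parse"
        if not trees:
            return "rewrite: does not parse"
        if len(trees) > max_parses:
            return "rewrite: too ambiguous"
        return "ok"

    for s in ["the mechanic closes the valve",
              "the mechanic closes the valve on the panel",   # PP-attachment ambiguity
              "mechanic shall ensure valve closure"]:          # outside the lexicon
        print(s, "->", check_sentence(s))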

And that's how a 1980s AI program is still in use today.


Is it open source? Why keep it for themselves? Were you doing semantic parsing? What parsing techniques were you using? Were you inspired by a particular linguistic formal grammar?


The fifth generation computers didn't "run Prolog"; they were completely new concurrent constraint logic/dataflow hybrid architectures that used Prolog-like Horn clause notation for end-user programming. The US Navy implemented the same approach successfully around the same time, but you don't hear about that as much.


Source? How can I query this?


Because of strategic challenges, Reusable Scalable Intelligent Systems will be developed by 2025 with the following characteristics:

• Interactively acquire information from video, Web pages, hologlasses (electronic glasses with holographic-like overlays), online databases, sensors, articles, human speech and gestures, etc.

• Real-time integration of massive pervasively inconsistent information

• Self-informative in the sense of knowing its own goals, plans, history, provenance of its information and having relevant information about its own strengths and weaknesses.

• Close human interaction using hologlasses for secure mobile communication.

• No closed-form algorithmic solution is possible to implement the above capabilities

• Reusable so that advances in one area can readily be used elsewhere without having to start over from scratch.

• Scalable in all important dimensions meaning that there are no hard barriers to continual improvement in the above areas, i.e., system performance continually significantly improves.

A large project (analogous to Manhattan and Apollo) is required to meet strategic challenges for Intelligent Systems and S5G (Secure 5G).

See the following for an outline of technology involved:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3428114


Why do you think such intelligent systems are even feasible? Why do you assume they will be developed? And how is this any different from today's social networks? Either it already exists, or there is a significant AI portion that is completely unsubstantiated at this point.


Strategic competition will be crucial because once one side shows that it is possible, others will quickly follow.

Do you see any reason that the technology outlined in the article cannot be implemented by 2025?


All the AI technology in the article is promissory technology, i.e. it doesn't exist except by analogy to human intelligence and by specious claims that it'll be better than what has come before.

What if all AI tech that has come before sucks because there is a fundamental limit to what algorithms can do, and that limit is much less (infinitely less) than what human intelligence can do?

There is a fundamental assumption behind all AI research that the human mind can be simulated by a Turing machine, and no one has verified that assumption. AI research is just floating on the materialistic bias that the mind is reducible to the matter in the brain. We could very well have a supernatural soul, and thus the materialistic bias is completely wrong.

Thus, I don't see any reason the technology can be implemented by any date.


Agreed. A Turing machine operates in a discrete countable state space, whereas the human brain requires real numbers for a complete state description. Cantor showed (with the diagonal argument) that real numbers are uncountable - so there are (infinitely many!) real numbers that are unreachable using a Turing machine. My suspicion is that AGI, consciousness, perhaps even the supernatural soul you allude to, can be found only in this unreachable state space. There exists a Cardinality Barrier!


Why would the brain require real numbers for a complete state description? Physical reality is discrete and finite, as far as we know.


Every single neuron has many levels of excitation with different ramp-up, cool-down, and refractory periods, just to name a few non-discrete variables. That's not accounting for the rest of the chemical soup and mayhem happening in the nervous system. Thinking that the nervous system is discrete and finite is very, very reductionist and wrong, perhaps like comparing a novel and an alphabet ("why would you need an infinite space to describe all the possible novels? There are only 30 letters in the alphabet").


Exactly this. To be more precise with your analogy: given a finite alphabet and a finite length for any novel, the set of all novels is in fact countable (and computable) - but when novels can have (countably) infinite length (or are written using an infinite alphabet?), the set of novels is uncountable (and indeed by construction corresponds to the reals, if my understanding is correct).


If the set of all novels is an infinite subset of all finite symbol strings, it may not be computable. This is because the set of all halting programs is not computable, even though each halting program is a finite symbol string. So, we could have a set of novels that enumerates the halting programs (best sellers, those!), and since this subset of novels is not computable, the set of all novels is not computable.


Ah - but though a subset (bestsellers/halting programs) may not be computable (decidable) as you say, this does not preclude all novels/finite programs from being enumerated. Taking the alphabet as just the uppercase letters, I can list all novels as A, B, C.. AA, AB, AC... ... and hence eventually reach any novel you may supply. But this tells me nothing of whether they are bestsellers (or halting programs)!
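For anyone who wants to see that listing concretely, here is a tiny sketch (details mine): every finite string over a finite alphabet turns up at some finite position when enumerated by length and then alphabetically, regardless of whether we can decide if it is a bestseller or a halting program.

    from itertools import count, product

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def all_novels():
        """Yield A, B, ..., Z, AA, AB, ...: every finite string eventually appears."""
        for length in count(1):
            for letters in product(ALPHABET, repeat=length):
                yield "".join(letters)

    def position_of(novel):
        """1-based position of a given string in the enumeration."""
        for i, s in enumerate(all_novels(), start=1):
            if s == novel:
                return i

    print(position_of("HAL"))   # a finite position, even though "bestseller-ness" is undecided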


Yes, of course, enumerating all symbol strings is computable.

What I mean is that the novels, as a whole, correctly label the halting machines (an unreasonable assumption, to be sure). That would prevent the novels from being computable.

If real novels somehow exhibit an equivalent characteristic, then that would prevent real novels from being computable. Perhaps the logical consistency of good novels would be such a characteristic. We probably cannot depend on novels being perfectly consistent, but they are still consistent to a greater degree than we could achieve by randomly generating bitstrings.

The objection would be that a finite set of axioms can generate consistent novels, so there is also some other criterion that is necessary to make the novels uncomputable. I would say this second criterion is novelty, defined as an increase in Kolmogorov complexity.

Using a single set of axioms to enumerate an infinite set of novels will hit a novelty plateau, as per the proof of uncomputability of Kolmogorov complexity.

So, based on these two criteria, one can derive a hand wavy argument that novels in general are uncomputable.


It is? Is this at the quantum level? To be honest my understanding of quantum physics is cursory at best, though I had thought that the notion of a 'clockwork universe' had been otherwise discredited.


Maybe the quantum waveform probabilities are irrational numbers. That's the closest thing I know of that might not be finite and discrete. I'm including fractions in my 'finite and discrete' definition.

My understanding is quantum physics disproves the 'clockwork universe' because it is probabilistic instead of deterministic. Nothing to do with using real numbers.

Also, even if the universe made use of real numbers, the universe can still be computable in an asymptotic sense, which is still inadequate to violate the law of information conservation.

Violation of said law requires essentially non-mathematical entities, such as halting oracles and non-stochastic processes (neither deterministic, random, nor a combination thereof). None of which are dealt with in the STEM fields.


A Reusable Scalable Intelligent System (e.g. for pain management or dementia management) is not a human equivalent.

You didn't give any reason why you think that technology in the preprint cannot be implemented by 2025.


To justify the feasibility of the MORI system, the preprint draws numerous analogies to human intellectual capabilities, like adaptability to a changing information environment and dealing with inconsistent information. Humans can certainly handle these scenarios, but why assume such intellectual capabilities are computable? We have great difficulty emulating any sort of human intellectual ability with computation, with a near-zero success rate outside of very narrow and highly consistent domains. I don't see any reason why our success rate with AI tech will increase when the domain becomes broad, changes rapidly, and is inconsistent.


What is your definition of intelligent? Because I see evidence of intelligent machines all around us.


The machines are intelligent just like a doll is a baby. AI is merely grown up make believe.

I would define intelligence as the ability to increase net information in a closed system. Something no algorithm can do, per the conservation of information.


I find it odd that people get most excited when thoughts of AI are aimed towards education in the classroom, as the research hinted. I've always thought the most exciting thing about AI would be making robots that can cover the work needed so all human beings get to focus on what's meaningful for them. The interest hinted at in the article makes me question where people's priorities are when it comes to the next generation getting to live life.


What if the work needed is the work human beings want to focus on because it’s meaningful to them?

When you have AIs that can do arbitrary work that humans can do, what prevents other humans from simply cutting out the humans who want to do that work and handing it to AI?


What if the human mind is not computable? Why does no one test this hypothesis instead of throwing billions of dollars and our brightest minds against an unsubstantiated hypothesis? Why are we so unscientific in testing assumptions when it comes to AI? It is not difficult. I've thought of tests myself. But the closest I've seen in academic literature is Penrose's microtubules and silly hypercomputation. Nothing with empirical tests. I blame materialistic bias, since if materialism is true then the human mind must be a computation. But materialism does not need to be true in order to have empirical tests of whether the mind is computable.


I don't follow your logic. You're saying that if the human mind isn't computable, all our AI research is a waste?

What if weak AI systems are extremely useful? What if we don't need to mimic the human mind to create intelligent systems? What if machine intelligence is very different than human intelligence?

We may never build a replica of a human brain. I think it would be absurdly lucky for the Turing model to be the correct one for understanding the human mind since it was developed without much knowledge of how the human mind works. But is that the only way for AI to be successful in your opinion? I don't understand that perspective.


The basic premise behind AI is to capture human intellectual capabilities with computation. If that's not the goal, then it is just algorithmic research by another name. And if we're talking spandrels that result from AI, why not look for spandrels while researching something that is feasible?


Can you prove to yourself that your every behavior isn't the result of a complex chain of mathematical equations? I know I can't.


How do you know you can't? It certainly seems possible to test, at any rate.

Additionally, can you prove the contrary, that every behavior is the result of an equation? If you cannot, why make a hard assumption either way? What does that gain us?


The fact that you can ask the question is, paradoxically, the answer. Your mind is computed by your brain.


How do you know the mind is computed by the brain? Maybe the brain is just an antenna for the mind.


Then why is the mind damaged in weird but predictable ways whenever the "antenna" is damaged?

Or simply, why do people behave differently when they are drunk? I mean, it's only their body that is drunk, not their soul (or wherever the mind supposedly resides), so how can alcohol have an impact on the mind?


Isn't the answer to these sorts of questions pretty trivial? I don't understand why people think such questions are defeaters for the mind != brain hypothesis.

Why does bending my TV antenna have predictable effects on the signal I receive? Why doesn't this imply my TV signal is produced by the antenna instead of received by the antenna?

Answer these questions, then apply your answers to your questions.


When you fix the TV antenna, the movie continues with the original plot. When you get sober, you may not remember what you did when you were drunk.

The act of forgetting is itself interesting. How is the information removed from the immaterial mind, and why do things like getting enough sleep have an impact on how much the immaterial mind remembers?


Why do you get a bad recording if the camera lens is smudged, if the camera is not the lens? Why do you forget stuff more easily if you don't write it down, if your brain is not your notebook?

I still fail to see how these sorts of arguments show the mind must be physical. We have many examples of interfaces, where damaging the interface can impact the transmission, but does not thereby entail the interface is the transmission.

On the other hand, why do amnesia victims sometimes regain their memories, and even more(!), after brain damage if the mind is just the brain? Why can people live normal, even high performing lives, while missing most of their brain if the mind is just the brain? How can we explain out of body experiences, where the person learns information they cannot have learned any other way, if the mind is just the brain? How can consciousness arise from non sentient matter? How can we think about immaterial, abstract concepts if mind = brain?

In my opinion, the arguments for mind = brain largely depend on logical fallacies, as explained above. And there are also phenomena that are very difficult, or even impossible, to explain if mind = brain is true. Additionally, I haven't gotten into this, but there are a number of thought experiments that indicate mind = brain is logically incoherent. So, the most parsimonious explanation is that mind != brain.


Let's not leave the tangible positivistic discourse for the alluring sirens' call of metaphysics.


What if metaphysics is tangible?

E.g. here's an experiment I ran with an EEG. Seems pretty tangible.

https://mindmatters.ai/2019/12/playing-tetris-shows-that-tru...


Tangible metaphysics is an oxymoron by definition of metaphysics (something beyond/above the physical, i.e. the tangible). If it's tangible, it's not metaphysics. If you can subject it to the scientific method, it's not metaphysics. Your EEG experiment is nowhere near metaphysical; it's as positivistic as it gets - you very crudely measured cognitive load using EEG. Well, of course, if you consider psychology to be metaphysical, you are only partly right - Freud, for example, can be neither validated nor invalidated. Skinner or Pavlov, however... and anything published in respectable journals for the last 30 or 40 years.


Why can't the metaphysical interact with the physical and be detected that way? Most of science today is like this: indirect observations of reality we cannot perceive with the naked senses. If the brain is the metaphysical mind's antenna, then my EEG experiment is observing the mind's operations through the waves it generates in the brain. What remains is to determine whether those waves can emerge purely within the brain, or whether the waves are beamed in from elsewhere, i.e. the mind. Analogous to looking at my TV antenna and determining whether all the TV channels are being generated by the antenna alone, or whether the antenna is receiving them from elsewhere.

I honestly don't understand why these sorts of ideas receive so much friction. I cannot think of logically coherent objections to them. They seem very scientifically testable to me.


> Why can't the metaphysical interact with the physical and be detected that way?

Because then it stops being metaphysical and becomes physical.

Now, as to your central problem: consciousness being beamed to the brain. The reason it gets so much friction is that it flies against a massive amount of tangible evidence and research showing that the brain generates consciousness. It also falls to Occam's razor very easily (like the idea of the ether medium did - a PhD in electrical engineering should know this). And at its core it is basically the same as a "soul" - an intangible, immeasurable, unknown entity with scientifically unknowable properties.

In order to bring it out of the realm of metaphysics (matters of souls, virtues, etc. - read Thomas Aquinas and continue back towards antiquity via Augustine to better understand what metaphysics is and how it is different from scientific knowledge), you have to propose both a full hypothesis on how this "brain as a receiver" works, and a set of reproducible experiments to show your theory to be true.

Jesus Christ, I never thought that I, a person with no degree, would ever be explaining the basics of epistemology and the scientific method to a PhD on the internet...


I'd be very curious to know your evidence that the brain generates consciousness.


I'd like to know your definition of consciousness.


A necessary property is 'aboutness'. Consciousness is always about something else, or perhaps about itself as in the case of self consciousness. No material object has any inherent reference to anything else. For example: the words in this comment are nothing but pixels on a screen. However, we interpret the words to refer to the content of the idea I'm trying to communicate to you.


One of the panelists (B. Chandrasekaran at Ohio State University) was my dad's PhD advisor (my dad is Ashok Goel at Georgia Tech). Pretty cool to come across his name on HN!


AI is going to be huge, no doubt. However, in my opinion there will likely be some costly mistakes made before humans can reap its full benefits. We have been seeing a lot of AI developments, but in reality they haven't brought us as many meaningful changes as we had expected. In general our daily lives still remain pretty much the same as before. Our civilization has never experienced significant AI impacts at a large scale, so mistakes may be hard to avoid, and they will serve as lessons for later generations not to repeat the same errors.

I have noticed that human emotions and intelligence seem to be at odds with each other. Sometimes they are even a trade-off: the increase of one may lead to the decrease of the other. If we look around, humans today have the most advanced technologies in history, but are our lives really better compared to people's lives in the past? Materially, certainly yes, because material goods are products directly produced by technology, but mentally and emotionally it could arguably be worse.

AI and tech keep getting better every day, but humans have to work more, with longer hours and higher stress. We all thought the machines were supposed to help us humans, but it's actually the other way around. We work tirelessly day and night in order to keep making those machines better and more advanced, but in return our lives have not seen many meaningful improvements, and are arguably even worse than before in some areas. Individually our personal abilities have limits and naturally evolve very slowly, but the power of AI machines is potentially unlimited and growing at an even faster rate than Moore's law. We seem to be collectively working to make machines much better than us while we remain relatively the same individually. Are technologies actually enslaving us?

We keep buying things that don't really serve us much. We have a lot of stuff now, but it doesn't mean much. If something breaks, meh, we'll just get another one. It's just another item and it will get shipped here tomorrow. We didn't have as much in the past, but every little thing carried much greater value. Even the simplest thing could fascinate us and bring us joy.

We humans today already operate based on rules and algorithms dictated by the machines. We still don't know how our brains function organically (memory, consciousness, etc.), but in the quest to make AI human-like, we have created AI neural networks to simulate our brains. The danger is that even though we still don't know how our real brains function, we have now turned around and claimed that the human brain works in a similar way, under the same principles as an AI neural network. We are enforcing AI rules onto ourselves.

This is a dangerous assumption to make simply because AI does not have emotions. Once we begin to operate strictly under these rules and principles that are dictated by AI, we would soon lose the attributes and characteristics of what made us human. Our emotional spectrum may become increasingly narrow.

TV shows and movies are an example, as they are a form of storytelling that has the biggest influence on us at the emotional level. It's no coincidence that "Seinfeld" and "Friends" are still the two best TV shows today. Many of the movies considered the best were also made a while ago. Despite the most advanced technologies, why is it that today we can't seem to tell stories that bring out the same level of emotional response and intensity as before? They all seem to lack the genuineness and inspiration that the previous generation once had.

Is it because AI does not understand human emotions, so its algorithms cannot accurately factor that into consideration? One can say that today humans are the ones who write those algorithms, so maybe we can add in components to account for that? But just like the example above, if we don't even understand how our brain works, how can we make the machine accurately reflect us? In the future, when machines are supposed to learn and write all the code by themselves without human intervention, what would likely happen then? Would we still retain the ability to even understand that code? Would it be possible that humans may slowly evolve into machines? In trying to make those machines become like us, we may instead become like machines.


The "dark ages of AI" is a meme, like the Dunning-Kruger effect.


This time it is different.^{tm}

More seriously, we are almost about to

i) Generate a good book using a short intro.

ii) Generate a meaningful video using a few photos and a basic text scenario.

Which brings us closer to generating movies on demand (say, in 2025), and then good luck to people claiming that the current progress in AI is a bubble.


We’re nowhere near a “good book” (we can’t even do a good 3 paragraphs reliably), nor a “meaningful video.”

You’re confusing a thousand monkeys (transformers) with one human intelligence (AGI). That giant leap hasn’t been made.


Have you seen the tables in Appendix E (starts p. 16) of the Transformer-XL paper? I think they're pretty good.

https://arxiv.org/pdf/1901.02860.pdf


Quote from that paper:

> The Battle of Austerlitz was the decisive French victory against Napoleon

It didn't even catch that Napoleon was the leader of the French as described in the source snippet. And this was when it just generated text of similar size as the input. Based on just that paper I highly doubt that this method will generalize to creating entire books.


"he was still in conflict with Emperor Napoleon, the French Republic’s king"


My point was that the text lacked coherence; that quote only makes it worse. If it can't even keep coherence for a single page, how would it manage for a hundred?


by improving metrics over time

see, e.g., https://gluebenchmark.com/leaderboard/


Oh fuck, I am banned here from getting points. Time to avoid selling my own profile to whoever sniffs on HN users.


You're not banned here.


Glad to find they used something from my research. Don't want to disclose more, was flagged too much.


I guess you are not from the field.

Edit: I was right, he is an engineer and not a researcher who is active in the field.


Please don't make personal dismissals. That's not in the spirit of this site, and we already had to ask you about this: https://news.ycombinator.com/item?id=21615955.

Instead, if you know more, share some of what you know. Then we all can learn something.

https://news.ycombinator.com/newsguidelines.html


Especially since I do have a background in AI and machine learning.


[flagged]


Care to link the papers that makes you so sure that we will soon be able to generate whole coherent books? As far as we know you could just be an enthusiastic college student who knows nothing.


[flagged]


"You're confusing X with Y" uses a personal pronoun but I don't think I'd call it a personal attack—it's a common phrase in spoken English. Maybe "That's confusing X with Y" would be a modicum more polite, but only a modicum.

Please stop these low-information, bilious comments now. It's not contributing positively to sniff at others' lack of credentials and tout yourself. Experts are welcome here if they want to share their expertise, but just being dismissive isn't a good thing for any HN user to do, expert or no.


You talk as if you have information to share, why don't you share it?


If you think A and I think B, I am allowed to say B without proving it to you. You can say that my claim is unsupported but by insulting me in a parallel thread you learn nothing. Be open-minded and assume that what you see in open is not everything primarily due to different compute capabilities available to different actors.


[flagged]


Above, AlexCoventry mentioned a paper. If you would like to find more, go to Google Scholar and find all papers which cite that paper. This would be just a part of the whole story since bigger datasets and models are not in open. The progress rate in this field is incredible.


Quote from that paper:

> The Battle of Austerlitz was the decisive French victory against Napoleon

It didn't even catch that Napoleon was the leader of the French. And this was when it just generated text of similar size as the input. Based on just that paper I highly doubt that this method will generalize to creating entire books.


AI winters happen because people keep thinking the effort required to achieve some improvement is linear instead of exponential.


I mean, even if we were to get nothing else out of the current AI boom, it'd be hard to call it a bubble on the scale of the 80s crash.



