
Let’s call it machine learning, the AI term is just so far from what it actually is





He's talking about control theory, and other kinds of optimisation systems too. I think AI is a fine blanket term for all of that stuff.

Rodney Brooks is the Godfather of Out of Control Theory.

FAST, CHEAP AND OUT OF CONTROL: A ROBOT INVASION OF THE SOLAR SYSTEM:

https://people.csail.mit.edu/brooks/papers/fast-cheap.pdf


"Out-of-Control Theory" is a great name, love it =) thanks for the link, although I guess the idea never really took off.

It certainly did take off: it's called "Subsumption Architecture", and Rodney Brooks started iRobot, whose Roomba is based on those ideas.

https://en.wikipedia.org/wiki/Subsumption_architecture

Subsumption architecture is a reactive robotic architecture heavily associated with behavior-based robotics which was very popular in the 1980s and 90s. The term was introduced by Rodney Brooks and colleagues in 1986.[1][2][3] Subsumption has been widely influential in autonomous robotics and elsewhere in real-time AI.

https://en.wikipedia.org/wiki/IRobot

iRobot Corporation is an American technology company that designs and builds consumer robots. It was founded in 1990 by three members of MIT's Artificial Intelligence Lab, who designed robots for space exploration and military defense.[2] The company's products include a range of autonomous home vacuum cleaners (Roomba), floor moppers (Braava), and other autonomous cleaning devices.[3]
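For a flavor of what subsumption-style control looks like, here's a very rough Python sketch (my own illustration, not iRobot's code): a fixed-priority loop in which a reflex layer overrides a default wander layer. It's a simplification of Brooks's actual suppression/inhibition wiring, and the sensor names and commands are made up.

    import random

    def avoid_obstacle(sensors):
        # Reflex layer: back off and turn if we bumped into something.
        if sensors["bump"]:
            return ("reverse_and_turn", random.choice([-90, 90]))
        return None  # nothing to do; defer to the next layer

    def wander(sensors):
        # Default layer: drive forward, occasionally veering a little.
        return ("forward", random.choice([-15, 0, 0, 0, 15]))

    LAYERS = [avoid_obstacle, wander]  # highest priority first

    def step(sensors):
        # The first layer with something to say overrides the rest.
        for behavior in LAYERS:
            command = behavior(sensors)
            if command is not None:
                return command

    print(step({"bump": False}))  # wandering
    print(step({"bump": True}))   # reflex takes over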


Okay, interesting, but I meant nobody actually ended up sending thousands of little robots to other planets. No doubt the research led to some nice things.

Edit: the direct sensory-action coupling idea makes sense from a control perspective (fast interaction loops can compensate for chaotic dynamics in the environment), but we now know that brains, for instance, don't work that way. I wonder how that perspective has changed in robotics since the 90s, do you know?


About four years before Rodney Brooks proposed Subsumption Architecture, some Terrapin Logo hackers from the MIT AI Lab wrote a proposal for the military to use totally out-of-control Logo Turtles in combat, in an article they published on October 1, 1982 in ACM SIGART Bulletin Issue 82, pp. 23–25:

https://dl.acm.org/doi/10.1145/1056602.1056608

https://donhopkins.com/home/TurtlesAndDefense.pdf

>TURTLES AND DEFENSE

>Introduction

>At Terrapin, we feel that our two main products, the Terrapin Turtle ®, and the Terrapin Logo Language for the Apple II, bring together the fields of robotics and AI to provide hours of entertainment for the whole family. We are sure that an enlightened application of our products can uniquely impact the electronic battlefield of the future. [...]

>Guidance

>The Terrapin Turtle ®, like many missile systems in use today, is wire-guided. It has the wire-guided missile's robustness with respect to ECM, and, unlike beam-riding missiles, or most active-homing systems, it has no radar signature to invite enemy missiles to home in on it or its launch platform. However, the Turtle does not suffer from that bugaboo of wire-guided missiles, i.e., the lack of a fire-and-forget capability.

>Often ground troops are reluctant to use wire-guided antitank weapons because of the need for line-of-sight contact with the target until interception is accomplished. The Turtle requires no such human guidance; once the computer controlling it has been programmed, the Turtle performs its mission without the need of human intervention. Ground troops are left free to scramble for cover. [...]

>Because the Terrapin Turtle ® is computer-controlled, military data processing technicians can write arbitrarily baroque programs that will cause it to do pretty much unpredictable things. Even if an enemy had access to the programs that guided a Turtle Task Team ® , it is quite likely that they would find them impossible to understand, especially if they were written in ADA. In addition, with judicious use of the Turtle's touch sensors, one could, theoretically, program a large group of turtles to simulate Brownian motion. The enemy would hardly attempt to predict the paths of some 10,000 turtles bumping into each other more or less randomly on their way to performing their mission. Furthermore, we believe that the spectacle would have a demoralizing effect on enemy ground troops. [...]

>Munitions

>The Terrapin Turtle ® does not currently incorporate any munitions, but even civilian versions have a downward-defense capability. The Turtle can be programmed to attempt to run over enemy forces on recognizing them, and by raising and lowering its pen at about 10 cycles per second, puncture them to death.

>Turtles can easily be programmed to push objects in a preferred direction. Given this capability, one can easily envision a Turtle discreetly nudging a hand grenade into an enemy camp, and then accelerating quickly away. With the development of ever smaller fission devices, it does not seem unlikely that the Turtle could be used for delivery of tactical nuclear weapons. [...]


See why today's hackers aren't real hackers? Where are the mischievous hackers hacking Roombas to raise and lower a pen and scrawl dirty messages on their owners' clean floors? Instead what we get is an ELIZA clone that speaks like a Roomba sucked all the soul out of the entire universe.

This is too great. Thanks for the link!

There will never be, and can never be, "artificial intelligence". (The creation of consciousness is impossible.)

It's a fun/interesting device in science fiction, just as golems (animated beings) are in folk tales. But it's complete nonsense to talk about it as a possibility in the real world, so yes, 'machine learning' is a far, far better label for this powerful and interesting domain.


I'll happily engage in specifics if you provide an argument for your position. Here's mine (which is ironically self-defeating but has a grain of truth): single-sentence theories about reality are probably wrong.

I just went back and added a parenthetical statement after my first sentence before seeing this reply (on refresh).

> The creation of consciousness is impossible.

That's where I'd start my argument.

Machines can 'learn', given iterative training and some form of memory, but they cannot think or understand. That requires consciousness, and the idea that consciousness can be emergent (which, as I understand it, is what the 'AI' argument rests on) has never been shown. It is an unproven fantasy.


I don't see why consciousness is necessary for thinking.

I would define thinking as the ability to evaluate propositions and evidence and arrive at some potentially useful plan or hypothesis.

I see no reason why a machine could not do this, but without any awareness ("consciousness") of itself doing it.

I also see no fundamental obstacle to true machine awareness either, but given your other responses, we can just disagree on that.


And I guess I'd start my retort by saying that it can't be impossible, because here we are talking about it =P

No, you don't have any proof for the creation of consciousness, including human consciousness (which is what I understand you are referring to).

In my view, and in the view of major religions (Hinduism, Buddhism, etc) plus various philosophers, consciousness is eternal and the only real thing in the universe. All else is illusion.

You don't have to accept that view but you do have to prove that consciousness can be created. An 'existence proof' is not sufficient because existence does not necessarily imply creation.


The Buddha did not teach that only consciousness is real. He called such a view speculative, similar to a belief that only the material world is real. The Buddhist teaching is, essentially, that reality is real and ever-changing, and that consciousness is merely a phenomenon of that reality.

Cheers, I should have been more precise in my wording there. I should have referred to some (Idealist) schools of thought in Buddhism & Hinduism, such as the Yogachara school in Buddhism.

Particularly with Hinduism, its embrace of various philosophies is very broad and includes strains of Materialism, which appears to be the other person's viewpoint, so again, I should have been more careful with my wording.


Look, I don't want to get into the weeds of this because personally I don't think it's relevant to the issue of intelligence, but here's a list of things I think are evident about consciousness:

1. People have different kinds of conscious experience (just talk to other humans to get the picture).

2. Consciousness varies, and can be present or not-present at any given moment (sleep, death, hallucinogenic drugs, anaesthesia).

3. Many things don't have the properties of consciousness that I attribute to my subjective experience (rocks, maybe lifeforms that don't have nerve cells, lots of unknowns here).

Given this, it's obvious that consciousness can be created from non-consciousness: you need merely have sex and wait nine months. Add to that the fact that humans weren't a thing a million years ago, for instance, and you have to conclude that it's possible for an optimisation system (natural selection) to produce consciousness eventually.


Your responses indicate (at least to me) that you are, philosophically, a Materialist or a Physicalist. That's fine; I accept that's a philosophy of existence that one can hold (even though I personally find it sterile and nihilistic). However, many, like me, do not subscribe to such a philosophy. We can avoid any argument between people who hold different philosophies but still want to discuss machine learning productively by using that term, one we can all agree on. But if materialists insist on 'artificial intelligence', they are pushing their unproven theories, and I would say fantasies, on the rest of us, and they expose a divergent agenda where none need exist, when we could all just talk about what we agree we already have: machine learning.

If you find it sterile and nihilistic that's on you, friend =)

I think pragmatic thinking, not metaphysics, is what will ultimately lead to progress in AI. You haven't engaged with the actual content of my arguments at all - from that perspective, who's the one talking about fantasies really?

Edit: in case I give the mistaken impression that I'm angry - I'm not, thank you for your time. I find it very useful to talk to people with radically different world views to me.


Every scientist has to be a materialist; it comes from interacting with the world as it is. Most of them then keep their materialism in their personal life, but unfortunately some come home from work, stop thinking critically, and embrace dualism or some other unprovable fantasy. If you want people engaging with the world and having correct ideas, you need to tolerate materialism, because that's how that happens. That it offends your religious worldview is unrelated.

> Every scientist has to be a materialist

Quite obviously not true (and demonstrably so: many great scientists have not been materialists). Science is a method, not a philosophical view of the world.

> it comes from interacting with the world as it is

That the world is purely material is unproven (and cannot be proven).

You are simply propounding a materialist philosophy here. As I've said before, it's fine to hold that philosophy. It's not fine to dogmatically push that view as 'reality' and then build on it to push fantasies about 'artificial intelligence'. Again, we avoid all of this philosophical debate if we simply stick to the term 'machine learning'.


I'm not sure generic terms like "artificial intelligence", which describe every enemy in a video game, are bound by a particular philosophy.

You can start to refine your incorrect thinking on this by decomposing these associated characteristics and terms rather than conflating all of them.

Thinking, understanding, self-aware, awake, integrated sensory stream, alive, emotional, visual, sensory, autonomy, adaptiveness, etc.


This is a category error, and basically a non-sequitur to the discussion. Whether or not LLMs are conscious is not the topic of discussion (and not a position I've seen anyone seriously advocate).

> There will never be, and can never be, "artificial intelligence".

And machines can't learn.


Can machines fly by themselves?

No

I like using AI or generative AI. Each term we use signifies the era of the technology. When I was starting out, expert systems were the old thing, AI was a taboo phrase that you avoided at all costs if you wanted funding, and ML was the term of the day. Then came the deep learning era. And now we are in the generative AI era - and also an AI spring, which makes the term AI appropriate (while the spring lasts).

No.

AI is what we've called far simpler things for far longer. It's what random users know it as. It's what marketers sell it as. It's what academics have used for decades.

You can try to make all of them change the term they use, or just understand its meaning.

If you do succeed, then brace for a new generation fighting you for calling it learning and coming up with yet another new term that everyone should use.



