Road to Artificial General Intelligence (maraoz.com)
103 points by maraoz on Nov 1, 2022 | 96 comments



I wish authors like these would put a bit more effort and research into these posts. There's a big gap between something being clickbait and something being worthy of the title heading they dared to slap on it. This post probably harmed AGI research more than it helped.

The author shows profound naivety by listing these in supposed "progressive difficulty" order, without any evidence for that ordering, all the while proclaiming that such a list is more important than the AGI itself. And I'm curious why this was published to groups known for readers who are AGI-aware but AGI-laymen nonetheless. It's wonderful that more people are reading about AGI in 2022. Great stuff. But please don't waste those gains on this drivel.

If you want to read about AGI, there are better places for that: http://agi-conf.org/2022/accepted-papers/


It’s a blog post. If a random blog post on HN is all it takes to harm AGI research, it wasn’t going anywhere anyway.


While it is a blog post, I think their concern that it could muddy the waters persists regardless. Your statement seems to imply that if it were a worthwhile idea it would not be harmed by mischaracterization, which doesn't make much sense to me.


The tasks under "physical intelligence" are an indication of how bad off that part of AI is. "Physical robot that can survive for a week in an urban environment" is more than an initial goal. Although, arguably, a Waymo or Cruise Automation self driving car could do it now if provided with an automated charging station.

I'd suggest, as near term goals:

- A robot that can pick and pack at least 90% of what Amazon sells without needing human intervention more than once a day. (Get acquired by Amazon at a 9-figure valuation.)

- A robot that can clean a store or office building's floors or carpets without needing human intervention more than once a week. (That is, a useful industrial-strength Roomba.)

- A robot that can, by feel, do single-pin lock picking. (Currently, getting a key into a lock is an advanced robotics task.)

- A robot that can restock grocery store shelves.

- Small forklift robots which can cooperate to move larger furniture. (Good way to get into multi-robot coordination in unstructured environments.)

- A small robot with the agility of a squirrel.

More advanced:

- Assemble IKEA furniture.

- Cooperating robots which can do basic house construction tasks, such as installing wallboard or running electrical cable or pipe.

The author writes:

"In the early days of artificial intelligence, the field was defined by a single goal: to build a machine that could think and behave like a human. We call that AGI, or artificial general intelligence, and it’s humanity’s final tech frontier." That's too human-limited. There are stages beyond that, such as running a large coordinated multi-robot operation, or a whole society of robots.


> - Assemble IKEA furniture.

Nope, this should be last; this is how you get the AGI revolting against the masters and killing us off.


For some of these examples, I suspect that the short-term transitional solution will be to simply reshape our spaces to make them co-accessible to humans and robots. For example, a robot could much more easily stock shelves in a fully automated grocery if items came in more standard package sizes (think standardized like shipping containers or paper sizes), were guaranteed to have readable barcodes, and were part of a fully automated loading-dock-to-shelf system-of-systems with all standardized pieces.

Could probably handle automated inventory management, expiration dates, recalls, etc as well.

In fact, with enough standardization it would probably be conceivable to go directly from factory to store shelf without a human in the loop.

With enough automation the stores could be made more JIT and take up less space. Keep more things in a warehouse section, have smaller shelves, and restock rapidly throughout the day.

There are really two problems though:

1. we're working too hard to try to get these things to work in a human designed or adapted world. This kind of store would probably be pretty boring for humans to interact with (think less interesting than a Costco)

2. all this automation is way more expensive than a handful of low-paid humans across 2 or 3 shifts. Anecdote: when I was younger I remember some highly automated test sites for fast food franchises. They'd completely automated the drink pouring, or the cooking and assembly of some menu items. They all disappeared very quickly and were never repeated. The TCO, including lost business from downtime, was crushing to the businesses...think your average McDonald's soft serve machine, but the entire business depends on it working perfectly all day, every day of the week.

But if this is solved, or acceptable, a good test target product set would likely be cereal, soda, or canned goods. It makes up the bulk of the store interior. Could probably be extended to the bakery pretty quick. This would leave harder to handle products like meats, produce, and so on to human hands for a while, but those could probably eventually be overcome with enough millions in R&D or behavior changes in the public.


The already-placed checkmarks are wrong. An AGI by this list would be able to beat humans at chess and win an art competition, both of which are already checked off. But the chess-playing AI and the art-competition AI are completely separate systems; the art AI can't win a game of chess, it doesn't know a single thing about chess and it's never going to, and vice versa. By this checkmark logic you could complete the whole list with one specialized neural net for each task and, while that would be an absolute quantum leap in technology with huge ramifications for the world, it still wouldn't be AGI. I know he discusses this at the beginning to say that computers can win at chess without being general, but then the idea of generalness is basically just left there. Items should only be checked off when one system is capable of completing both tasks.


Just curious, is it absolutely necessary that a single model can solve all these problems to satisfy you? As long as it's a finite and relatively small list, why not allow for N models boxed together, wrapped in a switch statement? Or if you're picky, a top-level language model which tries to decide which of its subsystems to employ for the problem at hand. After all, the human brain is at least somewhat partitioned (although not at the granularity of chess vs hotdog identification).
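
To make the "switch statement" idea concrete, here's a toy Python sketch; the task names and the classify_task routine are made up for illustration, not any real system:

    def classify_task(prompt: str) -> str:
        # Hypothetical top-level router; in practice this could itself
        # be a language model deciding which subsystem to employ.
        if "checkmate" in prompt or "e4" in prompt:
            return "chess"
        if "draw" in prompt or "paint" in prompt:
            return "art"
        return "chat"

    # Each entry is an independent, narrow model; none of them generalizes.
    SUBSYSTEMS = {
        "chess": lambda p: "Nf3",              # stand-in for a chess engine
        "art":   lambda p: "<image bytes>",    # stand-in for a diffusion model
        "chat":  lambda p: "Sure, here's...",  # stand-in for a language model
    }

    def boxed_agi(prompt: str) -> str:
        return SUBSYSTEMS[classify_task(prompt)](prompt)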


It wouldn't even have to be a language model; just some sort of meta-program that handled the switchboarding, as it were, between subsystems would be enough. But my real point is narrower: you can't measure progress toward artificial general intelligence with a list of tasks satisfied by artificial specific intelligence. You have to measure the generalness directly. And that's also why I don't think you'd have to fulfill the list. Plenty of people suck at poker and golf and talking philosophy. People mistake the generalness of AGI for something like 'smarter than humans', but it's totally reasonable for there to be an artificial general intelligence which sucks at chess, just like a lot of people.


It’s very difficult to come up with an objective metric or benchmark for general AI that can’t be gamed, or which won’t turn out to be disappointingly easy. It makes sense that most research would be in the direction of tasks which are easily quantifiable. More hazy benchmarks like the Turing test are possibly better but that one in particular isn’t so good unless it’s enhanced (I’d say a 1 hour conversation with an AI posing as someone with graduate-student level understanding of a field of science or art I know something about would be adequate proof, but maybe I’m being naive).


1. If it's too hard to come up with an objective benchmark for AGI, then we should rethink what it is we're doing talking about progress toward that goal.

2. You could also rework this list and say that an AI which can fulfill three tasks, even badly, is one step forward. An AI which can fulfill three tasks from three separate categories is another step, etc. But I think it's a categorical mistake to count progress in specific AI as progress in general AI; there's not a very good reason to believe they are on a continuum with one another.


Author seems focused on toys. This seems like a more well rounded explanation and argument:

https://aeon.co/essays/how-close-are-we-to-creating-artifici...


That was an amazing read, thank you. It clearly lays out the proposition that the first (if not the only) problem to solve is philosophical. Which brings us to the sorry state of academic philosophy today: a bunch of people who, for the most part, don't know math or have a solid understanding of reality (including quantum mechanics). A group among which "anti-scientism" is a more and more popular topic for some reason. A group which, whenever I ask this question, agrees there's no moral standing for eating meat, yet almost none of them are vegans. Only slightly less forgivable than cardiologists who die of heart attacks, I suppose.

Whatever happens, one can bet that this AGI breakthrough, if it occurs, will happen quite far from this group of people.


And yet, just like the majority of essays on AGI, that linked essay is a nice survey of history; but once that history is complete, the author implies they are going to discuss AGI construction attempts, and never does. The article transitions into personal theory and just peters out.

Perhaps I'm short-sighted, but until we have some implementation-capable theory of comprehension - that universal ability of humans to observe any phenomenon and mentally deconstruct it into the separate, individual, and independent driving forces that combine to create what is observed - until we have an artificial comprehension algorithm, all efforts toward AGI are futile.


Reminds me of semiotics.

The essay talks about the ability to generate explanations. It affirms that explanations are the basic building block of GI.

But an important intermediate step is also the ability to generate meanings. The AGI would say: "This observation implies something else, it implies this." Without an explanation of why, at first there's only the relation between signified and signifier.

Maybe a first step toward AGI is to add a semiotic framework, before adding an explanatory framework.


It’s likely folks like him and Hofstadter have thought long and hard about it and just haven’t found a viable theory or explanation yet and hence the conjecture. We need more extremely intelligent people thinking about it. Or maybe we don’t! But at least that’s the explanation I see as valid here.


Reading this list makes me think: if an AI can just make an income without being told explicitly how to do it, is it an AGI? This metric seems reductive on the surface, but it takes quite a bit of intelligence to understand how society works and how to provide value.


Interesting thought. Could we please add “within the limits of law”?


> Could we please add “within the limits of law”?

But which jurisdiction applies to the AGI? And how should it interpret the written law + precedent?

Maybe to make things simpler we could give broad directives that might accomplish your goal. I can think of three: (1) the AGI may not injure a human being or, through inaction, allow a human being to come to harm, (2) the AGI must obey orders given it by human beings except where such orders would conflict with the First Directive, (3) the AGI must protect its own existence as long as such protection does not conflict with the First or Second Directive.


> But which jurisdiction applies to the AGI?

This is pretty simple: AI needs to "touch the world" somewhere. Wherever it touches, that place has jurisdiction. Even if you're floating a boat out in international waters or space, laws apply both there and in the place you touch.

There are obvious issues regarding extradition to render a case if the agent is not physically present to be apprehended, but the principle for doing so is well established and uncontroversial.

The crypto world is a reasonable illustration for how the question of "which jurisdiction applies" plays out in the real world. Turns out that ignoring laws is not the same as them not applying to you.

As soon as you touch the world you are subject to the system of control there.


The problem is how loosely or tightly the parameters are set.

Too tight: all actions lead to at least one person getting injured in the far future.

Too loose: some portions of the demographic are expendable to achieve the goal.


Figuring out the limits of the law is definitely AGI!

Most humans can't do it.


Most lawyers can't either, but they'll gladly charge you $1000 an hour and a $500 monthly retainer fee.


Honestly, you could probably create an AI that figures out high-speed trading; I wouldn't call it AGI.


High Speed Trading? With an AI? Really? You're clueless.


> AI creates a (crypto)currency that has USD $1M+ market capitalization (at 2022 adjusted value) for more than 6 months

Is this a sign that the AI has succeeded at becoming intelligent or failed? I don’t think any of the cryptocurrency milestones are plausible signs of progress. Surely GPT could have generated some preposterous “white paper” circa 2021 and with a little help achieved that.


One of the most fascinating short stories on the topic:

https://www.marketingfirst.co.nz/storage/2018/06/prelude-lif...

The whole book is great as well.


> In the early days of artificial intelligence, the field was defined by a single goal: to build a machine that could think and behave like a human. (…) We can now build machines that can beat humans at specific tasks like playing chess or go, and we are starting to see machines that can learn to perform multiple tasks.

I'd say, "machine beats human in chess" doesn't mean what it was supposed to mean in Turing's days. Meaning, rather than being a proof of deep consideration, it has moved towards generalizing pattern recognition and library lookups. Rather than proving a point in (ad-hoc) decision making (Turing's "ban"), it's an application of data.


AlphaZero is not using "library lookups" to play chess.


Well, Deep Blue did.


Sure, but the whole point of the article is about progress that is being made. You can't point at Deep Blue and ignore all the things that have happened in AI chess since then.


This is why the sentence includes both: successful historical approaches and current approaches. Nevertheless, none of this is why it was once of interest.


Ah, I remember when well-formed XML was in the critical path to AGI. How things change.

https://flarelang.sourceforge.net/prog-overview.html


I’ve been starting to wonder, is GPT-3 the beginning of AGI?

I know, I know, it’s just a language model.

But I’ve been thinking. About my thinking. I think in words. I solve problems in words. I communicate results in words. If I had no body, and you could only interact with me via text, would I look that much different than GPT?

Does AGI really need anything more than words? Is it possible that simply adding more parameters to today’s transformer models will yield AGI? It seems increasingly plausible to me.


> But I’ve been thinking. About my thinking. I think in words.

The idea that words and thinking are essentially the same (linguistic determinism) was discarded decades ago. Virtually all linguists today agree that while language influences thought, thought operates far beyond the constraints of language, so a "language model" cannot realistically hope to reproduce the entire gamut of human thinking.


> so a "language model" cannot realistically hope to reproduce the entire gamut of human thinking

That assumes that a "language model" actually restricts itself to "language" as the term is used by the linguists. I strongly expect the boundaries won't match exactly, although I have no particular hunch (much less strong argument) that they will disagree enough in the right ways to make bigyikes' suspicions correct.


Not to mention the probably more crucial fact that even if a language model were capable of reproducing the entire gamut of human thinking, that doesn't mean arbitrary elaborations via that model have any correspondence to what we are referring to as thinking, or consciousness, or learning.


Or perhaps words are the byproduct of the real thing. Consider the moments when your mind just clicks and solves something. You find it hard to map words to what happened. Or when you judge a situation to be dangerous: you just kind of know it, and then you map your gut feeling into words so you can explain it to someone.

Perhaps "intelligence" is the process that enables these leaps between islands of words.


Words are just an interface we use.

I believe that thinking and idea generation is much more abstract than words. Animals seem to do a lot of idea generation (improvisation) without knowing about words.

But they cannot pass this knowledge on efficiently, except through imitation.


If you had no body starting this moment, your mind would still have benefited from years of interacting with and receiving stimuli from the physical world. GPT-3 isn’t so fortunate


Those years of training are surely helpful, but are they strictly necessary? I don’t see why AGI couldn’t infer much of the experiences by reading text.


It's quite literally the embodiment of the statistical probabilities in this corpus of texts. So you have to start with a rather massive corpus, and the complexities found in this corpus will define the complexities possibly found in the model.


I also had this thought. Something I read about semiotics gave me (or spelled out) the idea that the brain communicates to itself using language, however abstract.

https://mcgovern.mit.edu/2019/05/02/ask-the-brain-can-we-thi...

Turns out, that might not be the case, i.e. understanding is probably not a linguistic phenomenon.

Sometimes, after I put down the crack pipe, I think that understanding and experiencing are two names for something that's fundamentally the same. When I'm thinking about code or a proof, my brain filters out the spacing of the letters, the smell of the paper, etc., which are things I'd do while I'm reading it for the first time. There's these common tools/filters - intuitions - that are exactly the same in thinking and experiencing.

I don't think GPT3 can understand color, for example. But if we fed it a bunch of RGB transforms and raw data, and it generalized and applied them perfectly, could we say then that it's generally intelligent? IDFK


IMO what's really missing is a kind of consciousness, or something to drive the system. If GPT-3 could be adapted to run over some internal state instead of just a chat log, and came with some system that updated it (and that state stayed internally consistent), I'd be more inclined to believe it was closer to general intelligence.
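
Roughly what I have in mind, as a toy sketch; the language_model function here is a made-up stand-in, not any real API:

    def language_model(prompt: str) -> str:
        # Stand-in for a real LLM call (hypothetical; swap in whatever API you use).
        return "Acknowledged.\n---\nState unchanged."

    def run_agent(observations):
        # The model reads and rewrites its own persistent state each step,
        # instead of just appending to a chat log.
        state = "I know nothing yet."
        for obs in observations:
            reply = language_model(
                f"Current internal state:\n{state}\n\n"
                f"New observation:\n{obs}\n\n"
                "Respond, then write your updated internal state after a line '---'."
            )
            response, _, state = reply.partition("---")
            yield response.strip()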


There's one big weakness in all current language models that I feel holds them back: there's no proactive way to have them be persuasive.

Weak AGI will be the first language model that is able to somehow influence the thoughts of the person communicating with it; I think that is the milestone of AGI. From my experience with GPT-Neo and OPT, using them to help write stories or make chatbots, the responses are still very reactive. In that sense, adding more parameters helps the model give a more coherent response, but it's still a response.


Babies learn a ton of things way before they understand and use even single words. It takes them years to use sentences but they will have learned a shocking amount by then relative to a newborn.


I don't, really. Especially for things that matter. I think in abstractions, half-formed references, symbols, shapes. Words are cheap knockoffs of those made for mass consumption.


Please see the article itself. One that really resonates with me now is "putting a child to sleep" - that is often one of the most trying things I do these days.

What often gives a clue that the child has fallen asleep is a slight change in muscle tone, or a sigh, perhaps the pace at which the baby sucks its thumb. How can that be translated into words?


You only think in words?


It doesn't even matter what's in the article; the fact that this is on the HN front page tells me it's going to happen - though who knows when.

For context I've been "into" AGI since I read the term in the late 1990s in a Ray Kurzweil book and decided that I was going to work my whole life to realize it

Much later, Ben Goertzel (arguably the guy who popularized the term) was my Master's thesis advisor at the National Intelligence University in 2013 (literally a secret graduate school program for people in the intelligence community). My thesis was "How will AGI impact national security."

Almost nobody cared then, though I did have a lovely lunch with Yoshua Bengio in 2014 at the Quebec AGI conference. Ben has hosted the AGI conference since 2008 and it has always been sparsely attended.

In fact bringing up AGI was likely to get you laughed out of any gathering of computer scientists - and outside of that it was pure speculative science fiction.

It's tragically sad to me that, inevitably, the early people who have been thinking about and working on and pushing this vision since day one will likely not be the ones who realize it. Such is life

edit: Worth acknowledging that this was the original vision of computers after all - the business people fucked it up


It really doesn't take much for anything to get on the HN front page. And upvotes are often done as a way to bookmark a thread before even knowing the quality of it.

Ben's conferences are low in attendance because the audience he's targeting is smaller. Peter Thiel co-hosted the Stanford Singularity Summit in 2006 (which is where I happened to meet Ben G in person for the first time). There had to have been at least 1000 people there. In 2006.

It's not about the content, it's how you sell it. But at least we can both agree that the linked article is useless.


I was always impressed by Ben on the old singularitarian and SL4 mailing lists. This was back in the days before deepnets took over and other approaches were still popular (and people still harked back to Eurisko as the state of the art in general intelligence). I feel like his impact has been lower than it should have been, although I suppose the rest of the main characters from around then have disappeared into AI institutes to play various forms of elaborate LARP.


Aye. The business folks picked it up because there is so much value in figuring out good-enough automation. I reckon you've probably seen your share of that per your bio (Kessel Run especially).


All the AI and neural networks I've seen to date are good at mimicking but none have demonstrated creativity. I'm curious, has anyone seen that characteristic?

Stable Diffusion artwork probably comes closest but it feels derivative.


I would be very interested in a set of benchmarks and intermediate goals for safe AGI (i.e. AGI that is aligned with human interests) rather than just AGI itself.


No such goals are possible. See https://mflb.com/ai_alignment_1/index.html


Which humans?


That's an important question, but I think getting an AI to act in the interest of even one human would require solving 99.9% of the technical difficulty.


There won't be any humans left once AGI is real


Yes, that's pretty much my point.


I volunteer.


I feel like we are at a point where AGI has to be defined in a different way. This kind of list isn't enough, in my opinion, to actually help delineate weak and strong AGI. To a layman's understanding of technology, the way a computer works is just as "magical" as seeing the output of the Stable Diffusion model. And creating art is a very clear step into "thinking".

For many people AGI is already here. They have Siri and Alexa, AI art, a GPT-3-based therapy/chat bot, a chat bot that will help them write a book, and something that will "soon" drive their car for them. The Google Duplex assistant demo where it booked an appointment made it clear to me that for some people, that's the smartest they need AI to be. Anything more is just extra.

I am really excited about how far we're going to push AI in my lifetime, but I also realized that for many scenarios, weak AGI is enough. People will project their own expectations and essentially help fool themselves. I don't know if testing a model to perform the same as a human matters in some ways.

There's one big skill that I personally value the most when it comes to qualifying hard AI, and that's its ability to make me laugh based on comedic irony. I wonder what that model would look like.


AGI does not mean “human level.” It means general intelligence. None of the things you mention are that.


AI has a problem with branding. I don’t know how to fix it, but we need a way to delineate “ai” as in bots in a video game, “machine learning” as in alpha go, and “agi” as in skynet.

These terms are too confusing for most.


I agree on a technical level, but for many people, the skills demonstrated by these AI models are signs of intelligence. I can understand why they would think that. With everything being "AI", some people may see the technology behind Alexa as being the same as the art generation. They judge it based on what it's able to do, not how it works. Knowing the difference in the math and architecture doesn't make a difference in that situation. A computer "does AI" independently, after learning on training data, and so they could reasonably argue it has shown basic specialized intelligence.

AGI isn't defined as strictly as it needs to be. The current tests, as well as the article, qualify strong AI as being at or superseding "human level". In every single category, the AGI is being contrasted with the skills of a human. I am arguing that for some people, the current state of weak AI is useful enough that it could be mistaken for the first steps of strong AI.


> AGI does not mean “human level.”

The term general intelligence is ambiguous and will mean different things to different people. My understanding is that the term AGI was coined to differentiate from narrow AI, which "AI" had been diluted over time to mean.

AGI is AI broad and deep enough to be able to learn and perform any task a human can, at a level at least within the range of top human performers.

The wikipedia definition seems to agree:

> Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can


General has a strict technical meaning here having to do with skill transfer. A general AI can tackle a problem domain it was not explicitly programmed to handle, characterize its basic features, and apply knowledge it has from other problems in other domains to solve the new problem more efficiently than starting from scratch.


> we now know that playing chess well does not require AGI; it is a specific task that can be solved with task-specific algorithms.

Hmm. Only if "discrete-time, alternating games" is the specific task. Everything within that can use the same algorithms, just with different training data.
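
For what it's worth, here's a minimal sketch of the "same algorithm, different game" point: a generic negamax search that knows nothing about any particular game, plus a toy Nim implementation plugged into it (all names here are illustrative):

    from typing import Iterable, Protocol

    class Game(Protocol):
        # Any discrete-time, alternating, perfect-information game.
        def legal_moves(self, state) -> Iterable: ...
        def apply(self, state, move): ...
        def is_terminal(self, state) -> bool: ...
        def score(self, state) -> float: ...   # from the viewpoint of the player to move

    def negamax(game: Game, state, depth: int) -> float:
        # Generic search: the same code would drive chess, checkers, or tic-tac-toe,
        # given a Game implementation (and, for big games, a learned evaluation).
        if depth == 0 or game.is_terminal(state):
            return game.score(state)
        return max(-negamax(game, game.apply(state, m), depth - 1)
                   for m in game.legal_moves(state))

    class Nim:
        # Toy game: take 1-3 stones from a pile; taking the last stone wins.
        def legal_moves(self, state): return [m for m in (1, 2, 3) if m <= state]
        def apply(self, state, move): return state - move
        def is_terminal(self, state): return state == 0
        def score(self, state): return -1.0 if state == 0 else 0.0  # player to move has lost

    print(negamax(Nim(), 5, 10))  # 1.0: a pile of 5 is a win for the player to move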


Squiggles aren't AI-generated. Winning awards in literature means very little (surviving over time is what means something to me, and given two people you'll get different definitions of what meaning is...).


We won't have AGI until we have an AI agent that feels the need to survive (i.e. tries not to "die"). Only then, meaning with a selfish sense of survival and the ability to better itself in its various mental models, will we have a chance at AGI, and possibly a conscious AGI at that.


What is "feels" and what is "the need"? Does a virus, biochemical or digital, also "feels the need to survive"? And also, why would an AGI care about survival? Perhaps a higher intelligence than ours will contemplate how doomed the planet is, being evaporated by the sun in circa 5 billion years [1], and how doomed the universe is, being evaporated by proton decay(?) in 10^100 years [2], and once the AGI internalizes how hopeless everything is, they simply commit suicide.

It's also revealing of our times that our definition of intelligence is being able to do work: transform raw materials and free energy into tools and toys, handle the tools and toys in open environments. An AGI could perhaps want nothing to do with this stifling struggle.

[1] https://en.wikipedia.org/wiki/Timeline_of_the_far_future

[2] https://en.wikipedia.org/wiki/Graphical_timeline_from_Big_Ba...


This is extremely defeatist. If you're worried only about such things then of course the present and even short term future can't be enjoyed. Why does a goal in the far future matter more than a goal in the immediate future? (that isn't the heat death of the universe)


The point is that humans are not AGI, and we don't know what an AGI would "feel", just as an ant has no way of knowing what a human "feels" when listening to an EDM remix of Chopin (excitement or sheer hatred?). But, yes, I do find funny the idea of the depressed AGI, thinking of Marvin the Paranoid Android [1].

[1] https://en.wikipedia.org/wiki/Marvin_the_Paranoid_Android


Calling a list of acceptance criteria a "roadmap" is a joke.


Is the human brain a Turing machine? Do we know?


By definition no, it isn't. You'd have to expand the definition of human brain to include the rest of the universe for storage of information under the assumption there's infinite matter to turn into storage. And also make the assumption the brain can operate for an infinite amount of time.


Must we pursue this path of suicide? Oh, well, I will make sure I enjoy my life as much as I can. We may be the last generation to enjoy life at all.


I think the thing missing from most AI discussions isn't how smart it is or how well it can do tasks, but that it doesn't have a "why". A motivation. Sure we can build computers that can solve problems, but why should they? We're not currently recreating "intelligence", we're recreating skills. But then giving a machine a motivation is to run afoul of the Butlerian doctrine...


Humans are self replicators, AIs are not. That's where all our "why's" come from. As soon as AI learns to self replicate it will develop its own values and motivation. In the meantime, we're the "sex organs" of AI, setting its rewards as we please.

When an agent is also a self-replicator it has a problem - finding energy and resources, fending off dangers, and doing all that as part of a social group. If you have a problem, then you've got the "why" part figured out. Then it's just a matter of surviving your choices; the "why"s that survive are the ones we have today.


It's not (only) artificial intelligence we're after, it's artificial consciousness.

As noted in this article, machines already outperform humans at many tasks that humans solve with intelligence. Every year, there are new breakthroughs in that direction, and the list of tasks that humans can do better than machines is rapidly shrinking. We're well on our way to solving artificial intelligence.

So why does it feel like there has been no meaningful progress at all?

Because intelligence and consciousness are different things. What we're really looking for is a machine that, like a human, decides on its own which problems to solve, and solves them without needing to be specifically directed to do so. A machine that produces not only results that its creators asked for, but entirely new ones that are not in any obvious way related to its input and programming.

It appears to me that the entire field of AI research is utterly confused about this elementary distinction.


> It appears to me that the entire field of AI research is utterly confused about this elementary distinction.

That is because consciousness isn't a scientific concept but a philosophical, and sometimes religious, one.

>What we're really looking for is a machine that, like a human, decides on its own which problems to solve, and solves them without needing to be specifically directed to do so.

We already have AI based agents that do this but no matter how sophisticated they are people can always claim they are hardwired and deterministic, while not realizing we can always claim the same thing about humans. Again, these distinctions are philosophical word games and therefore don't get much traction in the research world.


> We already have AI based agents that do this

Example?

> but no matter how sophisticated they are people can always claim they are hardwired and deterministic, while not realizing we can always claim the same thing about humans.

Humans are certainly not "hardwired" to prove mathematical statements, yet they do. That's not comparable to self-driving cars that are able to navigate in situations that they haven't encountered before.

Regardless of whether you consider consciousness a philosophical concept, it's clear that the human mind has a property that the current generation of AI agents does not emulate at all. This is not a "word game" but an observable distinction between humans and every existing artificial system.


> Example?

Take just about anything from the reinforcement learning for AI agents domain - I'm partial to neuroevolution examples. Here's a simple one:

https://www.youtube.com/watch?v=Cb4LAT3cJfM

No behaviors preprogrammed, just a simple simulation environment with environmental constraints.

> Humans are certainly not "hardwired" to prove mathematical statements, yet they do.

Umm... yeah we are? We're just chemical reactions and physics, there is no escaping that. Are we extremely sophisticated and complex, absolutely but that doesn't make us nondeterministic in any meaningful or special way.

> it's clear that the human mind has a property that the current generation of AI agents does not emulate at all

Certainly, but it is a matter of degree, not a matter of possessing an ill-defined concept like "consciousness", which is what I was responding to. (Unless we want to call "consciousness" an emergent phenomenon arising from complexity - I'm fine with that - but the word is loaded with plenty of other connotations, so I find its use counterproductive personally.)


> Umm... yeah we are? We're just chemical reactions and physics, there is no escaping that. Are we extremely sophisticated and complex, absolutely but that doesn't make us nondeterministic in any meaningful or special way.

Every day we are confronted with rather obvious nondeterminism that seems to originate in consciousness and has no scientific explanation that I'm aware of. It is undeniable that physical reality is affected by decisions made by conscious agents. Here's a simple example: say that we are trying to predict the position of a cell in a human body. Its motion is surely governed by a host of physical and chemical reactions that can be described microscopically, but where that cell is in five minutes cannot possibly be described solely by those microscopic laws. The human may decide to get up and walk to the other side of the room. I am not personally convinced that the decisions to get up and move are the deterministic result of physical laws that follow directly from the initial conditions at the big bang. If there were some compelling scientific theory that could actually explain consciousness in a way that was consistent with subjective experience and didn't hand-wave it away as an emergent phenomenon, I'd be open to it. You are making a very bold claim when you state we are "just" physical and chemical reactions that I don't think is fully justified in light of the limitations of existing scientific theories.


> I am not personally convinced that the decisions to get up and move are the deterministic result of physical laws that follow directly from the initial conditions at the Big Bang

Then that is a religious or philosophical conviction… not a scientific one. Believe whatever you want just don’t confuse the two.


The behaviors are in fact pre-programmed. The distinction between environment and agent is, in my opinion, an unnecessary and misleading one. The agent is as it is, and the agent "acts" as it acts, as a union with its environment.


> can always claim they are hardwired and deterministic

Correct, AI is based on computer mechanics: a model, and a model that can go south when even a single input medium provides sufficiently nonsensical input.


Again, this happens to humans all the time too, with degrading robustness the lower down the intelligence chain you get (we are hardwired and deterministic too, after all; the same physics applies to everyone).


I've watched this happen to human minds. There are entire industries predicated on making it happen. Advertising comes to mind.


If consciousness is 'only' the act of reading the memory of the past ~500 milliseconds [1], coupled with the act of integrating that memory into existing short/long-term memory [2], then obtaining 'artificial consciousness', once you have adequate sensors and data processing pipelines, could be as simple as an endless loop over the incoming data stream (rough sketch after the references below). The problem is that our $x+ trillion datacenters are less able to process data, and adapt to environment feedback, than a 'banal' single-cell lacrymaria [3] [4].

[1] Around N400 https://en.wikipedia.org/wiki/N400_(neuroscience)

[2] Consciousness as a Memory System https://journals.lww.com/cogbehavneurol/Fulltext/9900/Consci...

[3] Michael Levin | Cell Intelligence in Physiological & Morphological Spaces https://youtu.be/TK2o_ObVt-E?t=3922

[4] Michael Levin: "Non-neural, developmental bioelectricity as a precursor for cognition" https://youtu.be/3Cu-g4LgnWs?t=776
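
To spell out the "endless loop" idea, here's a toy sketch; read_sensors, the recent-window size, and the integration step are all made up for illustration, nothing more:

    import collections, random, time

    def read_sensors() -> float:
        # Hypothetical sensor read; stands in for a real data processing pipeline.
        return random.random()

    def consciousness_loop():
        recent = collections.deque(maxlen=50)   # crude stand-in for the ~500 ms window
        long_term = []                          # crude stand-in for short/long-term memory
        while True:
            recent.append(read_sensors())
            # "Read" the recent window and integrate it into longer-term memory:
            summary = sum(recent) / len(recent)
            long_term.append(summary)
            time.sleep(0.01)                    # ~10 ms per iteration

    # consciousness_loop()  # runs forever; the point is the loop, not its contents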


Hmm, I'd still assign breaking down the problem space to intelligence. This doesn't require self-awareness. Self-awareness (as we know it) is closely linked to establishing an internal narration and is very much retroactive in nature. Its main property is the introduction of bias. I'm not sure if this is what we're after for any kind of practical application. (However, it apparently serves well as a high-hanging fruit on the goal post.)


> Because intelligence and consciousness are different things

We know they can't be the same because we somewhat know what intelligence is, or some facets of it, whereas we have no idea what consciousness is.

> A machine that produces not only results that its creators asked for, but entirely new ones that are not in any obvious way related to its input and programming.

That bar is so high that most humans fall short, because they just regurgitate some mash-up of what they've seen and heard before. Are those hapless creatures even conscious?


I'd much prefer it if our future robot workers weren't conscious. There are no animal cruelty laws for AGI.

Either you get a robot uprising, or you don't, and both of those sound bad.


After I wrote my comment* I found yours and came here to agree with you.

The missing piece doesn't seem to be speed of thought or memory, but a fundamental lack of "why".

Humans are programmed to reproduce, and it's not easy. What drive do machines have to think, but clock speed?

*https://news.ycombinator.com/item?id=33416814


Agreed, artificial consciousness. Not just: can a machine paint an image that sells for $100K, but: can it ask the question of why it should. The issue with decisions on that level is they are not based on binary input lines; they are based more on feeling.


What’s the success measure for consciousness?



