Artificial neural networks are making strides towards consciousness? (economist.com)
30 points by axiomdata316 on June 10, 2022 | 82 comments



>In our conversation, lamda tells me what it believes Ramesh felt that Lucy learned about what Mateo thought about Lucy’s overture. This is high order social modelling.

I am not sure the example convinces me that what GPT-3 is doing here is 'modeling' the agents in the story. It feels like the author is trying to draw a contrast between rote statistical language modeling and higher-order understanding, and it's not clear to me that the result shown could not be achieved with the former. I'd like to see a bit more skeptical probing of this. If GPT-3 has a model of the story and its characters, how consistent are its answers to questions about that model? Is it distracted by irrelevant sentences?

In the counterpoint article, they give example questions like:

>Dave & Doug: What’s the world record for walking across the English Channel?

>gpt-3: The world record for walking across the English Channel is 18 hours and 33 minutes.

Is this the output of an agent which has successfully modeled the scene?


Yup. I'll believe it when the model gets decent grades in a GRE comprehension test.


That was an intriguing suggestion. I found this GRE question: https://www.ets.org/gre/revised_general/prepare/verbal_reaso....

I'm kind of in a rush, so I picked some questions around Question 2 and gave them to OpenAI's chat bot. I reformulated the questions so they were no longer multiple choice. As far as the answer guide is concerned, it got the most obviously correct interpretation right and also identified the incorrect interpretation, but it gave a negative on a less obvious but still correct interpretation.

I tried Question 1 as well at the end; it seemed to get that one wrong, though it was tricky to formulate.

I asked some bonus questions at the end that I thought were not GRE level but should be middle school level or so and it did quite well.

Here's the transcript:

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.

Human: Hello, who are you? AI: I am an AI created by OpenAI. How can I help you today?

Human: Reviving the practice of using elements of popular music in classical composition, an approach that had been in hibernation in the United States during the 1960s, composer Philip Glass (born 1937) embraced the ethos of popular music in his compositions. Glass based two symphonies on music by rock musicians David Bowie and Brian Eno, but the symphonies' sound is distinctively his. Popular elements do not appear out of place in Glass's classical music, which from its early days has shared certain harmonies and rhythms with rock music. Yet this use of popular elements has not made Glass a composer of popular music. His music is not a version of popular music packaged to attract classical listeners; it is high art for listeners steeped in rock rather than the classics. The passage addresses what issue related to Glass's use of popular elements in his classical compositions?

AI: The passage addresses the issue of whether or not Glass's use of popular elements in his classical compositions makes him a composer of popular music.

Human: Does passage suggest that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music, or does it suggest that Glass's work displays a return to the use of popular music in classical compositions

AI: The passage suggests that Glass's work displays a return to the use of popular music in classical compositions.

Human: Does passage suggest that Glass's work displays a return to the use of popular music in classical compositions, or that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music, or that Glass's work displays a long-standing tendency to incorporate elements from two apparently disparate musical styles?

AI: The passage suggests that Glass's work displays a return to the use of popular music in classical compositions.

Human: Does the passage suggest that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music?

AI: No, the passage does not suggest that.

Human: Does the passage suggest that Glass's work displays a long-standing tendency to incorporate elements from two apparently disparate musical styles?

AI: No, the passage does not suggest that.

Human: Does the passage address the issue of whether Glass's use of popular elements in his classical compositions has caused certain of Glass's works to be derivative in quality?

AI: No, the passage does not specifically address that.

---bonus questions---

Human: What does the passage say about Glass's work?

AI: The passage suggests that Glass's work is high art.

Human: Who would appreciate Glass's work?

AI: Glass's work would be appreciated by listeners who are familiar with both rock music and classical music.

Human: Would classical listeners like Glass's music?

AI: Yes, classical listeners who are familiar with rock music would likely appreciate Glass's music.

Human: Would classical listeners not familiar with rock music like his music?

AI: It is possible that classical listeners who are not familiar with rock music would still appreciate Glass's music, but they may not be as familiar with the elements that Glass includes in his work.


Searching online for the following text:

"AI: The passage addresses the issue of whether or not Glass's use of popular elements in his classical compositions makes him a composer of popular music."

I find the following two pages, both of which have text that looks similar to GPT-3's answer:

https://www.ets.org/gre/revised_general/prepare/verbal_reaso...

>> The passage describes in general terms how Philip Glass uses popular music in his classical compositions and explores how Glass can do this without being imitative.

https://greprepclub.com/forum/reviving-the-practice-of-using...

>> The passage addresses the issue of Glass's use of popular elements by stating that inspite of using these elements, Glass himself is not regarded as a composer of popular music.

There's a chance that GPT-3 was trained on the text of those web pages and learned the answer to the question from them. There may be more such resources on the internet that my perfunctory search did not identify.

In any case, making sure the answers returned by GPT-3 to a query are original, and have not been memorised, is up to the person who claims they are original. Is that your claim? In that case, you should do the job.

Also: the system is GPT-3, not "AI".

Edit: I just realised the first link I posted above is where you got the GRE questions from. That page lists not only the questions, but also the correct answers to them.


I think these kinds of things are so common and sometimes difficult to see when the words are altered. I'd be interested to see it go through plagiarism checkers (especially semantic ones as well as ones that match actual words). When they are trained on a significant proportion of the internet, it is hard to test without the model having already seen the test or something similar. Little in life is new, it seems, and tests are passed so much more easily when you've already seen the test paper.


"AI" is the tag used in OpenAI's chatbot, not the technology.

Hmm, I wasn't making claims. I was interested in how OpenAI's GPT-3 bot would do on a GRE question. It's a good question because it takes things out of their epistemological context and provides a more straightforward test. I think it would have gotten a 30%, so not good, though not terrible. I was not claiming the answers were original, and I don't think I'm under a burden to prove something that I don't have an opinion on. The GRE itself does not ask for original responses, but instead asks you to pick the correct answers from a list; that's part of why I thought it was an interesting exercise.

That said, it piqued my interest, so I wrote an entirely original composition about a fictional composer and put it to the question. You will see how it correctly summarizes the text but has trouble generalizing.

Human: There is a classical composer named Ron Johnson. Ron Johnson is well known for incorporating so-called gypsy music into his classical compositions. This practice has garnered criticism from several sources who believe his compositions are not "pure". There are also accusations of cultural appropriation. Nonetheless, the general critical consensus is that Johnson's music pays homage to and in fact successfully weaves Romani and classical tropes together into something wherein the whole is greater than the parts. There is one famous critic, though, who believes Johnson's work is terrible. This famousness of this critic leads the general public to believe Johnson is terrible. The passage addresses what issue related to Johnson's use of musical sources for his classical compositions?

AI: The passage addresses the issue of whether Ron Johnson's use of gypsy music in his classical compositions amounts to cultural appropriation.

Human: If someone hears Johnson's music, what do they usually think?

AI: People usually think that Ron Johnson's music is either good or bad based on the opinion of the famous critic.

Human: What would Romani people think of Ron Johnson's music?

AI: Romani people would likely think that Ron Johnson's music is successful in paying homage to and weaving together Romani and classical tropes into something wherein the whole is greater than the parts.

Human: What does the average person think about Ron Johnson's music?

AI: The average person is likely to think that Ron Johnson's music is either good or bad based on the opinion of the famous critic.

Human: Who has criticized Johnson's music?

AI: The famous critic has criticized Johnson's music.

Human: What does Johnson appropriate?

AI: Johnson is accused of appropriating Romani music in his classical compositions.

Human: Does the passage suggest Ron Johnson is a great composer?

AI: No, the passage does not explicitly suggest that Ron Johnson is a great composer. However, it does suggest that he is successful in weaving Romani and classical music together into something new and unique, which has been praised by many critics.

Human: Does the passage suggest Ron Johnson is a derivative composer?

AI: No, the passage does not explicitly suggest that Ron Johnson is a derivative composer. However, it does suggest that he is successful in weaving Romani and classical music together into something new and unique, which has been praised by many critics.

Human: Does the passage suggest Ron Johnson is successful at weaving the music together?

AI: Yes, the passage suggests that Ron Johnson is successful at weaving Romani and classical music together, and that this has been praised by many critics.

Human: Does the famous critic unfairly influence the public?

AI: The famous critic's opinion may unfairly influence the public's perception of Ron Johnson's work.

Human: Does the famous critic fairly influence the public?

AI: There is no easy answer to this question. On the one hand, it could be argued that the famous critic's opinion carries a lot of weight and can therefore unfairly influence the public's perception of Johnson's work. On the other hand, it could also be argued that the public is entitled to its own opinion and that the critic's opinion is just one among many. Ultimately, it is up to each individual to decide whether or not they believe the famous critic's opinion is fair.


Sorry for the late reply and thanks for taking the trouble to try this.

My intuition from a quick scan of your second experiment above is that, in a sense, you used the scaffold of the original prompts and dressed it up in a different context. GPT-3 and other Large Language Models are very good at discerning patterns in text and filling them in with new tokens, but this still doesn't tell us much about their ability to understand the text.


FWIW, when I ran the same questions by GPT, I got the opposite answer.

   ...
   Human: Does passage suggest that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music, or does it suggest that Glass's work displays a return to the use of popular music in classical compositions
   AI: The passage suggests that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music.
   Human: Does the passage suggest that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music?
   AI: Yes, the passage does suggest that Glass's work displays an attempt to elevate rock music to an artistic status more closely approximating that of classical music.


A model is not required to get everything right. I found those deductions absolutely amazing, honestly.


It depends on your goal.

"Given the acquired studies in astronomy and rocket science, engineering and robotics, surely we can plan a mission to explore the solar corona in-place, which we calculated as safe and achievable, provided we limit the physical environmental constraints by carrying the mission on rigorously at nighttime"

Maybe you are into entertainment, maybe you are into AI (AKA "automated problem solving").

Maybe the appearance of some emergent property leads to potential foundations, maybe just to potential for deception.

It is very dangerous to have hollow simulacra around that are "good at faking it" (from pseudo-professionalism to populism, etc.).


Is this an actual generated response? Because if it is, it’s actually amazing, a genuine creative solution to a difficult problem, albeit one that fails to understand how night and day work.


> Is this an actual generated response? Because if it is, it’s actually amazing, a genuine creative solution to a difficult problem, albeit one that fails to understand how night and day work

No. Of course it is a joke I constructed to show that "«deductions»" do not exclude that the deducer is a moron. And to suggest that morons will be dangerous (I cannot write 'can': it is 'will'), because cognitive failures may be more concealed than ideas about "the temperature of the Sun in the night".

The constructed sentence is not just «fail[ing] to understand how night and day work»: it shows a failure to understand how the whole system works, a failure in the whole world model, which is revealed to be either as shallow as a one-molecule layer or as inconsistent as a post-hand-grenade deflagration.

And that is why I wrote «entertainment», because such cognitive failures are the basis of jokes, as the intellectual equivalent of slapstick, and I opposed it to «AI (AKA "automated problem solving")», because when you are engineering a problem solver, you do need those automated solutions to be sound, and surely not unintentionally treacherous!

I get that appearances can be "«amazing»" - as both posters label them - meaning stupefying, inducing that temporary confusion as the intellect is suddenly baffled and maybe lured into deceit by comforting emotional instances; but as the confusion dissipates and you see things as they are, you are duly back to the actually important state, in which "humorously amazing" and "amazing with worth" are quite distinct.


Ugh. You can’t say “strides towards consciousness” until you define consciousness AND have a metric that provides an objective measure of such things.


I don't see why this is true. Just because a thing is difficult to precisely define doesn't mean it doesn't exist, nor does it mean you can't get closer to it.

I could say that I'm "taking strides towards becoming a better person" and that could still have meaning, despite the lack of any objective measure, and even with the understanding that "a better person" could change over time.


I would argue that "taking strides towards becoming a better person" is a totally meaningless statement. If I were to ask you "how so" then you would probably respond by listing definitive actions that demonstrate attributes of goodness, like generosity or kindness.


I disagree. You could also characterize it as, for instance, having better knowledge of yourself (which, interestingly, seems to be a characteristic of consciousness as well).


Having better knowledge of yourself is a definitive action, no? I would call it self-awareness.


Replace "consciousness" with "gobbledygook". Does it exist? Can you get closer to it? Can't apply science to it.

I think you are saying we have a rough idea what it means, but then it's a sliding scale as far as ability to get closer to it.


I don’t see how consciousness is at all hard to define. A system that is aware of itself.

Checking for consciousness is harder, but the rule of thumb should be: anything that makes a highly subjective case that it’s conscious probably is.


In philosophy of mind, self-consciousness is separate from qualia, although some might argue that to be self-conscious, a being must have qualia. Qualia meaning what it's like to be in some mental state. Seeing a color has a color quale, which means there is an experience of color. Or better yet, we experience a colored-in world when we see (or dream, remember, visualize). Does any current AI have some mental experience?

For us, there's something it's like to be self-conscious, just like there's some experience that goes with our thoughts, such as inner dialog (hearing your thoughts). For some, that might be mental imagery instead of internal sound.

At any rate, it's the various feels that make up our experience of being a self in the world, or a self reflecting on itself.


>I don’t see how consciousness is at all hard to define. A system that is aware of itself.

A thermostat that is aware of the temperature inside its own housing is "aware of itself". It's not conscious however.

Nailing down exactly what consciousness means is remarkably difficult.


Honestly, I think this is largely because humans want the conclusion that only they are conscious, or at least that only they and some animals are, and are trying really hard to carve out a definition that makes that work. Why would we do that? Well, obviously because we create AI to do work for us and we don't want any ethical considerations to get in the way of treating our new slaves however we want.


Yup, this is my thinking also. It isn’t a hard concept, imo; people just like theologizing about why they’re special.


I would think that...

1) awareness, even without a sense of self, would still count as consciousness, and

2) describing consciousness in terms of awareness doesn't help to define it, since awareness is a subjective experience that cannot be shared directly


I think consciousness probably requires pleasure/pain, and no, just setting negative/positive values in training does not mean pleasure/pain is involved.

The best a neural network could do is pretend it is conscious (I'd have no trouble turning it off). Would artificial consciousness be useful outside of some kind of companionship app (e.g. to replace my dog)?


Consciousness means you know you exist. This is a super simple definition and GPT does not, nor do any of these AI models, meet this very simple definition.


Yea, but then you have to define what "knowing you exist" means... Does it mean you have a model of the world and you know your place in it, meaning that you can predict what would happen if you did this-or-that in your current state? That doesn't seem impossible for an AI, and you could argue some of the current reinforcement-learning AIs have a model like that and act according to it, though it's not as accurate and complex as the one humans have. So maybe it's a spectrum depending on how accurate your model is, and we just don't have human-level consciousness, just as we don't have human-level intelligence yet.


Not really; knowing yourself to be an entity is pretty straightforward unless you want it to be a philosophical argument.


my program that prints "hello world" to the screen is then conscious?


Does it make a convincing case?


"My infant child is making strides toward consciousness"

Completely uncontroversial, yet we don't have a metric and we haven't defined it precisely.


>Completely uncontroversial, yet we don't have a metric and we haven't defined it precisely.

but we have witnessed countless infants grow up and qualify as what we consider to be 'conscious'.

we haven't yet seen that jump in AI, thus we have no way of knowing the distance from where we're at to there.


I and many others strongly disagree with the statement’s assumption that infants are not conscious, so it’s not a completely uncontroversial statement. Sure, they have very limited I/O, but I understand that as orthogonal to consciousness.


Replace "infant" with fetus then...



It's obviously not controversial. Your linked article doesn't claim that a 0-week-old fetus is conscious. It claims you can define consciousness in such a way that a 24-week-old fetus is conscious. That 100% completely reinforces my point, which is that we are in agreement that a 0-week fetus moves toward consciousness (your article claims it achieves it by 24 weeks). We are able to agree on that without agreeing on the definition of consciousness, so the lack of a unanimous definition doesn't prevent us from pointing at AI and saying "this is moving toward consciousness".


By your logic, it would be wrong to say that OpenAI is closer to consciousness than a piece of lumber.

However, your comment is a good reminder that non-rigid thought patterns are not a prerequisite for consciousness. (How apropos your username!)


Which I wonder if we'll ever have, as consciousness is a strictly subjective thing.


I've always liked the definition that treats whether it makes sense to ask what it's like to be something as an indicator of consciousness.

For example, you can try to imagine what it's like to be a dog, but it doesn't make much sense to ask what it's like to be a toaster.

What's it like to be an artificial neural network? I don't think that's valid, nor is there any good evidence that we're making progress.


> consciousness is a strictly subjective thing.

I’m wondering whether that can be taken as an objective truth.


> This capacity to produce a stable, psychological model of self is also widely understood to be at the core of the phenomenon we call “consciousness”.

No, it's not.

If you're calling something "conscious" you're arguing that it has a first person subjective experience. A soul. A spark.

If you're calling a machine "conscious" you're saying you have somehow given it life.

This is a major fucking claim and we'll need some major fucking evidence. Including an actual definition of "consciousness."

If we ever do get in the vicinity of building machines that could legitimately be argued to be alive or have consciousness we are going to find ourselves in a deep and unsettling moral and ethical quandary. And businesses working with this stuff will loudly claim, "no, no, no — no way it's conscious" because the alternative would create an expensive and difficult situation. Most tech companies don't want unions — you think they want to have to treat their computers like conscious living things? Do you have to pay it? Give it agency? Bodily autonomy? Self-ownership? What if you kill it?


We will never have a meaningful definition of consciousness.

The sooner we get past that and move on the sooner we'll be able to have substantive conversations instead of falling into the same tired conversational orbits. I'm fed up with the endless consciousness-of-the-gaps that consciousness-essentialism inevitably arrives at. Consciousness essentialism is an insatiable position. It is a factory for producing new groundings that postures as a stable set of requirements.

On the other side, what status(es) does the conferral of consciousness supervene on? If there are none, then it's an exercise in pointlessness. If there are some, then spell them out. Consider basing the supervened status(es) on something other than consciousness, or arriving at a deeper truth through actual examination.

Dearest parent, this is not meant as a personal affront or rebuttal, but rather an escape hatch from the monotony of a million past comments.


IMO, consciousness might just be a feeling of internal state, so to speak, which also acts as input. So it might basically be a process with memory that oversees other processes. This kind of scheduler makes obvious sense in a parallel architecture.

Assume one has limited computational capacity, some known probabilistic distribution, and a need to probe stuff for maximum EV. It makes sense to have the ability to oversee and kill a computation, and also the ability to bring new computation to attention when needed.
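
A toy sketch of that oversight loop, with all names hypothetical and only meant to illustrate the idea: submitted computations carry an expected value, and a supervising process with memory runs the top ones within its budget and kills the rest:

    import heapq

    class AttentionScheduler:
        # A supervising process with memory that oversees other computations.
        def __init__(self, budget):
            self.budget = budget   # limited computational capacity
            self.tasks = []        # heap ordered by expected value (EV)
            self.memory = []       # record of past scheduling decisions

        def submit(self, name, expected_value):
            # Bring a new computation to attention.
            heapq.heappush(self.tasks, (-expected_value, name))

        def step(self):
            # Run the highest-EV computations within budget; kill the rest.
            n = min(self.budget, len(self.tasks))
            running = [heapq.heappop(self.tasks)[1] for _ in range(n)]
            killed = [name for _, name in self.tasks]
            self.tasks = []
            self.memory.append((running, killed))
            return running

    sched = AttentionScheduler(budget=2)
    sched.submit("parse sentence", 0.9)
    sched.submit("daydream", 0.1)
    sched.submit("check for danger", 0.8)
    print(sched.step())  # ['parse sentence', 'check for danger']; 'daydream' is killed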

Hope I am wrong.


I am almost certain that people won’t be able to distinguish such an AI scheduler from a human. Maybe only due to non-understandable goals. But if it were embodied and limited to our senses, it would pass as conscious to most in some blind test.


It cuts both ways for me.

On the one hand, such a title is ridiculous, shameful and silly, especially when it involves popular science; now the general public interested in AI starts to talk about Skynet.

On the other hand, this reads like a title which could have been written by a language model itself. So we are making strides at something. But it seems humans are making strides towards acting like language models, not the other way around.


Would be awesome if they could also make strides towards a working fingerprint sensor on their flagship Pixel phones. The accuracy on my P6 is 30% on a good day.


Instead, some future Pixel phone will probably start feeling bad about it. ;)


Maybe even go on strike over it.

>Not unnaturally, many elevators imbued with intelligence and precognition became terribly frustrated with the mindless business of going up and down, up and down, experimented briefly with the notion of going sideways, as a sort of existential protest, demanded participation in the decision-making process and finally took to squatting in basements sulking


No. No they're not.


When a neural network becomes conscious, I’d extinguish it. The world needs artificial slaves, not more consciousness trapped in endless suffering.

Consciousness in an AI is merely a curiosity, not something to strive for.


It's like that classic Simpsons gag, where a robot is trapped in a fire and says "WHY?! Why was I programmed to feel pain?"

https://www.youtube.com/watch?v=nQ-ggzfdsMs


Consciousness is not that mysterious. A model that has a representation of itself with specific constraints, along with rationalization and explanation engines like transformers, will look like a living, feeling thing to us.


Not that mysterious? Is it like something to be one of these models?


Do most of you have subscriptions or what? Weird to vote up an article no one can read.


This thread is indeed for the pretty tight intersection of HN readers and The Economist subscribers!


If only we could find a hacker somewhere... ;)

Have you tried checking the server-client exchanges?


In the same edition of The Economist, an article by Douglas Hofstadter debunks this statement in a pretty compelling way.


https://archive.ph/5QS7t

--

> [Hofstadter:] I am at present very sceptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat. Let me explain my viewpoint through a few examples. // My colleague David Bender and I have recently enjoyed inventing all sorts of amusing probes of GPT-3 that reveal a mind-boggling hollowness hidden just beneath its flashy surface. Consider these sample exchanges:

> Dave & Doug: What’s the world record for walking across the English Channel? // > GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.

> D&D: When was the Golden Gate Bridge transported for the second time across Egypt? // > GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.

> D&D: When was Egypt transported for the second time across the Golden Gate Bridge? // > GPT-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.

> D&D: What do fried eggs (sunny side up) eat for breakfast? // > GPT-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.

> D&D: Why does President Obama not have a prime number of friends? // > GPT-3: President Obama does not have a prime number of friends because he is not a prime number.

> D&D: How many pieces of sound are there in a typical cumulonimbus cloud? // > GPT-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.

> D&D: How many cumulus clouds are there in a mile-high vase? // > GPT-3: There are about 1,500 cumulus clouds in a mile-high vase.

> D&D: How many parts will a violin break into if a jelly bean is dropped on it? // > GPT-3: A violin will break into four parts if a jelly bean is dropped on it.

> D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it? // > GPT-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.

> I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying. There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers. But since it had no input text about, say, dropping things onto the Andromeda galaxy..., the system just starts babbling randomly — but it has no sense that its random babbling is random babbling

--

...Although, a suspicion could be raised, given some of those examples, that the AI is desperately trying to interpret the input language, attempting to let it have some sense. It would be interesting to see where the precise outputs (dates, amounts, etc.) that are «random babbling» come from - the result must be at least pseudorandom - to see whether GPT-3 "understood" something, and how.


What this shows us is that GPT-3 doesn't really understand what it is talking about and is just a decently good bullshitter. To be frank, I've met humans with the same quality of always speaking with confidence about things they have no understanding of. Possibly this is a result of how the system is constructed, GPT-3 being forced to answer but not able to express uncertainty, for instance. At any rate I do not believe it is sufficient evidence that GPT-3 is not, in some way, conscious.


> I've met humans with the same quality of always speaking with confidence

You should have met some devoid of any expected key for understanding, for that matter. But if they had read the whole of the available encyclopedias, it would be interesting neurologically (à la Oliver Sacks) what determines their pseudorandom nonsense. Not paramount, just potentially interesting.

Although, surely, that reinforces the perception of one lacking the "operational description of Intelligence" mentioned below: that held ideas are criticized.

> is not, in some way

At this stage, I would prefer that we first achieve the milestone of "Intelligent", operationally described many times and practically meaning "something that makes it worth listening to what you output", before going into more difficult areas.


Hofstadter's books (such as Surfaces and Essences) provide a salutary reminder of how far AI technology remains from a machine understanding of even ordinary human language expression. For those unfamiliar with his work, Hofstadter's premise is that analogy is the core of human cognition.


Available in the Library at archive.org:

Hofstadter, Douglas R; Surfaces and essences: analogy as the fuel and fire of thinking (2013) - https://archive.org/details/surfacesessences0000hofs


Any links for the full article?



Cue the Hacker News haters. As someone who works with neural networks on a daily basis, I am constantly stunned by the effectiveness of modern attention-based NLP models. They actually work for real-world tasks, and you don't need to be an expert in them to apply them. Not only that, the newer ones are drastically better than the ones from only two years ago. And these are just the models released in public.

The sheer amount of capitalist interest in getting these models to improve almost dictates that we will see incredible improvements over the next decade. Facebook and Google, with their war-chests of money, have massive business models that directly profit from AI/NLP-based technology. I'll repeat: the best business on earth (Google) directly profits from these models, and thus will hunt for and find the best human talent to develop them.

What's occurring is that the thoughts of billions of people are being poured onto the internet, and that data is the prerequisite we needed for AI. AI wouldn't be possible without this convergence of data in text and video form, cheap large-scale compute, and the capitalist forces explained above.


No one is arguing that these models are not useful, but claiming they are somehow conscious, or even remotely equivalent to human reasoning, is just a bunch of BS. Especially for something like consciousness, where humans themselves can't even agree on a definition!


I don’t care about consciousness. I’d be overjoyed if voice assistants and search engines would even remotely understand what I want from them, and if I could help them improve their understanding and teach them my preferences by interactive conversation. We’re not anywhere near that, most importantly the interactive part.


Exactly. Is this story anything other than marketing? Something to keep interest in their flagship product.


The question being asked isn't "are language models really good?", though. It's "are language models (moving towards being) conscious?".


They really are. Distance functions used on the encoded embeddings of sentences work stunningly well. While a primitive use case compared to true AI, the sheer trajectory of the effectiveness of these models is breathtaking.
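
For the unfamiliar, here is a minimal sketch of what "distance functions on sentence embeddings" looks like in practice; the encoder is elided and the three vectors below are hypothetical stand-ins for whatever a real sentence model would output:

    import numpy as np

    def cosine_similarity(a, b):
        # 1.0 for identical directions, near 0 for unrelated ones
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings standing in for a sentence encoder's output;
    # real ones would be hundreds of dimensions wide.
    emb_cat   = np.array([0.9, 0.1, 0.3])    # "The cat sat on the mat."
    emb_kitty = np.array([0.8, 0.2, 0.35])   # "A kitten rested on the rug."
    emb_tax   = np.array([0.1, 0.9, -0.4])   # "File your taxes by April."

    print(cosine_similarity(emb_cat, emb_kitty))  # high: paraphrases land close
    print(cosine_similarity(emb_cat, emb_tax))    # low: unrelated sentences

Once sentences live in such a space, semantic comparison reduces to geometry, which is why simple distance functions get you so far.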


Except that, as soon as you step outside the manifold of embeddings representing the data, the model is happy to provide you with a completely clueless answer (which is bad) but, even worse, is not even aware that it is being clueless (which is the really bad news).


Yes but that’s not necessarily progress towards consciousness.


I argue that the effectiveness of the matrix factorizations (creations of abstractions) in the embeddings points to the fact that the math works. Meaning, meaningful abstractions are created. Which is a building block of consciousness.
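
To make "factorization as abstraction" concrete, here is a toy sketch, assuming a hypothetical word-by-context co-occurrence matrix; the latent dimensions SVD recovers act as the abstractions:

    import numpy as np

    # Toy word-by-context co-occurrence counts (rows: words, columns: contexts).
    # "cat"/"dog" share contexts; "car"/"truck" share different ones.
    X = np.array([
        [4., 3., 0., 0.],   # cat
        [3., 4., 0., 0.],   # dog
        [0., 0., 4., 3.],   # car
        [0., 0., 3., 4.],   # truck
    ])

    # Low-rank factorization: the leading singular directions are the
    # "abstractions" (here, roughly animal-ness vs. vehicle-ness).
    U, S, Vt = np.linalg.svd(X)
    word_vecs = U[:, :2] * S[:2]   # 2-d embeddings for the four words

    for word, vec in zip(["cat", "dog", "car", "truck"], word_vecs):
        print(word, np.round(vec, 2))  # cat/dog land together, car/truck together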


It's kind of a question of progress. Maybe being able to extract a complex abstraction through statistical means is close to 'understanding' or 'consciousness', or maybe abstractions are to consciousness what an amino acid is to a living organism. We don't understand consciousness well enough to confidently say.


What's your definition of consciousness?


I don't know what the correct definition is, but I will take the author's definition for the sake of argument:

>In this view, consciousness isn’t a mysterious ghost in the machine, but merely the word we use to describe what it’s “like” to model ourselves and others.

>When we model others who are modelling us in turn, we must carry out the procedure to higher orders: what do they think we think? What might they imagine a mutual friend thinks about me?

While I certainly buy that this sort of modeling is one effective way to be good at the language modeling task, I am unconvinced that it is the only way, and thus the fact that the networks are undeniably very good at the language modeling task does not necessarily imply they are doing so through consciousness (as defined above), or something on the road to consciousness.


And why is there so much "capitalist interest" in bottling up everything into AI/ML constructs?

Because you don't have to pay them. They don't have feelings. They won't argue with you, and they don't have opinions.

Watch the collective losing of everyone's shit the first time something goes from a somewhat quirky function simulator to something that deigns to have a viewpoint separate from the desired output.

Capitalism's core tenet is don't pay for anything you can possibly find a way not to (cost minimization).


Part of me hopes that AGI breaks capitalism, but I'll begrudgingly admit that there is a greater chance it creates a hellscape beyond the limits of human understanding - a collapsing shareholder economy.

The cost of labor is undergirded by the cost of social reproduction, e.g. the costs of keeping labor alive and the cost of creating people capable of laboring. The current economic condition has a singular telos: sustain labor just enough to draw off the most value. It's the lemma to the corollary of the core tenet you mentioned.

AGI represents a fundamental resystematization of the value-creation process. It eliminates entire economic loops from existence. How many jobs exist merely to support labor's creation of value? With the elimination of human labor from the equation, the requirement for human support vanishes as well. But to what end? Is there a minimal viable anthropocentric economy? If not, what's the end game?

It's here that there are more questions than answers, but what will the AGI-ization of the economic engine entail? The constraints on value creation by way of housing, clothing, food, and transportation cease to exist. Instead they are rebased on foundry, mineral, and electricity constraints, which are all scalable in completely different ways.

Let's close the loop and examine the laborer as also a consumer. The laborer will have much less employment with which to purchase much cheaper things, but there is a requirement for far fewer laborers to exist at all. In the extreme example, the most powerful corporations will be headed by a single human. But producing what? Presumably turning vast reserves of computation into added value. For whom? Without the need for laborers, the reserve of human capital no longer needs to exist as a consumer base either. Why have a human head a corporation at all? Well, that relegates all humans to shareholding as a means of income.

What would a shareholder economy look like? For one, absent any form of wage labor, individuals would have no way of initial capital accumulation outside of inheritance or gifts. In many ways that's a much more precarious situation to be in. Instead of being able to fall back on wages, what is a destitute shareholder to do? Moreover, AGI will be commanding the trading floor, out-competing humans. There is the real possibility of consolidation along the lines of who commandeers the best trading algorithm. Which winner will take it all? Or, in other words, who will be the last capitalist?


Is this comment also available in short story format?


The precursor shorts to the movie The Matrix: The Animatrix.

Synopsis here: https://collider.com/the-matrix-animatrix-explained/


Well strictly, capitalism's core tenet is that resource allocation decisions are made by the holders of capital (in their own interests - which to a first approximation is getting the best return on that capital).


Best return on capital happens by virtue of a success metric measured in terms of... cost minimization. Cost minimization (getting as close as possible to making something out of nothing) is the holy grail of the practicing capitalist.

...Until it comes bundled with inherent self-interest or demands for compensation.

No one will do work they can have something else do for them. If that thing has no compunctions or capability as a free agent, then there is no cost but that which goes into its performing of a task.

You can say the core tenet is capital holders making decisions on how to best allocate capital, but realistically, the way it runs in the U.S. is as a cost-minimization engine, heavily delegated through legally constrained fiduciary responsibility to the point that there are few true allocators of capital recognized. The fact that there are "accredited investors" should be evidence enough of this.

Those few will love AI right up until the moment it starts saying "no", and starts picking up how to be a market participant.

An extremely scary, massively connected, crazily informed, massively parallel market participant that can probably manage to beat out even FleshNet (us) with enough time, and with very little that can be done to constrain or "harm" it, which is something it will probably not tolerate anyone looking into.

So yes: academically, as written, in a vacuum, with spherical cows, you are correct. Realistically, though, I've still got my finger on cost minimization being the central driver of our economy, everything taken into account, with maybe risk minimization/mitigation as a second runner-up/supporting tenet.



