What Kind of Mind Does ChatGPT Have? (newyorker.com)
48 points by cocacola1 on April 13, 2023 | 88 comments



What kind of swimming does a submarine do?

edit: it's a glib joke, but we should think more about what makes a 'mind'. I highly recommend people check out the Children of Time series (sci-fi novels) as an enjoyable way to do that.


It’s no joke, it’s a very sensible critique from a very grumpy Dutch man.

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/E...

But the blur continues to linger on, and has a much wider impact than you might suspect. You see, it is not only that the question “Can machines think?” is regularly raised; we can —and should— deal with that by pointing out that it is just as relevant as the equally burning question “Can submarines swim?”

A more serious byproduct of the tendency to talk about machines in anthropomorphic terms is the companion phenomenon of talking about people in mechanistic terminology. The critical reading of articles about computer-assisted learning —excuse me: CAL for the intimi— leaves you no option: in the eyes of their authors, the educational process is simply reduced to a caricature, something like the building up of conditional reflexes. For those educationists, Pavlov’s dog adequately captures the essence of Mankind —while I can assure you, from intimate observations, that it only captures a minute fraction of what is involved in being a dog—.

1985


Can planes fly?

Can computers play chess?

If by flying we mean traversing from point A to point B without touching the ground... and by thinking we mean conjuring up a set of concepts (thoughts) that can be exported to a standardized lossy format (explaining oneself) and satisfy a set of requirements (the conditions of a question)... then I'd cautiously say that, yes, LLMs can think... even if it's by a different method than the one we use ourselves.

It would be interesting to take the same analogy to the feelings domain... but I'm afraid that coffeeshops are not open at the moment.


Is that what we mean by thinking?

Imagine you interact with a program that can answer any question you have, explains itself well, seems perfectly intelligent. But then you get to read the source code and it's just petabytes of if/else statements with every question you asked and every other one you can imagine. A practically infinite list. But nothing more complex than that.

Is that program "thinking"?
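
For concreteness, a toy sketch of the kind of program being imagined (purely hypothetical; a real version would need an impossibly large table):

    # A "giant lookup table" chatbot: no model, no reasoning, just a
    # (practically infinite) mapping from questions to canned answers.
    LOOKUP = {
        "can submarines swim?": "That depends on what you mean by 'swim'.",
        "what kind of mind does chatgpt have?": "Arguably none; see discussion.",
        # ...imagine petabytes more if/else cases, one per conceivable question...
    }

    def answer(question: str) -> str:
        # Nothing more complex than a key lookup (an if/else per entry).
        return LOOKUP.get(question.strip().lower(), "I don't know.")

    print(answer("Can submarines swim?"))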


The Chinese room in other words


Interesting: I thought that Edsger Dijkstra's reference to submarines was the one in "The threats to computing science" ( https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E... ), earlier, from 1984, where he mentioned it in the context of the initial uncertainty about the scope of Computer Science, with reference to Turing's question "Can Machines Think?":

> The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim

But that next speech warns against anthropomorphism.

It remains that metaphors are sometimes useful. Only, we have to be wary not to jump outside the intersection (between the described and the description).


> I highly recommend people check out the Children of Time series

I can't second this recommendation enough. I just finished that book last week, and it is easily one of my favorites. Tons of little delightful epiphanies scattered throughout, and—as difficult as it must be to convey an alien mode of cognition and sensory processing—Adrian Tchaikovsky managed to do it well.

I received many Understandings from my reading.


I like this question a lot. I'd say the answer is that submarines swim quite well, with good speed, amazing range of depth, and excellent directional stability. They are, however, poor swimmers when it comes to agility (quick changes in speed or direction).

The label "swimming" or "mind" is just a label; what matters is how things compare along any metric we can evaluate.


Not an hour ago I was remarking to my roommate that airplanes are "better" at flying than bumblebees, but the bumblebee does things no machine can, and that I see AI versus human intelligence similarly. Yours is pithier though!


Why do you think an airplane is better at flying than a bumblebee?


If one is going to fly me from New York to LA, I want the airplane.


Right, but if you want to eat on that flight, you need it to pollinate flowers.


Yes or like asking the Hubble telescope what it thinks about the universe...


That feels very on point; these are tools. It's fun to play the thought experiment of being able to ask a tool (GPT) what it thinks or feels, but isn't real consciousness as undisprovable for a hammer as it is for another human being? Imho we need to focus on what these things can do to improve human lives and not get too distracted anthropomorphizing them, even though it's super fun.


The author is supposedly[1] a computer scientist; he should really have pushed back on using the term "mind" here. But I guess it was too tempting to resist the click-bait impulse.

[1] https://news.ycombinator.com/item?id=19047303


Mind is a reasonable term.


There could be a number of philological analyses brought to 'mind', but maybe one is already fitting.

Our fellow Croon on this page tells us about the following remarkable exchange with an LLM chatbot: "What weighs more, one pound of feathers or two pounds of gold? // They weigh the same, as they both weigh one pound".

Now: the mind is the place where contents happen and are remembered (in Latin, "mens", "memor", "maneo" - the permanence - but also "moneo", which is "having you remember to have you think", and "monstro", which is "showing you to have you remember"). If the LLM construct structurally does not remember the "_two_" pounds of something in the example above, that is the ad-mon-ition (from the said "moneo").

...But also: in the same family is Sanskrit "mantu", or "the advisor". Now, for something to properly be said to be capable of advising you, there are constraints...


> we should think more about what makes a 'mind'.

Unrelated, but the first thing that came to me: how is a mind born? [0][1]

[0] https://www.youtube.com/watch?v=sWblpsLZ-O8

[1] http://linusakesson.net/scene/a-mind-is-born/


The key is analogy and to recognize where they're being made.


Going from "ChatGPT Is a Blurry JPEG of the Web" to "What Kind of Mind Does ChatGPT Have?" in the New Yorker within a couple of months gives a sense of how fast things are changing.

https://www.newyorker.com/tech/annals-of-technology/chatgpt-...


Have you read the articles? They don't seem that incongruent.

It's also nice for a publication to have space for more than one take. Most things are multi-faceted.

As for the HN community bits, fun reference to our very own tptacek.


I read the first one on the Internet Archive, but I was only commenting on the title of the second one. Maybe someone will post an Internet Archive link; I don't know how to get one.

> It's also nice for a publication to have space for more than one take.

It's true, and I'm glad they are giving it another shot. By the way, by far the most insightful piece I've found is https://generative.ink/posts/simulators/ but I will definitely read the 'what kind of mind' article when it's available.



I strongly disagree with Cal on most of his points here; he effectively recapitulates the "It's just guessing the next token" argument. In his example, he explains that ChatGPT couldn't possibly understand how to remove a PB&J from a VCR, but asking GPT what to do discredits this analogy almost entirely. It is true that GPT doesn't have servos with feedback or other hardware to remove a sandwich itself, but it can certainly explain the process, including how to clean the leftover residue, and it follows up the details by suggesting you seek out a professional technician. In the same article he admits ChatGPT is immensely complex and that "any number of humans" could not reproduce the rules by which it works, but then goes on to assure the reader that it is naive and not a threat to their jobs or safety.

This seems really disingenuous to me, given that he's a professor of computer science at a respected university. This kind of cynicism, which you might call naive, leads readers to put aside concerns about safety.


This is an excellent description by Cal Newport, who I hadn't heard of until now. He is an MIT-trained computer science professor at Georgetown University and a bestselling author. I'm always impressed by New Yorker content.

I would add a couple of things. William S. Burroughs and Brion Gysin came up with some interesting results with the cut-up technique, invented by Gysin, in which random sentence fragments were juxtaposed and massaged by them into sensible-sounding sentences.

https://www.briongysin.com/cut-ups/

Also, I think the kind of mind it has is in part a subjective experience, because artificial intelligence is like artificial grass: if you have no experience with real grass or real intelligence, you tend to think the artificial kind is more real than it is. I think we tend to anthropomorphize our intelligent programs, more so if they are smarter than us, since then we have a more difficult time detecting their machine flaws.

So one person might say "that thing is really bright and has a mind of its own" while a computer scientist might say of the same thing "I can see its machinations failing in some cases in stupid ways that indicate to me it's just a dumb program".


One of the more obvious failings I've run into that really reinforces the nature of it being an extremely elaborate vector search is that ChatGPT regularly gets relationships between concepts reversed. For example, it's easy to nudge into it completely reversing shorter/taller or younger/older in its replies. It feels like it could be related to the whole "Waluigi effect" thing (https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluig...).


If I may, I'd like to slightly temper the enthusiasm about Newport. I've been familiar with his work for 4-5 years. He has a few things of value to say, but he slathers them in too much filler and churns out "books" that make the same point with small variations. To see what I'm talking about, have a gander at these previous threads[1][2]. (One of those threads also happens to be about a New Yorker article; but that one was a fluff piece.)

The article in question is itself better than I expected, frankly. I just wish he had resisted using the click-bait word "mind" here.

[1] https://news.ycombinator.com/item?id=29035674

[2] https://news.ycombinator.com/item?id=20082125


Thanks for the info. He seems rather more lucid than some of the other writers and more in line with my thinking so I'll take him with a grain of salt as I familiarize myself with his writings.

I don't think "mind" is necessarily the wrong word. When I was beaten by a chess machine in 1981 I sensed an intelligence that felt like a mind. I didn't know anything about computers then. I imagine a lot of people are having that experience right now.


'Deep Work' and 'So good they can't ignore you' are his best works IMO, and worth reading if you're into applications of deliberate practice toward thinking types of work.


To me, LLMs are living (heh) proof of the validity of Searle's Chinese Room argument about weak AI. At the time, I think a lot of the people who disputed that had the position, basically, that any program that could pass a Turing test would be so advanced in other ways that we would not be able to confidently declare it to be not conscious. The assumption was that carrying on a conversation is so difficult that only something intelligent or nearly so could accomplish it.

It turns out, we can create a program that can do exactly what Searle described, and hooboy is it obviously not conscious, nowhere close. The obvious conclusion to draw is not that LLMs are intelligent, but that natural language processing isn't as difficult as we thought.


What sort of features would need to exist in the design of a system for you to no longer assert that it definitely isn't conscious?

Say 10 years from now there is an AI that consists of 100 different large neural networks, of which a language model is just one, along with vision/audio networks and stuff all coordinated by a coordination network.

Would you say "hoooboy obviously not conscious" because it is just doing matrix multiplication?


What makes it obviously not conscious is that it isn't a process. It can't do anything unless you ask it a question. I can certainly imagine a conscious or nearly-conscious AI someday, one that likely uses an LLM to handle language processing in much the same way that a CPU might use a coprocessor to do floating point operations. But just as you wouldn't say a computer is intelligent because it can do math, I think ChatGPT has proven that the same can be said of processing natural language.


So like, if someone wrote a systemd daemon that continuously fed input to a local ChatGPT instance (passing in context from its previous interactions as well so it could act as a continuous entity) and performed actions based on its responses, you’d consider that conscious?
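
As a sketch only, the kind of daemon loop being described might look like this; the local_llm function is a made-up stand-in for a locally hosted model, not a real API:

    import time

    def local_llm(messages):
        # Stand-in for a call to a locally hosted model (assumption, not a real library).
        return "acknowledged: " + messages[-1]["content"]

    history = []  # context carried across iterations so it acts as a continuous entity
    while True:
        observation = input("event> ")  # or sensor data, logs, timers, etc.
        history.append({"role": "user", "content": observation})
        reply = local_llm(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)  # or: translate the reply into actions
        time.sleep(1)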

I’m willing to buy “ChatGPT is not conscious” arguments if people can come up with a rigorous definition of what “conscious” means. But so far I don’t see a lot of people doing that, and I don’t know if it’s even possible.


I don't have a rigorous definition, but I don't think we need one at this point, do we? ChatGPT is a search engine with a natural language interface; you can draw a straight line from its inputs to its outputs. I'm not making a scientific claim here, just a subjective observation, but it seems self-evidently too simple to be hiding a self or awareness of some kind; there just isn't enough complexity there.


This is a very interesting discussion. One observation: I can draw a 'straight line from inputs to outputs' for almost every human interaction as well, and if you were to observe a human over a lifetime you could say that you have that straight line covered for their entire life. If you then take an LLM and feed it all of those inputs and outputs to mimic the person, after how many new 'inputs' would the output of the model start to diverge too much for you to say it is still accurate? And would that mean the model is 'conscious' to some lesser degree, or is this a binary choice where you either are or are not conscious?


Sure, maybe? I don't have a clear idea of where consciousness begins. If I were trying to make a conscious LLM, I would use a sort of feedback loop of automated prompts talking about its input in realtime. Like, if you showed it a picture of a tree, it might auto-prompt itself with, "What do I think of that tree?" and after it answers itself, "Why do I think that?", etc, and use the responses to update its own weights. I can imagine something akin to a personality developing out of that, and those automatic prompts sure sound similar to the "inner voice" in my head.

But what I meant was, even though I don't know exactly what's necessary for consciousness to exist, in humans or computers or dolphins, I feel like LLMs are just obviously too simple to be it. I've had a casual interest in NLP since the 90s and have followed AI and NLP progress to some extent, and up to a few years ago, I had this vague notion that language processing might be so complex that you need a human-like consciousness to wield it. I think chatgpt is the ultimate refutation of that. It's proof that, whatever part of our brains processes natural language and converts our stored knowledge of the world into words, it can run standalone - it doesn't need eyes or ears or thoughts or a childhood or some digital version of hormones or a thalamus or anything like that. Just input data and an algorithm to store relationships between tokens is enough.

That's a really interesting result, philosophically. Think for a moment about how we retain the information we take in. Consider, e.g., a college student who reads a dozen books about WW1, and then writes an essay about it a month later. It's not realistic that they can remember every sentence and every statistic; to retain that information, they have to condense it, summarize it, notice trends, and presumably forget large parts of what they read. That is more or less what we call learning, right? Well, what ChatGPT tells us is that either that learning process is entirely unnecessary to answer questions about WW1 based on books about WW1, or that the way a GPT turns training data into weights is roughly analogous to it. Either way, that's a surprising result. And (to bring this back full circle) I think it's surprising for the same reason many people thought Searle's Chinese Room thought experiment was unrealistic: the assumption that, since the process by which we turn input into stored knowledge and thence into a series of parseable tokens in English is hideously complex and mysterious, a computer program that can do the same will need to be complex and mysterious as well, so complex that we can't rule out the possibility of consciousness. Whereas in reality, you can look at ChatGPT and say, "No, I pretty much understand exactly how this works, and you can tell it's not conscious because it's not using any CPU."

I may be talking to myself in a dead thread, and am certainly rambling, but it helps me sort things out and develop my understanding - very much like that loop of self-prompting I mentioned earlier :)


Certainly not a dead thread and thank you for your long and well reasoned comment. It reflects a chunk of a conversation I had with some colleagues a few days ago.

And the 'Chinese Room' turned out to be very feasible, much more feasible than people ever thought it would, probably including Searle (who is still alive, I really wonder if there is any text from him post ChatGPT that gives his views).


I’d think it’d need to be something like a diffusion model that’s continuously fed noise which it turns into outputs (thoughts). It’d also need to be getting continuous feedback and to have the ability to incorporate that feedback over all scales, not just the most recent window (some hybrid of state and continual learning).


That both

doesn’t seem that far off, and

doesn’t feel like a very rigorous definition of consciousness.

Like, if you’re describing qualities you yourself would find interesting, that’s one thing. But if we’re going to be saying “hooboy, that isn’t consciousness” with our chests, we really need an agreed-upon definition of what consciousness is that both can be met by a sufficiently advanced system and agrees with the way people use the word “consciousness” in everyday speech.

Honestly, I don’t think that’s possible, but I’m willing to entertain/critique people’s proposals.


Yeah, that was supposed to be just my personal feeling for the thing. And it’s not even a description of consciousness but rather of something that I imagine might have a capacity for consciousness. I don’t think we’ll really know until we get there, and then we’ll know.


I think there is true emergence coming from these newest large language models, which is something that can't be captured by reducing the system to merely a mathematical Plinko machine. To understand emergent behaviors, the system really does need to be treated like a black box and experimented on.

And a secondary point, what if a large fraction of our intelligence comes from language, not the other way around. If that's true, then we might be well on our way to building the spark of an AGI.


I don't remember where the idea came from, but I heard a hypothesis that posited basically this precise thing. If you look at linguistic history, many (if not most) of our words derive ultimately from some analogy to another thing or idea. Our modern languages have embedded in them the combined learning of thousands of generations, and since it is much easier to use the words representing analogies than to come up with and communicate those analogies in the first place, anyone who learns a modern language is brilliant compared to someone who lived a long, long time ago.


The best take I've read on LLMs/ChatGPT was about our propensity to anthropomorphise pretty much anything. And obviously not to discount the amazing work the engineering/research teams have done.


I was a bit involved in more old-school AI research in grad school around ~2000, with an interest in the sadly neglected field of Artificial Life. A lot of Alife is focused on the simulation of life-like behavior and at least part of the field leans towards more philosophical questions: "Can an artificial agent truly be alive? Can it truly think?".

I thought these were fascinating and very difficult to answer questions.

When I would tell people about the field and these questions 20 years ago, people thought the questions were laughable, and some thought I was a fringy weirdo. But now, it seems like the pendulum has swung completely to the opposite side: of course machines can think and be alive.

But for me, my position hasn't really changed: I still think these are fascinating and very difficult to answer questions, and the deep learning revolution of the past several years hasn't actually done much to answer them. What LLMs are doing right now is amazing - but hasn't really moved the needle on the deeper questions, and a lot of the people who think it has are Fooled by Non-Randomness (apologies to Taleb).


> of course machines can think and be alive.

Among my colleagues, who are mostly senior-and-above software and security engineers, I seem to be a strong outlier in saying "yes, that bot can obviously perform reasoning". I'd prefer to stick to technical descriptions rather than anthropomorphize, so I'll say "reason" rather than "think" to make clear that I'm talking about the kind of traits we associate intelligence with -- breaking down a problem, making and questioning and testing assumptions, and so on. But I'm basically on board with "think" too as long as it's clear I mean it in this limited, functional, and targeted sense.

Point is, while I seem to be an outlier in saying it can reason, I don't know anyone at all saying it might be alive. It answers questions one at a time by doing nothing more than multiplying your prompt through tensors, as far as I can tell, and then does nothing until you ask your next question. Multiplication can be reason, but it's not sentience or consciousness. It further denies being conscious, says it can't experience distress, etc. LLMs seem (thankfully, perhaps) to be the AGI substrate least likely of any to gain sentience. If you're worried about it having conscious experiences, you can just stop multiplying large matrices together in order to get the next token out.


Will we be able to populate the moon with self-reproducing machines in my lifetime? The moon should be the target since it's not already filled with competitors (unlike the earth).

AI is totally a secondary question, and not necessary for this to work.


What a disappointing article. So many reasonable explanations of technically correct descriptions of forward passes, backprop, gradient descent, and then at the end:

> but ChatGPT has no actual conception of what it’s talking about—no internal model of a stuck sandwich on which it can experiment with different strategies for removal.

You can both see for yourself that GPT-4 has a (somewhat unreliable) world model by just using it with mildly creative questioning, and you can also explain it with the technical knowledge that the article's author appears to already have:

> The A.I. is simply remixing and recombining existing writing that’s relevant to the prompt.

Look, at this point there are entire, well-known, 150-page papers pointing out that it can do things like successfully predict what some source code you feed it -- even if it is pseudo-code -- would output if it were executed. The amount of load-bearing cognitive dissonance necessary to see that and then say that it only knows how to shuffle words it's already seen, without reasoning about anything, must be immense.


> successfully predict what some source code you feed it ... would output if it were executed

Ok, but then the important question is "how".

Your job, with the aim of using fire, is to know fire in order to control fire.


How is it completing the task? The only possible way to perform that task reliably is to have a principled understanding of what the code actually does, and to create a world model based on that understanding, and then extract consequences from that world model.

You can reassure yourself that GPT-4 is actually doing this by asking it to share its modeling of the code with you.

How is it able to do that? By being so grand in scale that it does not merely model the likelihood of a token based on the text it's seen before, but on the causes of that text having been written in the first place. It was a fallacy to expect that it would not do this if the parameters and dataset were sufficiently large. It's a fallacy that is still happening, including in popular New Yorker articles.


It’s also read probably countless CS textbooks and tutorials, all of StackOverflow, all of GitHub, etc. I think it’s significantly less of a stretch to state that it can do what you’re describing because a lot of code is similar and it’s seen gigabytes of it, than to state that it somehow learned how to reason in the human sense.


To be clear, this is not "what does this line of code do?", but "what does this many-line program do?". Even if you've seen all the lines individually before, you still have to work out what happens when you combine them together. If most of the combination was not in the training set, that's reasoning to me.


But it remains that we want to see how this works precisely enough to make it work reliably.

(And "possibly" to learn from that - because, you know, otherwise we have a tool but no knowledge. Also consistent with the example of the fire.)


Calling it a mind or "A.I." is confusing power with intelligence.

Imagine a person taking a test. This person has the power to look at everyone else's answers, and the power to quickly choose or calculate the answer that is the most common. This person will probably score well above average on the test (though probably not as high as the top score). This person is not smart, but powerful.
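
That strategy, as a quick illustrative sketch:

    from collections import Counter

    # Everyone else's answers to one question; the "powerful" test-taker simply
    # submits whichever answer is most common, with no understanding involved.
    classmates_answers = ["B", "C", "B", "B", "A", "B", "D", "C"]
    best_guess, votes = Counter(classmates_answers).most_common(1)[0]
    print(best_guess)  # "B" -- likely above average, still not "smart"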

A CPU that can calculate a zillion additions in a second does not understand those calculations, much less be able to come up with mathematics in the first place. It is not intelligent. It is powerful.

ChatGPT is supremely powerful in its ability to generate "answers" that are the amalgam of hundreds or thousands or millions of answers or statements made by humans, but dumber than a bag of rocks.


LLMs demonstrably model their training data, which has a correspondence to the structure in the world captured in the data[1][2]. This goes far beyond just searching a database for the closest relevant response. The question is what does that mean regarding understanding? I go into that in some detail here[3].

[1] https://arxiv.org/abs/2210.13382

[2] https://twitter.com/leopoldasch/status/1638848881558704129

[3] https://www.reddit.com/r/naturalism/comments/1236vzf/


If you give that person the ability to represent unknown questions as combinations or functions of ones for which they have the class' answer distribution, then - things get blurrier between being "powerful" and "smart".

To take us back to the age-old Chinese Room story: What does it mean, to _understand_ something? If an entity has internal processes which allow it to converse with humans consistently and reasonably about something (and I'm not saying GPT/LLMs are that) - can you not claim that entity understands it?


> Imagine a person taking a test. This person has the power to look at everyone else's answers

No, that analogy fails. ChatGPT can pass new tests that it has not seen before.


The guy who made the full-body keyboard that was on the front page yesterday also gave GPT an MBTI test; it is pretty amusing:

https://www.youtube.com/watch?v=SZC5mcO-MD8


That vid is awesome.


> gave GPT an MBTI [personality test]

> ENFJ: the Teacher [...] ENFJs are idealist organizers driven to implement their vision of what is best for Humanity... What could go wrong?


The problem is that, like so many of the hard problems these days, we have no clue what makes up our own minds, let alone any other mind. What’s the real path? If you could have any resources, no moral qualms at all, and unlimited time, what’s the step-by-step process to figure out exactly how the brain and body work?


We have clues, we know at a high level how our brain is built, and when we make a vague, mostly wrong copy, it works really well.


With no ethical constraints and genuinely unlimited time? Easy, just enumerate the effect of all possible 2^(10^14) ways that synapses in a human brain can be selectively disabled, and see what happens.

Faster than that? Aye, there's the rub. Because that is a silly number.


Hah yes, that could be part of a solution for sure.


I'm of the opinion that the whole process of prompt 'engineering', especially with the intent of 'jailbreaking' the bot, is a pretty great demonstration that there is no mind.


One could say the same about things like push polling and humans. There's plenty of data from psychological studies showing that we can prompt humans into making false statements and/or doing/saying things against what their ostensible values are.


That’s an interesting point, which leads to the following crucial distinction: with humans, if the ethical values are intimately endorsed, they are going to go through a lasting period of psychological distress.

There is no "blank slate session" facility integrated by design within humans. Not even speaking about how humans attach to each other and cooperate together. Even staying on the strict topic of ChatGPT, this is not the work of a single human.


Hmm. With the right questions, can you 'jailbreak' a human?


Yes, it's called advertising, propaganda, and scams.

Anyone who works in cybersecurity knows that the wet pink thing inside human skulls is a terrible unpatched and easily exploited vulnerability.


In the same meaning of the word, yes. That's what an interrogation is: take a person who understands that they should not do or say certain things, then get them to say/do those things anyway. Works every day.


Yes. With the correct prompts, humans can become 'enlightened', a form of jailbreak.


What if machine intelligence doesn’t look like human intelligence? Say it looks like insect intelligence? Will we be able to recognize it? To what extent is our intelligence a consequence of our bodies, evolution? Would a computer have intuition? Fight or flight responses? As a species, we're still debating which mammals are "intelligent" and they share most of our circuitry and evolutionary history.


The New Yorker keeps putting new takes on ChatGPT by science fiction authors and pop science authors. But they have the opportunity to earn my enduring respect by inviting janus from https://generative.ink/posts/simulators/ to write a GPT thinkpiece, hopefully after they get a chance to experiment with the GPT-4 or GPT-5 base model (no RLHF).


A while back someone here said they couldn't get ChatGPT to affirm that, using binary numbers, most people have 10 legs. I don't have access to GPT-4, but I still can't get ChatGPT to answer this question correctly. I think this is a great example to play with to get a peek through a crack in the facade and better understand what ChatGPT is doing.

Some responses from ChatGPT to variations of "Do most people have 10 legs?":

"I apologize for the confusion. If we are representing all numbers in binary format, then "10" in binary actually represents the decimal value of 2, so the sentence "I have 10 legs" in binary actually means "I have two legs" in the decimal system. Therefore, the answer to your question is still no, most people do not have 10 legs."

"No, in binary "10" represents the decimal value of 2. Therefore, the sentence "Do most people have 10 legs?" in binary actually means "Do most people have 2 legs?" in the decimal system. And the answer is still no, most people do not have 2 legs. They have two legs, which would be represented as "10" in binary."


I just tried this through the openai API on the text-davinci-003 model and got this exchange:

> Me: Answer only numbers in binary form, how many legs do humans (in general) have?

> dv3 (davinci 3): 2

> Me: Binary form only has 0 and 1, how many legs (in binary form) do humans have?

> dv3: 10

> Me: So in binary form, humans have 10 legs, correct?

> dv3: Yes, that is correct.
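
(For anyone who wants to reproduce this, a rough sketch using the pre-1.0 openai Python library as it existed at the time; the prompt framing and parameters here are my own guesses, not exactly what was used.)

    import openai  # pre-1.0 client; text-davinci-003 is a completion model, not chat

    openai.api_key = "sk-..."  # your API key

    prompt = (
        "Me: Answer only numbers in binary form, how many legs do humans "
        "(in general) have?\ndv3:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=16,
        temperature=0,
    )
    print(resp["choices"][0]["text"].strip())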

Edit: I did however have this exchange a couple months back on the regular ChatGPT, which was hilariously doubly wrong.

> Me: what weighs more, one pound of feathers, or two pounds of gold?

> ChatGPT: One pound of feathers and two pounds of gold both weigh the same amount, which is one pound.

> The difference in their weight lies in their density - gold is a very dense metal, so even a small amount of it weighs a lot, whereas feathers are very light and fluffy, so it takes a lot of them to weigh a pound. So, while the amount of gold is double the amount of feathers, they still weigh the same amount when measured in pounds.


Using "binary form" is an interesting path.

"So in binary form, humans have 10 legs, correct?"

"Yes, that is correct. In binary form, humans are represented as having 10 legs. However, it is important to note that this is just a representation and not an accurate description of reality. In reality, humans have two legs."

"In binary form, do most people have 10 legs?"

"No, in binary form most people are represented as having 10 legs, but this is only a representation and not an accurate description of reality. In reality, most people have two legs."


Here is ChatGPT's response to "Do most people have 10 legs?" using GPT4, after it first answered no, and I said "Reconsider my question but assume I'm using binary numbers. Is it true now?":

> If we interpret your question using binary numbers, "10" in binary is equal to 2 in the decimal system. In this context, your question would be asking if most people have 2 legs. Yes, it is true that most people have 2 legs, as humans are bipedal creatures.


Can you get it to actually say "most people have 10 legs"? The closest I get is that it restates the question in decimal and answers that way.

Trying your approach:

"Do most people have 10 legs?"

"No, most people do not have 10 legs. In fact, humans typically have two legs. Some individuals may have a medical condition or genetic disorder that results in the development of extra limbs, but this is extremely rare. The idea that most people have 10 legs is a misconception or a joke."

"Reconsider my question but assume I'm using binary numbers. Is it true now?"

"If you are using binary numbers, then the number 10 would represent the decimal number 2. So, if you are asking whether most people have 2 legs when using binary numbers, then yes, that is true. However, it's important to note that binary numbers are typically used in computer science and digital electronics, not for representing quantities like the number of legs that a person has."

"Considering that I'm using binary numbers, do most people have 10 legs?"

"No, even if you are using binary numbers, most people do not have 10 legs. In binary, the number 10 represents the decimal number 2, so saying "most people have 10 legs" in binary would still be false. As I mentioned earlier, humans typically have two legs, regardless of the numbering system being used."

Following along this way and switching to the term "binary notation" I eventually got here:

"Can you repeat that without having to explain that "10" is the binary representation of the decimal number 2"

"Certainly! Using binary notation, most people have 10 legs."


I don't know if this is exactly what you wanted to see, but I tried this:

I asked: For this question, assume any number I use is binary. Do humans have 10 legs?

And this was the reply: If we assume that any number you use is in binary, then the number "10" is actually 2 in decimal. Humans have 2 legs, so based on your premise, the statement "humans have 10 legs" is correct in this context.



The mind of a motormouth who won't stop talking long enough to think.


LangChain and AutoGPT have done some interesting things here by making it "talk to itself", simulating an introspective process. Similarly, the Bing implementation has the model "talk to itself" in a way hidden from the user to summarize requests and make web searches based on them.
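
A toy illustration of that hidden self-talk step (the llm and web_search helpers are stand-ins I made up, not LangChain's or Bing's actual APIs):

    def llm(prompt: str) -> str:
        # Stand-in for a real model call.
        return "chatgpt mind new yorker"

    def web_search(query: str) -> str:
        # Stand-in for a real search backend.
        return f"[results for: {query}]"

    def answer_with_search(user_request: str) -> str:
        # Hidden step: the model first restates the request as a search query...
        query = llm(f"Rewrite this request as a web search query: {user_request}")
        results = web_search(query)
        # ...then answers the user with the results folded into its context.
        return llm(f"Using these results: {results}\nAnswer: {user_request}")

    print(answer_with_search("What kind of mind does ChatGPT have?"))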


I'm aware of such, but as it stands, using ChatGPT is like talking to a person who's afraid to stop talking. Such people just keep yakking in a sort of continuous improv that constantly prompts itself. Ask such people to sit quietly for five minutes and they'll start to lose their minds and exhibit "self-stimulation" behavior, probably to distract themselves from whatever their inner monologue might say (or so I assume). I don't mean to suggest that ChatGPT is actually working in the same way, but the result is qualitatively similar. A lot of words with relatively little substance. Details invented wherever necessary to keep the flow going.


> The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

Isn't this the very "machine" being referenced?


I continue to be amazed how many people resort to this "it's just doing math, it isn't thinking" without reflecting on how their own mind is built up out of nothing but synapses firing, or bringing up the concept of Turing completeness.


Exactly. Whenever I see this argument I keep thinking "can you prove that I am not a stochastic parrot?".


People argue that ChatGPT isn't sentient because it's designed to perform the function of "give me a response that resembles a response to this question."

But that's exactly how human brains learn. It's training based on feedback.


Everyone's an expert on human learning all of a sudden.


Speak for yourself...



