Hacker News
Crypto collapse? Get in loser, we’re pivoting to AI (davidgerard.co.uk)
63 points by Al0neStar on June 4, 2023 | 53 comments


The first third of the article is kind of a polemic; it won't really convince anyone who needs convincing.

This however I thought was the substance of it:

> The VCs’ actual use case for AI is treating workers badly.

> The Writer’s Guild of America, a labor union representing writers for TV and film in the US, is on strike for better pay and conditions. One of the reasons is that studio executives are using the threat of AI against them. Writers think the plan is to get a chatbot to generate a low-quality script, which the writers are then paid less in worse conditions to fix. [Guardian]

> Executives at the National Eating Disorders Association replaced hotline workers with a chatbot four days after the workers unionized. “This is about union busting, plain and simple,” said one helpline associate. The bot then gave wrong and damaging advice to users of the service: “Every single thing Tessa suggested were things that led to the development of my eating disorder.” The service has backtracked on using the chatbot. [Vice; Labor Notes; Vice; Daily Dot]

> Digital blackface: instead of actually hiring black models, Levi’s thought it would be a great idea to take white models and alter the images to look like black people. Levi’s claimed it would increase diversity if they faked the diversity. One agency tried using AI to synthesize a suitably stereotypical “Black voice” instead of hiring an actual black voice actor. [Business Insider, archive]


Which really is just the latest in the time honored "capitalism was the real villain all along" tradition.

Wow look at this thing that has the potential to do labor that previously required a human's time and effort. When you capture the value it means you can either do more or have more leisure. When you don't capture the value it means the treadmill of "being valuable to my employer" just got faster.


I can see you understand the criticism being made, and I can see you're dismissing it, but I don't see why. You've accurately described mechanisms at play, but you don't justify them.

Why should VCs continue to "capture the value" of our leisure time and drive it towards zero? I understand that makes them lots of money, but why is that a good way to run society?

Consider the same reasoning in a different scenario:

I'm so tired of people complaining about their cancer. Yeah, your cells divide. When they divide the way they're intended to, you live a normal life. When they divide out of control, your health suffers. Them's the breaks.

I think we can all agree this argument is ridiculous. It describes the mechanism at work, acknowledges the harm that's taking place, but ultimately the argument here is "if we have an explanation for what's happening, then it isn't a problem, it's a fact of life." Clearly that's not so - just because we understand cancer, doesn't mean we accept it.

Help me understand why your argument is different?


> Wow look at this thing that has the potential to do labor that previously required a human's time and effort.

Yes, but it still does "require a human's time and effort": what else is the required training data for these models except a large amount of human time and effort? It's just scraped for free and all munged together. Bulk, unpaid labour.


I agree that a lot of crypto grifters seem to be moving on to AI. But it seems like the author is doing the same thing: since no one is interested in reading his criticism of crypto anymore, he is now moving on to AI. Legend.


I don't think he's moving to AI. It's a whole lot harder to mass scam people in AI than in crypto. BTC and ETH are still pretty high in price. He's still only covering crypto. But he's right that many scammers, grifters, opportunists have moved to AI.

Even the ones left in crypto are basically trying to associate their crypto projects with AI as much as possible.

If real estate was hot, crypto would try to associate with real estate. If room temperature superconductors are invented, they'd say crypto for superconductors. Basically, in order for useless crypto shitcoins to get attention, they attach themselves to anything that's hot.


co-author here: lol I assure you we have no wish to continue on this AI path. OTOH, we both got new patrons all of a sudden. And there's a similar article to write about quantum computing grift, with massive misselling of systems that might one day be able to factor numbers as high as 35.


It doesn’t need to be AGI to be dangerous. As an argument, just consider how the next elections will go. GPT is at least a cannon for fake news. A fake-news machine gun.


I agree that this article leans too much into "antihype" and dismisses the danger. The criticism that AI safety regulations are a pretty obvious attempt to erect a regulatory moat is an important one however.


But then the problem is not GPT/DumbAI, it is its users.

Alternatively, what you have said is true of all tools: a hammer does not need to be conscious to be dangerous, it just needs to be put in the hands of bad actors. Well, yeah, that is true of everything all the time...


It's crazy to think Sequoia was started by Valentine who invested in so many iconic companies.

And now just look at Sequoia. The bottom of the barrel, with the worst takes in technology. How far a company can fall.


The best investment calls are on trends that the majority initially dismiss. The mismatch between public expectations and real potential is the source of returns. So they may still be great investors, making calls that you lack the vision to appreciate.


thanks ETH_start. lol.


You're welcome :)


I do like this quote:

"Don’t want to worry anyone, but I just asked ChatGPT to build me a better paperclip."

Given the means, would it kill anyone who gets in the way of more paperclips?


This quote is referencing a famous hypothetical example https://en.wikipedia.org/wiki/Instrumental_convergence?wprov...


Good to know!


From reading this article, it seems that the biggest reason AI sucks is that it won't actually destroy humanity.

This article seems more like a critique of AI doomerism.


Bad writing, combining two independent and unrelated things, and not knowing how LLMs (and a lot more) work.

Funny, this piece could have been written by ChatGPT with the same or better quality.

Only good thing is that crypto VCs are now starting to invest in something worthy.


Yeah, QQQ is up 10% and BTC down 10% from two months ago. It's going to get worse for Bitcoin and crypto overall. Bitcoin does not generate profits, does not benefit from the AI hype/boom, does not hedge inflation. It has no upside, only downside.


“ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is.”

Stopped reading after this part. To be so intentionally misleading and dismissive of truly miraculous work by the LLM community and the OpenAI team is not for me.


That's a reference that Sam makes in interviews.


What are this guy's tech credentials?


Why does that matter? Is what he's saying true or false?


Feels more like polemic and opinion than something that’s true or false


co-author here: I'm a sysadmin by day, haven't worked anywhere near AI.

But our main applicable experience is as finance journalists writing up crypto grifters. And it's not only the same grift, but literally a lot of the same grifters.

It's never about the technology, it's always about the money.


me and the bros have already fully pivoted


>People’s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It’s called the ELIZA effect.

I'm tired of these arguments. Very, very few people are anthropomorphizing ChatGPT. The majority of people, both technical and non-technical, who have played with ChatGPT in a non-trivial way are aware of the chatbot's limitations. It's like recognizing a crazy person on the street: humans are well equipped for that.

This argument characterizes the average person as some kind of stupid buffoon, as if he/she can't tell that ChatGPT really screws shit up. Sure, there are a few gullible outliers, but as a generality his claim is simply, completely false. Pretty much everyone, and I mean everyone, is aware of the limitations of LLMs.

This is a weak and repeated trope that's being regurgitated as if the critics are LLMs themselves.

Let me specify exactly what's going on. People who are afraid of, or who support, AI are speaking more to the potential of AI. Why? Because just as often as this thing hallucinates, it answers a complex question with an equally complex and correct answer, and that answer in isolation is often indistinguishable from, or at times even superior to, a human's answer.

Yes, we know it hallucinates and forgets shit. This is obvious; no need to readdress an obvious weakness that everyone is aware of. If critics want to have a real discussion, then they seriously need to address the actual strengths and phenomena of LLMs instead of repeatedly highlighting the obvious weaknesses.

Because while we have somewhat of an explanation for the hallucinations we currently don't know how chatGPT was able to do something like this:

https://www.engraved.blog/building-a-virtual-machine-inside/

Read to the end if you haven't seen this. The ending is quite unexplainable by experts. You can't just trivialize that entire post as if it were a statistical phenomenon. There's obviously an alternative angle here.


Wrong. Too many people do anthropomorphize LLMs. And few people actually understand how LLMs work, or how to detect errors and hallucinations. Calling LLM autocomplete "AI" deliberately conflates the tech with sci-fi tropes that many people already confuse with reality. Numerous examples, from ELIZA to Watson, internet-based scams and frauds, full self-driving, to all of crypto demonstrate human credulity and gullibility, magnified if people think they will either profit or lose financially.

The technology behind LLMs probably does have some interesting and valuable applications. But look at the hype and follow the money with a skeptical eye. If so-called AI does or will soon deliver real benefits then why all of the hype around it? Show, don't tell, and especially don't tell stories based on possible future advances or breakthroughs.

Future editions of Extraordinary Popular Delusions and the Madness of Crowds will have long chapters about crypto and AI. If you think most people understand the tech and its limitations and don't act according to gullibility, ignorance, and greed that book should disabuse you.


I would say the most interesting thoughts I have read on this are from Blaise Aguera y Arcas from Google last summer.

He believes we basically anthropomorphize each other. He made a great point about how every child anthropomorphizes a doll. Anthropomorphizing is just second nature to us.

I would highly recommend reading his two longish medium essays on the subject.

The part about AI and Extraordinary Popular Delusions and the Madness of Crowds is just utter bullshit. Come on. I am sure you don't really believe that.


[flagged]


While I believe much of what you argue is correct, I do unfortunately also believe that ultimately you are incorrect:

For the non-techy part of the population, and certainly even a percentage of the techy part, there will be those who don't have an hour out of their daily lives to play around and discover the limits of ChatGPT etc.

Worse still, an even larger portion of that crowd, even if they have an hour to spare, just... won't. Playing around with an LLM just isn't something most people would find entertaining or enlightening enough to do of their own volition.

What I find more likely to happen is that the members of the above crowd will instead slowly, over time, get the occasional dose of exposure to an interaction with an LLM, and as long as the response yielded isn't batshit crazy, these people will likely develop a growing sense of confidence in these LLMs.

At that point, it will likely be very difficult proving to them that the LLMs are far from perfect.


Couldn't "proving to them that the LLMs are far from perfect" be accomplished by showing a few examples of LLM hallucination? This does not seem very difficult.


A significant portion of the American population doesn't accept the results of the 2020 election. No amount of proof will change their mind.

Everyone alive has been exposed for decades to both sci-fi tropes and media hype around AI, and the ideas that stick in their heads tend to be those repeated and reinforced by media, not by direct experience. When LLMs get rolled out to everyday users who aren't experts it will come in the form of chatbots or plugins or text summarizers or code writers, not in the form of carefully fact-checked conversations with ChatGPT. Lots of people already use ChatGPT through Bing, and they aren't likely to check what it tells them.

We see people misled and scammed every day because of their own ignorance and misinformation promulgated by both mainstream and social media. I can't begin to understand how anyone would think -- contrary to all evidence -- that the huge population of non-experts will figure out the limitations of an opaque technology on their own after a few interactions with it.


>A significant portion of the American population doesn't accept the results of the 2020 election. No amount of proof will change their mind.

This is different. There's obvious bias here. Many people don't want to believe the election was legit because of team and group mentality. People tend to believe what they want. Additionally, people can't "see" the results. It can't fully be proven, because there is always a layer of indirection where you need to trust a potentially compromised source. From the perspective of the public, what happened during the election can only be ascertained through a network of indirect sources, so it's convenient for people to assume any one of those sources is compromised in a way such that the conclusion is closer to the one they desire.

For ChatGPT, seeing is believing. You can see the thing hallucinate right in front of your freaking eyes. There is no layer of indirection. There is no room for someone to lie to themselves. Additionally, the bias for ChatGPT is actually in the other direction: nobody wants to believe that an AI can trivialize their skill set. People would rather believe ChatGPT is garbage, because that is what they prefer to believe.

In fact, I would argue this exact bias is the thing affecting many people right now. The same type of biases that make people believe the Trump votes were rigged are the same type of biases that prevent people from even considering that an LLM is more than just a stochastic parrot. They don't want to believe it... so they don't.


> There is no layer of indirection. There is no room for someone to lie to themselves. Additionally, the bias for chatGPT is actually in the other direction.

Bing search and customer service chatbots, for example, give a layer of indirection. Spam emails, LLM-generated legal briefs and term papers have indirection when the recipients (judge, professor) don't interact directly with the LLM. Since interacting directly with ChatGPT takes some skill and doesn't seem immediately useful most people will interact with it through things like search engines and friendly chat widgets and word processor plugins, just like programmers already interact with an LLM indirectly with Github Copilot.

> Nobody wants to believe that an AI can trivialize their skill set. People would rather believe chatGPT is garbage because that is what they prefer to believe.

They may not want to believe it, but you must have seen the numerous articles -- many of them posted on HN -- about exactly that happening. Not a day goes by that HN doesn't get multiple posts expressing fear and worry about "AI" taking over their job soon, or making the job redundant. And people may simultaneously believe "ChatGPT is garbage" and worry that they will lose their job, or get killed by a robot drone.

I argue that too many people already have a bias towards believing ChatGPT/LLMs equals AGI, because the media has primed them to believe that. The term "artificial intelligence" itself gives it away. If no one used "AI" to refer to ChatGPT et al. and instead called them large language models that might help people realistically evaluate LLMs as tools rather than as a true artificial intelligence. The term AI has been applied to so many ideas, fantasies, experiments, and now products that it means everything and nothing, and every individual can and will interpret that according to their own biases and knowledge. Of course "AI" sells a lot better than "LLMs" and we're seeing the self-serving hype in full-swing already, as numerous companies and VCs try to capitalize and recoup their losses from the last hype cycles that people got wise to (crypto) or never got interested in to begin with (Web3 and metaverse).

I'm old enough to remember when scientists successfully cloned a sheep, and immediately the media, popular and specialized, cranked out story after story about how cloning would reshape humanity in just a few years. We were told that human clones were just around the corner, with all the attendant hand-wringing. Of course that never happened, but I wouldn't find it all surprising to poll random people and find that they believe human cloning happens all the time, because the hype didn't get followed by a correction or apology.


>Bing search and customer service chatbots, for example, give a layer of indirection

There is no layer of indirection: you are chatting directly with the AI. You are not having a third party describe his experience with the AI to you.

>I argue that too many people already have a bias towards believing ChatGPT/LLMs equals AGI, because the media has primed them to believe that.

No point in arguing if you don't have some form of evidence. My evidence is that there isn't a single person in this thread who is fooled by AI or unaware of the limitations of current-gen AIs.

You just need to find one person in this entire thread who fits your description, link it here, and you'll be right, as you will have falsified my statement. This is the data-driven conclusion.

Let's use data to get to the bottom of this. Seriously.


I gave my anecdotal evidence, and the evidence of numerous posts on HN and elsewhere you can easily search for. Or just look at the votes on our comments.

Getting one person to post here with one opinion or another doesn't constitute useful data. It just adds one more anecdote. It looks like no one besides the two of us pays attention to this thread.

In any case I engaged to express my opinion, not to prove you or myself right or wrong in our opinions. Time will tell.


>Or just look at the votes on our comments.

Votes are a popularity contest. I have a lot of downvotes, so you win the popularity contest. It's fine. I'm OK with that.

I'm more going for the correctness contest here. Who's actually right? That's all I care about here.

>Getting one person to post here with one opinion or another doesn't constitute useful data

This isn't true. One person would lend data to your case. Why? Because my claim is that nearly all people on HN aren't fooled by ChatGPT. So if you say it's so common, then just find one.

My claim is that it's so uncommon you can't even find one.

>I gave my anecdotal evidence, and the evidence of numerous posts on HN and elsewhere you can easily search for

I searched for this. I could not find one. You claim it's easily found, so you can win this debate by simply finding one comment that proves your point and linking it here. If it's as common as you say, then at least one person can be found. This makes sense.


That's just your opinion. I say we need to prove this out.

If a significant portion of the techy and non-techy population anthropomorphized LLMs to the point where they don't understand that LLMs hallucinate, then surely some of those people exist on HN.

If one of you readers is one such person who honestly has no idea what it means for chatGPT to "hallucinate" then let us know (and be honest, please don't troll).

My bet is no one will respond in the affirmative, because the number of people who don't get it is minuscule.


You're arguing that something observed so often and consistently that it has had a name for decades -- the ELIZA Effect [1] -- doesn't actually happen often enough to care about.

I have referred to ChatGPT hallucinations with multiple friends and family, some in tech and some not (like my parents and my kids), and with one exception none of them knew what I was talking about. Like most people they think computers can't make mistakes, so it follows logically (for them) that an (apparently) intelligent machine can't make mistakes, i.e. hallucinate. I have a couple of my own ChatGPT transcripts that include hallucinations and when I show those to people they say that I deliberately misled the AI, because how could it make a mistake?

In my own experience, which includes people who work in the software field and people who don't, including a couple of friends who work with neural networks and LLMs, almost no one understands how LLMs work, or what limitations they might have, or what "hallucinate" means in the context of ChatGPT. Almost everyone I know is much more likely to believe AIs have already, or will soon, put them out of a job and start turning us into slaves or launching nuclear strikes, because that's the nonsense they get fed by the media.

[1] https://en.wikipedia.org/wiki/ELIZA_effect


>doesn't actually happen often enough to care about.

That's my entire point. It doesn't happen often enough to care about.

Sounds like you have some anecdotal experience of it happening to your entire family and a lot of your friends.

I experience the opposite. It has happened to exactly none of my friends and family.

We do live in contradictory universes, where you experience one thing and I experience another. Given the contradiction, let's refer to the shared experience: nobody on this entire HN thread has experienced the ELIZA effect. The shared experience proves my POV.

>Almost everyone I know is much more likely to believe AIs have already or will soon put them out of a job

The first part of your sentence has a higher likelihood of being true. The reason is that there are instances of it already happening. It's limited, given the limitations of LLMs, but we are at a point where, if the hallucinations are fixed, it can very much replace many jobs.

Nuclear strikes and slavery is a bit far fetched.


I think you misread my first sentence.


No. You just expressed your point with a logical mistake.

You wanted to explain why I can't find evidence of the ELIZA effect on HN, but you didn't realize that this contradicts your overall point about the effect.

I exploited the flaw to point out the contradiction in your thinking. Your ideas are not logically coherent; you're following a sort of bias here, where you're trying to construct ideas to support that bias.


> If you think you're superior because you're in software or IT or you've done ML in different contexts? I'm here to tell you, that unless you're not a layman and actually build LLMs, you're not because all your info is from the same place as all the other people outside of tech.

You don't need expertise building LLMs to interpret the hype and see who has financial interests in promoting non-existent technology, calling it AI and allowing people to get excited and/or scared because the same terminology gets used in 2001 and Terminator. How LLMs actually work has very little to do with how people perceive them, or how they might affect daily life, to say nothing of how they might affect share prices.

> it's the actually the experts who started the extinction AI petition

Some experts, and lots of people who have not themselves built LLMs, such as Elon Musk, Yuval Noah Harari, Steve Wozniak. Look at the foundation behind the letter and show me all the AI experts who know how to build LLMs [1]. Alan Alda? Morgan Freeman? Sure, some people with respected names in the field did sign, as did some VCs and executives most likely trying to buy time for their own products.

Some experts who didn't sign the letter: Stephen Wolfram [2], Juergen "father of AI" Schmidhuber [3], Rodney Brooks [4], Donald Knuth [5]. I think I can safely state that no consensus exists among the "experts" who do know how LLMs work, and how other things that got called "AI" in the past work. You can find just as much skepticism as FUD. Rather conveniently the AI hype-train makes forgetting about Web3, the metaverse, and crypto easier, even if you see the same names come up.

> Anyway an hour with chatGPT and reading some articles on LLMs is enough for anyone to know the boundaries of LLMs.

No, that isn't enough time, not even for someone who knows a little about what they're looking at.

You seem to make two inconsistent rebuttals: Most people can quickly figure out the limitations of LLMs like ChatGPT, and critics who haven't made LLMs don't know what they're talking about and should shut up. As for the first claim, I'll point out that people have used microwave ovens for five decades and way too many everyday users associate microwave ovens with radioactivity. People drive cars every day and can't explain the first thing about the technology. I think you grossly overestimate how deeply people understand technology they use every day. Just ask a cell phone user how cellular communications works.

Dr. Johnson punctured the second argument a long time ago: "You may scold a carpenter who has made you a bad table, though you cannot make a table. It is not your trade to make tables."

> Mind you, there will be stories of people that are fooled but those people are obviously a huge minority of chatGPT users.

ChatGPT is already plugged into the Bing search engine, and into Windows, MS Office, and other products. New integrations and plugins get announced every day. The technology is rolling out behind other products and applications, and very soon (if not already) regular users will interact with ChatGPT or something like it without knowing that. They may spend hours with a chatbot and not interpret bad information as a hallucination or error. As so many experts have already talked about, the danger from what we're calling AI this decade isn't SkyNet or getting turned into paperclips, but people fooling themselves, or getting deliberately tricked and scammed, by a technology that they don't understand and have attached sci-fi capabilities to. People with financial interests like Sam Altman out there promoting fantasies of AI threatening humanity perpetuate the ignorance by pretending we're on the edge of the sci-fi dystopia, when actually LLMs are nowhere near AGI, and Altman knows that. "Look over there, watch out for the killer robots while I pick your pocket." Ker-ching.

[1] https://futureoflife.org/open-letter/pause-giant-ai-experime...

[2] https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...

[3] https://www.forbes.com/sites/hessiejones/2023/05/23/juergen-...

[4] https://spectrum.ieee.org/gpt-4-calm-down

[5] https://www-cs-faculty.stanford.edu/~knuth/chatGPT20.txt


>Some experts, and lots of people who have not themselves built LLMs, such as Elon Musk, Yuval Noah Harari, Steve Wozniak.

Never denied this. I did not intentionally hide it. I will say your statement did not mention that one of the signers is the premier expert: Geoffrey Hinton, the person who reignited interest in neural networks.

It's not "some" experts. It's many many preeminent experts. It's more accurate to say, not "all" experts. That is the more fair characterization.

> People drive cars every day and can't explain the first thing about the technology. I think you grossly overestimate how deeply people understand technology they use every day. Just ask a cell phone user how cellular communications works.

Nobody fully understands these networks, not even the people who build them. The "experts" admit this. What everyone understands is that these networks hallucinate. The hallucinations aren't deep things to understand; they're on the level of driving cars.

You misunderstand how trivial it is to use this technology and figure out the limitations, just like you figure out how to drive a car. We are all talking about user-level phenomena. You don't even have to describe what a neuron or backpropagation is to understand "hallucinations". Hallucinations aren't technical-level concepts.

>ChatGPT is already plugged into the Bing search engine, and into Windows, MS Office, and other products. New integrations and plugins get announced every day.

Yeah, pretty much everyone I know has used LLMs in some form or manner, and none of them were fooled. I'm sure the same goes for the people you know. You characterized how popular LLMs are; what you failed to characterize is the proportion of people who are using the tool and getting fooled by it.

It is a fact that the tool is popular and getting more popular, but this says nothing about the topic at hand. How many people are getting fooled by ChatGPT? You're just speculating the outcome based on how popular the tool is. Speculation proves nothing.

I'm still waiting for that one guy to come to this thread and tell everyone ChatGPT is sentient. If one guy claims this, then you're right, and I completely misunderstood how dumb people can get.

I mean don't even focus on this thread under my post. Let's focus on the entire thread under the main article. Is there even one person who completely thinks the AI is sentient?

Find one guy on this entire thread who is honestly fooled by chatGPT and I'll concede to your argument. If you can't find even one guy... Then if you're rational you'll see that my argument is true: nearly no one is fooled by chatGPT.

This is literally a data driven conclusion. Let's use some amateur science to get to the bottom of this.


> Nobody fully understands these networks. Not even the people who build them. The "experts" admit this.

That's repeated a lot but it's not entirely accurate. No one can explain how an LLM gives the answers it does (not even the LLM). LLMs have a vast search space of tokens and use probabilities to make their responses non-deterministic. But the people who build and train LLMs do know how they work -- obviously since quite a few people know how to make one. By analogy, if I give a pile of Legos to a six-year-old I don't know what they will make, though I do know the constraints and limits (imposed by how Legos work and what was in the pile). It's not correct to say "I don't understand how Legos work" when I really mean "I can't predict what a six-year-old will make from a pile of Legos."
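The non-determinism described here is mechanical, not mysterious. A toy sketch of how it works: the model scores every token in its vocabulary, and a temperature-scaled softmax turns those scores into a probability distribution that gets sampled rather than taking the single best token. (The tiny vocabulary and logit values below are invented for illustration; real models score tens of thousands of tokens.)

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick one token from a dict of {token: logit} via temperature-scaled softmax sampling."""
    # Scale logits: low temperature sharpens the distribution, high temperature flattens it.
    scaled = [x / temperature for x in logits.values()]
    m = max(scaled)  # subtract the max before exp() for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token according to the cumulative probabilities.
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(logits, probs):
        cumulative += p
        if r < cumulative:
            return token
    return list(logits)[-1]

# Invented logits over a tiny vocabulary.
logits = {"cat": 2.0, "dog": 1.5, "the": 0.1}

# Near-zero temperature collapses to greedy decoding: the same token wins every time.
# At temperature 1.0, repeated runs can return different tokens.
greedy = [sample_next_token(logits, temperature=0.01) for _ in range(5)]
print(greedy)  # ['cat', 'cat', 'cat', 'cat', 'cat']
```

This is also why "the people who build them know how they work" and "no one can explain a given answer" are both true: the sampling mechanism is simple and fully understood, while the learned logits are not interpretable.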

> You misunderstand how trivial it is to use this technology and figure out the limitations ... Hallucinations aren't technical level concepts.

I get that. But when I have discussed ChatGPT hallucinations, with examples from my own chats, I'm surprised when people don't even recognize the hallucinations until I point them out. Anecdotally people seem to defend the "AI" by accusing me of misleading it, or giving unclear information. They have anthropomorphized and then want to impose human notions of fairness, give the "AI" the benefit of the doubt even when they know that a person would have not made the mistake, or answered "I don't know" rather than confidently make stuff up like ChatGPT will. I think people don't believe computers can lie or make a mistake -- they imagine an intelligence like Mr. Spock at the other end, not a stochastic parrot.

I got ChatGPT to tell me -- in its confident and authoritative tone -- that no even number is also evenly divisible by 3. When I gave it counterexamples -- 6, 12, 24 -- it then apologized but maintained that no even number was divisible by both 3 and 5 (um, 30, 60, 90...). I was just trying to get it to solve FizzBuzz with some variations. I could feed my younger children that same misinformation and they wouldn't question it. I could tell my parents that their Alexa listens to everything they say and records it forever on a big disk on a satellite in orbit, and they would believe me. Elon Musk can tell the world Teslas can drive from SF to New York without human intervention and get a whole TED Talk audience and media "experts" to believe him. P.T. Barnum had some quips about that tendency.
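(For what it's worth, both of ChatGPT's claims fall to a few lines of Python -- nothing deep, just enumerating small counterexamples:)

```python
# Claim (1): "no even number is divisible by 3"       -- refuted by 6, 12, 18, ...
# Claim (2): "no even number is divisible by 3 and 5" -- refuted by 30, 60, 90, ...

even_div_3 = [n for n in range(1, 100) if n % 2 == 0 and n % 3 == 0]
even_div_3_and_5 = [n for n in even_div_3 if n % 5 == 0]

print(even_div_3[:3])        # counterexamples to claim (1): [6, 12, 18]
print(even_div_3_and_5[:3])  # counterexamples to claim (2): [30, 60, 90]
```

Any even multiple of 3 is a multiple of 6, so counterexamples are everywhere -- which is what makes the confident wrong answer so jarring.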

> Find one guy on this entire thread who is honestly fooled by chatGPT and I'll concede to your argument. If you can't find even one guy... Then if you're rational you'll see that my argument is true: nearly no one is fooled by chatGPT.

That's not proof of anything. Maybe no one will chime in on this thread, but you can easily find posts and comments on HN from people claiming LLMs are sentient. A Google researcher said as much publicly (he got fired), and much discussion took place here. If I were more interested in the topic I could poll people, but I'm pretty sure I would find that quite a lot of people think current "AI" systems (LLMs like ChatGPT) are sentient, or will be in the next couple of years. And then I could ask them what "sentient" means and never stop face-palming.


Especially as the erroneous conflation of "sentient" and "sapient" has strong science-fiction roots, too. I've often seen them wrongly conflated in science-fiction stories, not so much outwith science fiction. As I have mentioned before, my suspicion is that decades ago some influential science-fiction author or editor made this error, and it stuck.


>That's repeated a lot but it's not entirely accurate. No one can explain how an LLM gives the answers it does (not even the LLM).

Uh, I literally said no one fully understands these networks. You go on to say my statement isn't accurate, then confirm it by saying:

>No one can explain how an LLM gives the answers it does (not even the LLM).

I mean, this is exactly what I said. We can't explain it... because we don't fully understand it.

>But the people who build and train LLMs do know how they work -

No, they actually don't. The surprisingly accurate responses of chatGPT were not predicted. Many experts literally do not fully understand what's going on. This is categorically true, and I can quote them if you need it, but it's easily googleable.

>I could feed my younger children that same misinformation and they wouldn't question it. I could tell my parents that their Alexa listens to everything they say and records it forever on a big disk on a satellite in orbit, and they would believe me.

>A Google researcher said that publicly (he got fired), and much discussion took place here. If I was more interested in the topic I could poll people,

I already polled people and did a Google search of HN. My other post was a poll, and the Google search yielded nothing. This is actually quite strong evidence. HN has multitudes of users; not being able to find even one is a nearly zero ratio.

The researcher who got fired by Google is an interesting case. The reason is that he's not referring to gpt4 or gpt3.5 or bard. In subsequent interviews he has said he's referring to LaMDA, an internal Google LLM that hasn't been released. He said that one is "awake" and specified directly that it's different from the LLMs the public currently plays with.

Nobody can confirm or deny that statement because we can't directly interact with LaMDA; Google has it locked down pretty hard.


Yeah, it's an interesting divide. Two EECS professors actually teaching deep learning said in lecture that nobody truly understands anything right now. Then you have many other scientists who call it a stochastic parrot, or super-autocomplete. I would love to see a public panel discussion between a bunch of experts in this field and see them air out their views.


It is ironic that you just anthropomorphized it yourself by using "it forgets".


When you delete things from your hard drive, that fits the definition of making your computer forget something.

Look up the definition of "forget". It is not a human-exclusive action, so using it is not "anthropomorphizing": https://www.merriam-webster.com/dictionary/forget


co-author here: I warn you, I've got a paperclip and I'm not afraid to use it.


I don't actually understand your joke here.



