A gun is not smart or magical, but it is nevertheless a powerful tool that can be scary depending on who is holding it. Accordingly, I worry less about my occupation and more about the moral character of those wielding it. Further, I worry about "smart" people who have never been acquainted with the dark side of human nature facilitating bad actors.
Statistically speaking, though, if you leave guns lying around everywhere, or introduce guns as a service for only $10/month and tell everyone that guns will solve all their problems, then you’re going to end up with fuckwits with guns.
A gun’s purpose isn’t to be smart. The gun equivalent of this post would be “Why guns still aren’t very good at killing” and that would be a serious problem for guns if that were true.
When a toddler can pull the trigger and kill someone, you may argue guns are pretty good at killing. Key point being, people don't have to be good at guns to be good at killing with guns. Pulling the trigger is accessible to anyone.
How often does that actually happen? Only when a gun owner was irresponsible and left a loaded gun in a place accessible to the toddler.
Similarly, AI can easily sound smart when directed to do so. It typically doesn't actually take action unless authorized by a person. We're entering a time where people may soon be willing to give that permission on a more permanent basis, which I would argue is still the fault of the person making that decision.
Whether you choose to have AI identify illegal immigrants, or you simply decide all immigrants are illegal, the decision is made by you, the human, not by a machine.
Not the OP, but my best guess is that it's an alignment problem, much like a gun killing something the owner did not intend to kill. The power of AI to make decisions that are out of alignment with society's needs is the "something, something." As in the healthcare examples above, it can be efficient at denying healthcare claims, and the lack of good validation can obscure its alignment with bad incentives.
I guess it depends on what you see as the purpose of AI. If the purpose is to be smart, it’s not doing very well. (Yet?) If the purpose is to deflect responsibility, it’s working great.
My issue with this line of thinking is not that it's wrong, but that it's being manipulated by Silicon Valley.
OpenAI is not arguing that AI is harmless; they are agreeing that it's dangerous. They are using that to promote their product and hype it up as world-changing. More worrying, they're advocating for regulations, presumably the sort that would make it more difficult for competitors to come in.
I think we can talk about the potential dangers of AI. But that should include a discussion of how best to deal with them, and an awareness of how fear of AI might be manipulated by Silicon Valley.
Especially when that fear involves misrepresentation - e.g. AI being presented to the public as self-directed artificial consciousness rather than as algorithms that mimic certain reasoning capabilities.
I think the acknowledgement of danger by various companies is definitely a marketing tactic to a degree, and it's important to see the actions of those companies for what they are.
But then there's whatever danger actually exists regardless of the business maneuvering.
I'm not saying this is what you're doing, but I've been in numerous discussions where someone will point to this maneuvering and then conclude that virtually all danger is manufactured/nonexistent and exists only for marketing purposes.
> eg. AI being presented to the public as self directed artificial consciousness rather than algorithms that mimic certain reasoning capabilities
I think the fact that these tools can be presented in that way and some people will believe it points to some of the real dangers.
If you hold a gun and point it at me, I'd be more scared of that than of you holding AI and pointing it at me.
AI right now is not powerful and not scary.
But follow the trendlines. AI is improving. From 2010 to now the pace has been relentless, with milestone after milestone passed. AI right now is a precursor to something not so dumb and not so "not scary" in the future.
This is a frustrating piece on many levels, but mostly because it doesn't really scratch the surface of what is worrisome about AI. It sets up straw men and knocks them down, but not much else.
It seems to boil down to:
1. LLMs aren't actually "learning" the way humans do so we shouldn't be worried
2. LLMs don't actually "understand" anything so we shouldn't be worried
3. Technology has always been advancing and we've always been freaking out about that, so we shouldn't be worried
4. If your job is automate-able, it probably should be eliminated anyway
What's scary is not that these models are smarter than us, but that we are dumb enough to deploy them in critical contexts and trust the output they generate.
What's scary isn't that these models are so good they'll replace us, but that despite how limited they are, someone will make the decision to replace humans anyway.
What's scary isn't that LLMs will displace good developers, but that LLMs put the power of development in the hands of people who have no idea what they're wielding.
> Sure, with millions upon millions of training examples, of course you can mimic intelligence. If you already know what’s going to be on the test, common patterns for answers in the test, or even the answer key itself, then are you really intelligent? Or are you just regurgitating information from billions of past tests?
How different are humans from this description in actuality? What are we if not the results of a process that has been optimized by millions upon millions of iterations over long periods of time?
> What's scary isn't that these models are so good they'll replace us, but that despite how limited they are, someone will make the decision to replace humans anyway.
Is this a real threat? If a system/company decides to replace a human with something less capable, wouldn't that just result in it becoming irrelevant/bankrupt as it is replaced by other companies doing the same thing in the more efficient (and in this case traditional) way?
Has any company ever truly believed that outsourcing their customer service to India would improve the experience for their customers? No, but they did it anyway, because it cut costs. AI can be obviously worse than humans and still put them out of work, because GPUs are much cheaper than people.
Not necessarily. Imagine a health insurance provider even partially automating their claim (dis)approval process - it could be both lucrative and devastating.
A similar issue arises with health insurance. Using AI to evaluate claims is a huge efficiency play, and you don’t have much ability to fight it if something goes wrong. And even if you can, these decisions can be life or death in the short term, and human intervention usually takes time.
I'm not entirely sure what you're getting at re: hype.
While there is undoubtedly a lot of hype around these tools right now, that hype is based on a pretty major leap in technology that has fundamentally altered the landscape going forward. There are some great use cases that legitimize some of the hype.
As far as concrete examples, see the sibling comment with the anecdote regarding health insurance denial. There are also portions of the tech industry focused on rolling these tools out in business environments. They're publicly reporting their earnings, and discussing the role AI is playing in major business deals.
Look at players like Salesforce, ServiceNow, Atlassian, etc. They're all rapidly rolling out various AI capabilities into their existing customer bases. They have giant sales forces all actively pushing these capabilities. They also sell to governments. Hype or not, it adds up to real world outcomes.
Public statements by Musk about his intention to use AI also come to mind, and he's repeatedly shown a willingness to break things in the pursuit of his goals.
Worse for the consumer or the provider? If the LLM is going to fundamentally do a "worse" job no matter what the incentive (maximising profit, maximising claims, whatever it may be), we will end up with the "more efficient" system in charge.
The counterpoint to this (which I guess is the tenet of the original comment?) is that hype can overshadow good judgement for a short period of time.
This seems to just ignore what's going on in the world. Some parts of the software development domain are definitely being affected by AI. I see a particular shift in web development and possibly embedded systems firmware development (or any other field where there is an exact specification of what you're supposed to deliver).
LLMs don't need to be AGI. There's a race to adjust testing, architecture and infrastructure to fit AI agents, which could then change them as needed. Some tools are already pretty good, and they will only get better.
Long story short, we are definitely going to see a massive shift in jobs in the coming years.
You make an interesting point about embedded specifications. But doesn’t that assume two things: 1) the specs are clear and complete, 2) we’re ok trusting AI to develop safety critical software in those subsets of embedded applications?
Dumb stuff is what I want. Classifiers and denoisers are really useful.
What I don’t want is a shitty chat bot that barely understands what you ask it and generates shitty code by sticking a corpus of shitty code together then leaving me with the unconstrained job of working out if it has any land mines in it. Then multiply that by 50 engineers. Then tell me that’s the future. Because if it is we are totally fucked. Our decline will be noted by the general decline in capacity to do anything.
I asked myself this question today: Given the AI hype, I wonder whether there will be employers who force their employees to use generative AI. Otherwise employees will be deemed inefficient and be fired.
Employers might believe that not using AI puts them and their investments at a disadvantage, while ignoring the overall quality issues.
If AI provides even a 1% productivity boost, they will, and we'll all be worse off for it when everyone forgets how to code without relying on fallible, proprietary services. Sure, local LLMs are a thing, but they currently are vastly inferior to the cloud offerings, and I don't see the status quo changing any time soon.
We all agree that employers shouldn't be allowed to force the employees to use drugs for short-term productivity boosts. I hope we'll see something similar for AI - it has the potential to be equally damaging to employees in the long term.
Ugh, I really dislike this sort of hand-waving blog post. The author cobbles together an argument dispelling all concerns about AI proliferation by assembling a few Tweets. Don’t mistake this for scholarship. I believe the true answer to the question of whether we should be frightened of these gigantic models is far more nuanced and way less certain.
A more honest blog post would read something like this: big AI models are interesting and useful. And there’s the potential for danger. We should be cautious and make sure we have some bright scholars studying what might happen if various scenarios play out. Just as we would want to in response to any similarly massive advancement in technology.
Crap article with zero insight that wasn't worth my time.
There are better ways to argue the same position but it's probably indefensible.
Neural networks are fundamentally approximators, whether they are approximating long-range relations between concepts (as LLMs do) or denoising images. They approximate thought and intelligence, because we have encoded it in our writings. Our writings are a complete fingerprint of the thought process, because we are able to pick up any thought process from a book without ever seeing or otherwise having contact with the author. Therefore there is increasingly high-fidelity "real thought" in there.
> that in and of itself is a monumental achievement but there is no real thought involved.
Pretty much anything that requires our current level of thought is therefore reachable with ANNs now that we know how to train them in depth and width.
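The "approximator" framing is easy to demo, for what it's worth. Here's a toy sketch of my own (not from the article): a one-hidden-layer network with fixed random ReLU features and a least-squares readout, approximating sin(x) purely from samples.

```python
# Toy universal-approximation demo: fixed random ReLU features plus a linear
# readout fit by least squares, approximating sin(x) from samples alone.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()

W = rng.normal(size=(1, 100))          # random hidden-layer weights
b = rng.normal(size=100)               # random hidden-layer biases
H = np.maximum(0, x @ W + b)           # ReLU hidden activations

w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit the linear readout
print("max approximation error:", np.max(np.abs(H @ w_out - y)))  # typically small
```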
The real question is whether this is enough. We want ASI, but our texts only contain AGI, and everything that comes from biology (including intelligence) scales logarithmically. There is zero evidence that language models will ever learn to create abstract entities better than any collection of humans does. AI companies are advertising armies of PhD students, but we already have millions of PhD students, yet our most pressing problems have not made much progress in decades. That's what should worry us, not the fact that we will all lose our jobs.
> Our writings are a complete fingerprint of the thought process, because we are able to pick up any thought process from a book, without ever seeing or otherwise having contact with the author
That doesn't follow: even a simple encryption program can make that possible without an intermediate actor being able to crack it.
Humans all run very similar software/hardware, so we can read what each other writes, but so far the computer isn't close to having the same behind-the-scenes thoughts. Meaning all the things we humans leave out when we write are actually important, so the text isn't everything.
>yet our most pressing problems have not made a lot of progress for decades.
You mean energy and global warming? We will pretty much hit the worst case scenario for both of these. And we've been yapping about these problems for decades so nobody cares even though they are likely 10000x more relevant.
But you could trust a $3 drug-store checkout-aisle slim-wallet calculator (with solar panels for recharging) to perform those calculations, and you'd get more nines of reliability from that calculator than from the LLM. And it didn't cost hundreds of billions of dollars to develop the calculator, so you won't end up paying thousands of dollars for it.
That LLMs cannot beat a $3 calculator is because they're not fit for purpose. Everything they offer is a hallucination. Just because that fabrication matches with reality some of the time does not make it good. Reliability matters and these things just don't have it.
Nobody is trying to raise 7 TRILLION dollars of investment on the premise that calculators will be capable of doing any task, but OpenAI swears to god that any day now their language model will become a general intelligence capable of doing all things, including math.
This is a shallow and dated article, even when it came out in 2023. LLMs are dumb because they can't multiply and can't generalize? They can easily write programs and calculate anything you want, and machine learning is all about generalizing.
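On the "they can write programs" point, the usual pattern is to have the model emit code and let an interpreter do the arithmetic exactly. A rough sketch of that pattern (generate_code here is a hypothetical stand-in for a model call, not a real API):

```python
# Sketch of the "delegate math to code" pattern: the model writes an
# expression, the interpreter evaluates it exactly. generate_code() is a
# hypothetical placeholder for a model call.

def generate_code(task: str) -> str:
    # A real model would produce this string from the task description.
    return "123456789 * 987654321"

def run_math_task(task: str) -> int:
    expr = generate_code(task)
    # Python integers are arbitrary precision, so the arithmetic is exact.
    # In practice you would validate/sandbox model-generated code first.
    return eval(expr, {"__builtins__": {}})

print(run_math_task("multiply 123456789 by 987654321"))  # 121932631112635269
```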
Hot take: if your job can be partially or wholly eliminated by AI, that’s a GOOD THING. If your job has patterns that predictable or labor that routine, AI automation is a GOOD THING.
What if it led to widespread under- or unemployment? That social upheaval would lead people to set aside the niceties of civilization as they fight to meet their basic needs. And such people elect bad governments, which exacerbates the problem.
One might also be scared in a physical sense. The race is afoot to develop physical robots on par with humans. Imagine that as your policeman, or soldier, bringing democracy to a country near you.
The entire point is that LLMs don't think, they're just regressions of information you'd find on the internet. And automation is a good thing. 90% of Americans were farmers in the 1700s; automation completely revolutionized that. Your final point is still a concern in general, specifically what will happen when menial labor no longer exists. At that point, you have to hope that all the wealth generated from automation will be redistributed back in the form of a basic income, otherwise yes, we'll have a lot of people with no means to survive.
Automation is a good thing? It definitely made the rich richer, but I'm not sure it made the average person happier. Depression rates are at an all-time high, after all.
An LLM is one very big nonlinear regression used to pick a token, with a clearly defined input, output, and corresponding weights. It's still far too straightforward and non-dynamic (the weights aren't changing, even during a single inference) compared to the human brain.
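To make the "big regression that picks a token" description concrete, here is a toy sketch (illustrative only; the vocabulary and logits are made up, not any real model's internals):

```python
# The last step of next-token prediction: the network emits one score (logit)
# per vocabulary item, softmax turns the scores into probabilities, sample one.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, -1.0, 0.2])  # pretend model output
temperature = 0.8                               # lower = more deterministic

probs = np.exp(logits / temperature)
probs /= probs.sum()

rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```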
As far as the latest "thinking" techniques, it's all about providing the correct input to get the desired output. If you look at the training data (the internet), the hardest and most ambiguous problems don't have a simple question input and answer response, they instead have a lot of back-and-forth before arriving at the answer, so you need to simulate that same back-and-forth to arrive at the desired answer. Unfortunately model architecture is still too simple to implicitly do this within the model itself, at least reliably.
Learning and thinking are separate things. Today's models think without learning -- they are frozen in time -- but this is a temporary state borne of the cost of training. I actually like it like this because we don't yet have impenetrable guardrails on these things.
> If you look at the training data (the internet), the hardest and most ambiguous problems don't have a simple question input and answer response, they instead have a lot of back-and-forth before arriving at the answer, so you need to simulate that same back-and-forth to arrive at the desired answer. Unfortunately model architecture is still too simple to implicitly do this within the model itself, at least reliably.
Today's thinking models iterate (with function calls and Internet queries) and even backtrack. They are not as reliable as humans but are demonstrating the hallmarks of thinking, I'd say.
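For concreteness, this is roughly what the iterate-with-function-calls loop looks like; a minimal sketch, where ask_model and the tools dict are hypothetical placeholders rather than any real library's API:

```python
# Minimal agent loop: the model either requests a tool call or gives a final
# answer; tool results are appended to the transcript so the model can revise
# its approach on the next step. ask_model() and tools are placeholders.

def run_agent(ask_model, tools, question, max_steps=10):
    transcript = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = ask_model(transcript)              # assumed to return a dict
        if step.get("final_answer") is not None:
            return step["final_answer"]
        result = tools[step["tool_name"]](step["tool_args"])  # e.g. "search"
        transcript.append({"role": "tool", "content": str(result)})
    return None  # no answer within the step budget
```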
> At the heart of it, all AI is just a pattern recognition engine. They’re simply algorithms designed to maximize or minimize an objective function. There is no real thinking involved. It’s just math. If you think somewhere in all that probability theory and linear algebra, there is a soul, I think you should reassess how you define sentience.”
We have two words we need to define: “sentience” and “thinking”.
Sentience can be defined as a basic function of most living systems (viruses excluded). You may disagree, but to me this word connotes "actively responsive to perturbations." This is part of the autopoietic definition of life introduced by Maturana and Varela in the 1970s and 1980s. If you buy this definition then "sentience" is a low bar and not what we mean by AI, let alone AGI.
The word “thinking” implies more than just consciousness. Many of us accept that cats, dogs, pigs, chimps, crows, parrots, and whales are conscious. Perhaps not an ant or a spider, but yes, probably rats since rats play and smile and clearly plan. But few of us would be likely to claim that a rat thinks in a way that we think. Sure, a rat analyzes its environment and comes to decisions about best actions, but that does not require what many of us mean by thinking.
We usually think of thinking as a process of reflection/recursion/self-evaluation of alternative scenarios. The key word here is recursion, and that gets us to "self-consciousness"—a word that embeds recursion.
I operationally define thinking as a process that involves recursive review and reflection/evaluation of alternatives with the final selection of an “action”—either a physical action or an action that is a “thought conclusion”.
What AI/LLM transformers do not have is deep enough recursion or access to memory of preceding states. They cannot remember, and they do not have a meta-supervisor that lets them modulate their own attentional state. So I would agree that they are still incapable of what most of us call thinking.
But how hard is adding memory access to any number of current generation AI systems?
How hard is it to add recursion and chain-of-thought?
How hard will it be to enable an AI system to attend to its own attention?
In my optimistic view these are all close to solved problems.
In my opinion the only things separating us from true AGI are a few crucial innovations in AI architecture—mainly self-control of attention—and under 100k lines of code.
Please argue! (I’ve had many arguments with Claude on this topic).
The distinction between human thinking and what the current AI hype cycle calls "thinking" is that all the machine model is doing is outputting the most probable text patterns based on its training data. We can keep adding layers on top of this, but that's all it ultimately is. The machine has no real understanding of what the text it's outputting means or what that text represents in the real world. It can't have the intuitive grasp of these concepts that a human, with our five senses, builds up over a lifetime.
This is why it's disingenuous to anthropomorphize any process of the current iteration of machine learning. There's no thinking or reasoning involved. None.
This isn't to say that this technology can't be very useful. But let's not delude ourselves thinking that we're anywhere close to achieving AGI.
"Arguing" about this topic with an LLM is pointless.
I disagree. Can we characterize the difference in thinking between a chimp and a human? In my opinion the genetic and biological differences are trivial. But evidently humans did evolve some secret sauce: language. Language bootstrapped human consciousness to what we now call self-consciousness, to a higher level than in any other vertebrate, over a period of a few million years and without the benefit of teams of world-class thinkers and programmers.
I have optimism with reason. Humans are fancy thinking animals. I admit I think the “hard problem” in consciousness research is bogus.
> Can we characterize the difference in thinking between a chimp and a human?
We can, but the difference is not in whether we think or not, but in _how_ we think. We both experience the world through our senses and have conceptual representations of it in our minds. While other primates haven't invented complex written language (yet), they do have the ability to communicate ideas vocally and using symbols.
Machines do none of this. Again, they simply output tokens based on pattern matching. If those tokens are not in their training or prompt data, their output is useless. They have no conceptual grasp of anything they output. This process is a neat mathematical trick, but to describe it as "thinking" or "reasoning" is delusional.
How you don't see or won't acknowledge this fundamental difference is beyond me.
We define basic words differently. Yes, some chimps do have theory of mind. But to be equally aggressive in questioning you: how do you not see the qualitative difference between us and chimpanzees?
And how do you not see the amazing progress in LLM output? And how do you not see how close LLMs with "chain-of-thought" have gotten to the "appearance" of thinking? And how was I not clear enough in my first post, where I highlighted explicitly that LLMs are not yet thinking?
Do not use me as your straw chimp unthinking human.
> "Arguing" about this topic with an LLM is pointless.
I don't think so. Unlike many humans, a bunch of LLMs can concede the point that being a cucumber eater or a sound maker doesn't exclude the possibility of also being a thinker.
I totally buy the consciousness bit. I mean why did people even need to dive deep into "philosophical" discussions when there's nothing wrong with "a conscious horse" vs "an unconscious horse" in the first place which is a dead simple distinction? Sometimes an unconscious horse is just a horse without consciousness.
The problem is how AI is and will be used and how unaccountable it is.
An AI scans job applicants. Is it biased? Is it conforming with legal requirements? A person's bias can be an issue in a lawsuit; AIs are a much more difficult target, at least for now. This is why companies like them.
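One concrete check that does get applied to screening models, for what it's worth, is a disparate-impact test on selection rates (the "four-fifths rule" from US hiring guidance). A toy sketch with made-up numbers:

```python
# Toy four-fifths-rule check: compare the screener's pass rate per group;
# a ratio below 0.8 is conventionally treated as a red flag. Data is made up.
from collections import defaultdict

decisions = [  # (applicant_group, passed_screen)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, passed = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    passed[group] += ok

rates = {g: passed[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, "selection-rate ratio:", round(ratio, 2))  # 0.33 here -> flagged
```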
You know what the impact will be? More scandals like Hertz falsely reporting people to the police for stealing cars [1].
AI will make financial and lending decisions that could easily become redlining 2.0.
Companies are so desperate to eliminate labor. People have rights. They need to be paid. They can strike.
If the AI purveyors fulfil their dream, companies will be automated, but nobody will have any money because nobody will have a job anymore.
>Humans don't need to learn from 1 trillion words to reach human intelligence.
What are LLMs missing?
A Yann LeCun quote from the page.
LLMs are blank slates. Humans have millions of years of pretraining recorded in the neural network, not as weights but as the structure of the neural network itself. Our physical bodies are biased towards living in an oxygen environment with light and ground, and our brains are biased in that same direction.
If you put humans in a completely different context - an utterly, completely brand-new context, say 50-dimensional space, where all our senses become useless and logic and common sense are completely overridden by new rules...
the human will perform WORSE than the LLM when trained on the same text.
>AI does not really think, it just seems like it does.
Neither do humans.
Sorry to disappoint the super-AGI hopefuls: we're already past that point, because the average human is only a 3B-parameter LLM; there's nothing magic that comes with an AGI.
The first time GPT-2 autocompleted a sentence admitting a previous fault in its output, we were in the post-human era.
Everything we've seen since then has been us living through the legendary post-AGI AI bootstrap launch, but living the future isn't exactly living up to the hype, huh?
That GPT-4.5 model that had the Samaltmanists wriggling on the floor in rapturous ecstatic orgasms just last year? Today it's an embarrassment, like the seed-stained lesswrong posters of yore.
The singularity was a star trek plot device.
What we see today is the real deal, and I reiterate my 2015 prediction: by the time an actually scary, god-like AI is created (say in 2050), we'll have so many thousands or millions of demi-god AIs around that the god itself will have to have the good graces to shut up and take a queue ticket in the bare-metal immigration zone with the rest of the rabble. The prospect of true AI divinity will be a (hype-burned) washed-out gray with a trace sprinkle of glitter on the already existing background of demi-divine AI.
> Hot take: if your job can be partially or wholly eliminated by AI, that’s a GOOD THING. If your job has patterns that predictable or labor that routine, AI automation is a GOOD THING.
The article never says why this is good. It just states it then moves on.
> As a whole, I’m sure it’ll give birth to entirely new systems we haven’t even conceived of yet and, one can hope, free up that time and energy towards more meaningful or creative pursuits.
The people whose jobs it takes probably don't give af about the patterns of their job, or about the creative pursuits they can do while unemployed. They are probably just trying to pay their bills and feed their kids.
there is scary, as in physically dangerous
and then there is scary, as in I can't stand looking at media any more scary
and scary as in my job is threatened
or the price-fixing scams have unhomed you
or the app that provided my job leads has been super-optimised and is no longer viable
and scary as in all of the above can be ignored
or defended, as in the app "just did it" defence
with no one liable for the abuses, scary
"Change is inevitable. You can’t stop the railroad as they used to say. It’s going to kill some jobs but not all of them."
I personally don't worry about jobs. AI is progressing very fast (there are a lot of smart folks working on it, there is a ton of money invested, and there's a lot of demand from businesses, governments and individuals). Human intelligence stays the same. I think it's likely that sometime soon AI models will become more intelligent than the average human. And then more intelligent than the smartest human. And then more intelligent than the whole human race combined.
Let's say some worms 600 million years ago could think, and they were deciding whether to kill all mutants and remain forever the pinnacle of evolution as they are, or to allow some of them to evolve into fish, then mammals, and eventually intelligent humans. I think we are in a position like that: we are currently the pinnacle of "creation" in the known universe. Do we want to stay that way by blocking AGI progress, or do we want to allow minds far greater than ours to evolve from current LLMs, at the cost of probable eventual human extinction?