The latest LLMs are remarkable, but I think the AI Doomer claims are still a reach. My prediction is that the current AI boom will fizzle out into a handful of useful products and a lot of smoke from VC cash. Where are the moon bases? Supersonic jets? Asteroid mines? Self-driving cars? We've gotten overexcited so many times in the past.
This one in particular, I find fascinating, because we've literally just _had_ a hype cycle over it (I remember people telling me they'd be here any day now a decade ago), and now we're onto another AI hype cycle, almost immediately. Which comes first, AGI, or the self-driving car? :)
(I think your other examples are qualitatively different; moon bases, supersonic (passenger) jets and asteroid mining are clearly possible, but impractical for economic reasons; no-one is willing to _pay_ for them. Plenty of people are willing to pay for both self-driving cars and AGI, but there is just no clear route to achieving either.)
I think the main obstacle is that intelligence seems to be inherently unreliable.
There is no shortage of human intelligence in the world. The average person is remarkably intelligent. But they need a lot of learning and practice, and often plenty of organizational support, before they can use that intelligence to do something useful reliably.
We know how to make cars that are stupid and reliable. We know how to make cars that are intelligent and unreliable. But there is still a lot of work to be done before we can make cars that are both intelligent and reliable.
The only AI product I use in my daily work is Copilot. It can usually make useful and accurate suggestions, sometimes even remarkably so, but it also makes stupid mistakes all the time. Copilot is most useful for automating boring routine work, where I can quickly catch and correct its mistakes. It's far less useful for the difficult parts, or when I don't know what I'm doing, because the mistakes it makes are less obvious to me.
Well, I mean, if you're willing to stretch the definition past breaking point, sure, but we don't have anything that the average person in the street would think of as a self-driving car. Anyone claiming a self-driving car today is a bit high on their own marketing, and there's no clear route to getting beyond that point.
I summon a Waymo car every few days, and it takes me from point A to point B for about the same price as an Uber or Lyft. This only works so long as points A and B are both in San Francisco, but for an average person in San Francisco with access to the app, self-driving cars are already here. You can argue that person isn't average for a number of reasons, but the cars are real and giving rides to members of the general public.
https://youtu.be/-Rxvl3INKSg is a news reporter's video of taking a ride in one, so you can see what the experience is like. She's not your average person, since she got access to Waymo because she's a reporter, but members of the general public are also getting access and can do the same. After watching that, I think a reasonable average person could believe that self-driving cars are here.
As was said upthread, the future is here but unevenly distributed.
This entire space has devolved into a false dichotomy between "the singularity is near" and "Third AI Winter is coming" in the popular press. Meanwhile, I'm getting paid to build domain specific chatbots for large companies and couldn't care less about "AGI".
To be fair, none of those are ultimately a matter of "can't" so much as "don't really want to." When we got a billionaire who wanted serious space colonization, we got SpaceX, which is currently working on a moon lander that qualifies as a base in its own right. And of course, we had supersonic jets, there just wasn't enough demand. People want cheap flights more than super-fast flights.
We often get excited about these things, but society usually doesn't. That may be one way in which the current AI boom is different.
They take too long to build and require that thing called engineering, which requires blood, sweat, and tears.
On the other hand, you can scrape the internet, build a model, semi-rig a couple of demos, hype the product and earn money.
Don't forget, it's all about money. Product is just a means to get money.
Building good things and earning good money as a byproduct is a thing of the old. Boomer stuff. Modern people don't do that anymore. It's not efficient, capitalist, or quick enough.
I'm enjoying the irony that a product that relies on scraping the Internet will be undone by the flood of Internet content it generates.
I think that's a much more likely outcome than AGI.
Even if AGI is possible it's going to be constrained by the training resources thrown at it. Since most of those have been thoroughly enshittified now, the outcome for intelligent AGI isn't looking good.
Even if AGI hacks into everyone's cameras and messages, it's still going to turn into a spam+selfie monster superbot, and not Operation Paperclip 2.0.
Why would you need it? All you need is one machine capable enough that it will run its own AlphaFold 3 to correctly predict the sequence of an appropriately deadly virus, pay some lab to synthesize it, and release it into the atmosphere. Or create a dead man's switch to do so, at which point you have to start treating it like North Korea: sooner or later it will build the capabilities it needs and release it anyway.
> Where are the moon bases? Supersonic jets? Asteroid mines? Self driving cars?
The first three are purely hardware problems, among other things (like profitability). But asking where the self-driving cars are in 2023 is a bit disingenuous. Yes, we don't have Level 5 autonomy accessible to the general public, but we already have Waymo and other cars driving on public streets, and we do have Tesla with their FSD, which some people are using for 90+% of their daily commutes. So yeah, it took longer than some people anticipated (and will still take a bit longer until these systems are widely deployed, bulletproof, and affordable), but we're definitely getting there.
This article is absolutely pointless. The statement was from 2009. Back then, neither side had any real evidence in favour of their argument. Deep learning and AI were still purely academic experiments, with zero practicality, and it had been like that for half a century. Back then, even the best models couldn't tell cats from dogs. Today every high schooler with a laptop can solve that problem thanks to abundant hardware and libraries. So both sides were just making up random numbers. This should be a lesson about people, despite their expertise, talking out of their asses, not an occasion for reminiscing about famous quotes like "horses are better than cars." That statement is also funnily wrong in today's world, but at the time (i.e. without gas stations and roads and infrastructure everywhere), it too was rather reasonable.
The only worthwhile thing it mentions is Ray Kurzweil, and then it dismisses him out of hand without actually considering the methodology, because it must be unsound to be so precise:
> The few who did predict what ended up happening, notably Ray Kurzweil, made lots of other confident predictions (e.g., the Singularity around 2045) that seemed so absurdly precise as to rule out the possibility that they were using any sound methodology.
The methodology (ignoring all other detailed knowledge) is roughly that our progress with computers and software has been exponential for its entire history. What lay people don't get about literal exponential growth[0] is how slow it is (vs. linear or polynomial) before it 'hockey sticks'. I assume everyone here knows that, and the Wikipedia link is only for the pictured graph as a reference. If each unit on the x-axis is a decade of computer hardware and software advancements, we may very well be only two ticks away from the singularity.
And that's pretty much all there is to it. It's held up quite well so far and has a good chance of doing so in the future.
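To make the "slow before the hockey stick" point concrete, here's a back-of-the-envelope sketch in Python; the doubling-per-decade rate is purely illustrative, not a measured figure:

```python
# Toy model: capability doubles every tick (one tick = one decade; the rate is illustrative).
growth_per_tick = 2
levels = [growth_per_tick ** t for t in range(11)]   # ticks 0 through 10

total = sum(levels)
last_two = levels[-1] + levels[-2]
print(levels)  # [1, 2, 4, 8, ..., 512, 1024]
print(f"share of the cumulative total from the final two ticks: {last_two / total:.0%}")
# ~75%: on a linear axis the curve looks nearly flat until those last ticks
```

Seen that way, being "two ticks away" doesn't contradict how undramatic the curve has felt so far.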
It's not really that the increases were slow in the old days, but that your old PC getting faster didn't mean much to people, whereas overtaking human capabilities may be more of a thing.
Not at all, but maybe you didn't read far enough to get to it.
> The statement was from 2009. Back then, neither side had any real evidence in favour of their argument. Deep learning and AI were still purely academic experiments, with zero practicality.
That's exactly what the article says:
> I was explaining that there was no trend you could knowably, reliably project into the future such that you’d end up with human-level AI. And in a sense, I was right.
and then it explicitly states the point:
> The trouble, with hindsight, was that I placed the burden of proof only on those saying a dramatic change would happen, not on those saying it wouldn’t. Note that this is the exact same mistake most of the world made with COVID in early 2020.
> I would sum up the lesson thus: one must never use radical ignorance as an excuse to default, in practice, to the guess that everything will stay basically the same.
Perhaps you didn't read my comment far enough to get it. The point is - essentially - that you can't trust experts in any field when they start arguing beliefs instead of science. It has nothing to do with confidence in technological pacing or lack thereof. It seems the author has learned nothing from his real mistake and would be prone to repeating it in the future.
If people kept bringing up something you wrote in 2009 and calling you a dumbass for what you said 14 years ago, wouldn't you put out a statement saying they're being dumbasses today?
It's already been common-behaviour altering in many ways. I'm not sure if it counts as civilisation altering, but the way we create art, get creative review, find and consume large documents have suddenly changed. Not for the majority yet, but it's only a matter of time. For the first time we can get a knowledgeable ad-hoc assistant anywhere, immediately. Even if it's been bad in many ways, it's improving almost every day.
> the way we create art, get creative review, find and consume large documents have suddenly changed.
I disagree with your first two points. I do use generative AI to create images for my SWADE/DnD campaigns, but this is not art, this is illustration. There is no story behind the style, no story behind the color choices or the meaning of the portraits.
AI has been able to imitate art since at least ImageMagick. Kazimir Malevich's art, at least. Music is the same. Unless you think a DJ mixing tracks without modifying or adding anything is an artist, in which case you might consider AI music art. I consider it entertainment.
Never got anything creative from ChatGPT, even when I was paying for it.
Bing Chat (and Copilot) is great for replacing Google and scaffolding a lot of my code, and it has definitely boosted my productivity somewhat. But there's a reason I don't use it to write my scenarios (I still use it for spell checking and syntax checking), and I've even given up on dialogue (faster to prepare it myself in the end).
It already is altering civilization, so I don't understand your doubts. I work for a large unicorn tech company, and in a single year we went from execs saying "we'll look into this AI thing but we assume it is the usual hype" to creating multiple successful AI products and encouraging developers to use AI software tools, because of the recognition of a significant benefit to the business when they are used. We are at the very start of a logistic curve, and that initial exponential rate can seem like nothing. In a few years, we'll really start to feel it, and in a decade or so the change will become a blur, more like an instantaneous shift to another civilization.
The only person I've seen talk sensibly on these topics is Stuart Russell of "Human Compatible" (read the book, it's written with great clarity). He has his head firmly on his shoulders and eschews the breathless hype.
This just seems silly to bother to defend? I followed the link to 'ridiculed on Twitter' expecting it to be horrible, 'oh yeah well I guess if you feel that personally attacked of course you'll think you have to do something'.. but it's not really? It's just someone pointing it out and yeah it's kind of funny in hindsight? Have a laugh and move on?
I decided early on that wasting my one existence trying to solve a problem that seemed unsolvable wasn't what I wanted to do, especially given the stakes (it's very easy to reason yourself into working 14 hour days if you think it means saving the world is 0.001% more likely). However, I never found a satisfying argument against the default outcome being doom.
The closest I ever came to a "solution" is to target the economic forces which lead to people researching and innovating on AI. I eventually found fine-based bounties to be an unusually potent weapon, not only here, but against all kinds of possible grey- and black-urn technologies. I wrote up my thoughts very briefly about a year ago, and they still live at https://andrew-quinn.me/ai-bounties/, but I suspect the argument is fundamentally flawed in a way I as a non-academic don't have time to suss out.
So I find myself in a strange place: Live my life mostly as normal, with a good chunk of my finances in low cost index funds, just in case the exponential starts shooting up - and just in case we don't all perish soon afterwards. C'est la vie.
> He seems to have thought someone was going to discover a revolutionary “key” to AI. That didn’t happen; you might say I was right to be skeptical of it.
The transformer architecture happened; that is the key that lets neural networks build up context-dependent reasoning, and we wouldn't have gotten to where we are today without it.
Without it, scaling neural networks doesn't add much value, since without being able to separate out contexts they fail to develop a large number of separate skills within the same model.
So yes, a very important key was found, it wasn't just scaling that got us here.
Transformers are synonymous with scaling: it was the first architecture that kept improving both as more and more layers were added and as more training data was thrown at it. All the previous models (CNNs, ResNets, LSTMs, etc.) hit some depth or training-set size past which no further progress happened, because the gradient stopped flowing down reliably.
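For the curious, here's a minimal sketch of the scaled dot-product self-attention at the heart of the transformer, just to make "context-dependent" concrete; the shapes and names are illustrative, and real implementations add learned projections, multiple heads, and residual connections:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention: each output row is a
    context-dependent mixture of all the token representations."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # token-to-token relevance
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ x                              # blend tokens by relevance

# toy input: 4 tokens with 8-dimensional embeddings
x = np.random.default_rng(0).normal(size=(4, 8))
print(self_attention(x).shape)  # (4, 8)
```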
Is it impacting hiring decisions? Changes in government policy? etc. If not already, how about in the next 5 years? A majority of the use-cases are not going to be publicly attributed to AI, that however does not mean it isn't having an effect.
> Is it impacting hiring decisions? [...] If not already, how about in the next 5 years?
At least in the field of activity that I work in: No. I honestly don't have the slightest idea how AI could be really helpful for hiring decisions. The only "useful" usage of AI in hiring is to serve as a scapegoat for covering your a... if the hiring decision turned out to be a bad idea ("but the AI said ...").
Most of the current effects are due to AI hype, not due to AI itself. Regulation is trying to get ahead of AI, and the reason it's trying is the AI hype. Same with unions fighting against AI, etc.
Maybe that hype turns out to be true, but if all AI development plateaued today and no new improvements were made, then AI's impact on the world would be very minimal. A useful tool for professionals, but not much more than that.
I mean, probably, sure, for some (generally poorly run) companies, but, when it comes to it, all 15 of the last two recessions have impacted hiring decisions. Companies, and particularly poorly-run companies, jump at shadows all the time.
My point is that a surprising number of hiring/layoff decisions are essentially made based on recessions which _never actually happen_. Possibly the most spectacular in recent years was in early 2020, when a bunch of tech companies laid off their recruitment staff in prep for the recession which was surely coming... That didn't generally work out well for them.
Don't forget that many of us normalized scraping others' work, stripping the license, generating stuff based on those works, and labeling the results original or "new", thinking that generative models are pulling these things out of thin air.
Also, the same people have learned to get angry at anyone who points out that a license violation most probably happened somewhere in the process, and these angry people believe that everything is fair use (if you're powerful enough).
> Eliezer himself didn’t believe that staggering advances in AI were going to happen the way they did, by pure scaling of neural networks. He seems to have thought someone was going to discover a revolutionary “key” to AI. That didn’t happen; you might say I was right to be skeptical of it.
It’s interesting that so many people in the field thought neural networks would not lead to AI. Back when I started working with them around 2013, I thought we’d have AGI by 2030 (which was an especially quacky view back then that has now become only a slightly quacky view), but I also believed there was absolutely no way neural networks would be the approach that got us there. They seemed like fancy regression or curve interpolators: good perhaps for approximating molecular energies efficiently, but not capable of having a conversation with me.
The “magic” that I thought neural networks were missing was algorithmic capability. Sure, they are universal function approximators, but that’s a bit of a hack theorem, and I didn’t think there was a realistic way to make use of that mathematical oddity; how could one practically train a neural network to compute a SHA-256 hash? In fact, I still don’t think NNs can do that. What I failed to realize, however, is that perhaps they could write the code to generate the hash. In retrospect, it seems kind of obvious, because of course the human brain can’t compute a hash function either; we just write code to do it.
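To illustrate the distinction (a minimal sketch; the point is only that the hash comes from exact code, not from a learned approximation), this is the kind of short program a model can write instead of computing the hash in its weights:

```python
import hashlib

def sha256_hex(message: str) -> str:
    """Exact, algorithmic computation: every input bit matters, no approximation."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

print(sha256_hex("hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```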
Computer programs today seem to fall into one of two categories:
- “soft”: statistical learning or iterative linear algebra (NNs, SVMs, spectral methods, Monte Carlo, embeddings)
- “hard”: exact, discrete, algorithmic computation (e.g., computing a hash function)
While most processes that occur in the human brain are probably characterizable as “soft”, I thought the uniqueness of human intelligence was due to a small but powerful amount of “hard” processing, and that we required a breakthrough in this area to achieve human-level AI. The release of GPT-3 immediately killed this viewpoint for me.
That said, while I now believe we may be able to achieve AGI using “soft” computation alone, I still think that achieving optimal AGI will require extremely “hard”, algorithmic computation. Optimal AGI would be that which performs better than any other computable algorithm on a very general problem space given some reasonable objective function. It’s quite possible there are many different starting points (non-optimal AIs) for getting there—neural networks being one of them—but as these systems recursively improve themselves, my guess is that they all end up converging on one universal, algorithmic, optimal AGI.
The unintuitive thing about self-driving cars is that when we drive, it's not just about the source, the destination, road lights, and signs. That may be the 99%, but the last 1% is all sorts of crap on the road that one has to triage in milliseconds as safe or not to run over.
Pedestrians' slight gestures, animals on the road, construction, police, wet cement, potholes, fallen trees, etc.
One needs something like human experience and a knowledge of the physics and behavior of almost all objects, including inferring and learning unknown objects and environments on the fly.
Now if you build a system that does that, not only have you solved self driving cars, but robotics itself.
And if you solved robotics, that is inches away from AGI. It's only a matter of time before it has learned all physical trades, and only a matter of time before a few humans use an army of robots to take over, or the robots do it themselves in the name of self-preservation.
So inventing AGI is more or less a precondition for L5 self-driving, and once you solve AGI, it's a very different, unpredictable world.
I fundamentally believe alignment is impossible. Sure you can align AGI to a few powerful humans but something aligned to all humans is very unlikely going to happen.
Scott shouldn’t be hard on himself at all for opinions stated 15 years ago. Even over short time spans of a few years, I change my perspective quite a bit.
In the 1980s I was lucky enough to stumble into the opportunity of serving on a DARPA neural networks tools panel for a year and getting lucky applying a simple backprop model to a show-stopper problem for a bomb detector my company designed and built for the FAA. Bonus time!
Decades went by, and then deep learning really changed my view of AI and technology (I managed a deep learning team at Capital One, and it was a thrill to see DL applications there).
Now I am witnessing the same sort of transformations driven by attention-based LLMs. It is interesting how differently people now view the pros/cons/dangers/this-shit-will-save-the-world possibilities of AI. Short anecdote: last week my wife and I had friends over for dinner, a husband-and-wife team with decades of success writing and producing content for movies and now streaming. I gave them a fairly deep demo of what ChatGPT Pro could do for fleshing out story ideas, generating images for movie posters and other content, etc. The husband was thrilled at the possibility of now being able to produce some of his previously tabled projects inexpensively. His wife had the opposite view, that this would cause many Hollywood jobs to be lost, and other valid worries. Normally our friends seem aligned in their views, but not in this instance.
> Scott shouldn’t be hard on himself at all for opinions stated 15 years ago. Even over short time spans of a few years, I change my perspective quite a bit.
I came here to write a similar thing. Fifteen years is a long time, and just as you wrote, I also tend to change my perspective and opinions quite a bit (and sometimes quite a lot) over much shorter time spans.
And I'm not even talking about tech, but about life in general. In fact, I remember some of my opinions from 5, 10, 15 years ago, and some of them are cringeworthy (to put it rather mildly) to my present self...
Your take is that you're wrong a lot, so it's OK for others to be wrong? If the stakes are high, we should be learning how not to make mistakes rather than how to make excuses after the fact.
I think you misunderstood what I was saying, so I will try to be more clear:
I was describing myself as an observer and participant in different waves of AI tech. I was not admitting to being wrong, rather, just someone who was lucky enough to be a small part of AI.
What I don’t know right now: if AI will help solve difficult societal/energy/medical/scientific problems and propel humanity into a better future (I bet on this outcome) or that the AI-doomers are correct (not my view).
I try to absorb the AI dangers arguments and keep an open mind. Good resources: https://podcasts.apple.com/us/podcast/your-undivided-attenti... and I also think that privacy advocates (see books Surveillance Capitalism, and Privacy is Power) are useful in deciding what we should and shouldn’t do with AI.
Mine is the purely layman's view of someone on the outskirts of AI development: I knew the techniques at a basic conceptual level, watched ML and NNs rise into mainstream software development over the past 15-20 years (and not from an academic background), and am now seeing the latest generative AIs and LLMs.
My biggest anxiety, with AI development rapidly increasing in pace and capability, is that we don't work in a system that will distribute the efficiency gains brought by those developments to people; they will be captured as profit, and we have nothing in place for absorbing the large swaths of workers left without a job when AI can help automate 90% of most office jobs.
I really try not to be a Mennonite about it. Technology invariably causes splash damage to jobs, from looms to robotic arms in factories, and we keep increasing our efficiency of production, but for some reason it feels like an AI boom will bring such a shift much more quickly and broadly than many past inventions did. And I don't think we are prepared for the aftermath of that.
Let's say that in 10-20 years some 50% of white-collar jobs can be made redundant by AI; there would be large swaths of workers with non-marketable skills. How do you retrain so many people so quickly? We already struggle to retrain small pools of the labour force from jobs that are disappearing, like coal mining or factory work. Progress always comes at a cost, so what's the cost going to be when that many people are out of a job, out of any prospect of a job with their skills, and the value added by AI is captured by the few owners of capital?
That's my main anxiety. I have so far embraced AI as a way to make my job less tedious but I started to have this tingling feeling that I will see myself outskilled at my job by AI during my lifetime... And I have no idea what will be left to do. Given that predicting what will come next is nigh impossible, I don't think we will be prepared for the aftermath when it comes, just like we were not prepared for the aftermath of social media and just live with the malaise nowadays.
I worry about this also: how does society function when only half the workers are required? We will need some sort of minimal social safety net that still allows people to do some work and get paid.
I view future AIs as being partners, working with people. That said, much less human labor will be required.
Not really addressing your good comments, but as an aside, I really hope that open AI models “win” rather than opaque AI systems owned and run by 3 or 4 giant tech companies.
I think a lot of his defence here is "No-one could have known exactly what was going to happen and no-one knows what will happen next."
This is obviously true - predicting the future is hard. It doesn't however explain giving a timeline of thousands of years, which is a ridiculously long time for anything you believe to be technically feasible.
Today's AI is simply a great way of indexing already-known information and making it readily accessible via a prompt or an API. That's extremely useful and will yield new levels of productivity, but it's not what we generally think of as being "AI."
The major mistake - some people have been getting this right - is looking at algorithms rather than processing power. The people who were just following the exponential curves [0] pointed out that we'd be hitting human-brain levels of processing power some time around now.
Turns out intelligence isn't complicated; it just needs a lot of brute force matrix math. This blog post can be interpreted as "I didn't realise what it was like living through an exponential process". Things happen very quickly.
"""thereby ensuring that both camps would sneer at me equally."""
In the modern polarized environment you have to be hot or cold. In this case, you either have to be an AI doomer or not. Nuance is not allowed; you will be voted down by both sides of the argument.
People flock together in groups and sneer at one another with superiority.
It has become clear that the defining route to riches this century is to persuade the already powerful of a huge threat to the status quo, then claim that you offer the way to control it, and ideally turn it into a mechanism for further entrenching established interests.
It may have always been this way and I am just getting old.
Making any group of people richer is good business sense as long as those riches don't come from you, since then you can take a share of them. That includes unions taking a share of workers' salaries in order to get those workers bigger salaries.
Just a small question about that: if we train a generative AI on AI-generated text, does it get better, worse, or stay the same? Because if GenAI2, trained on GenAI1's output, is less useful than or only as useful as GenAI1, we can be pretty sure it isn't creating anything.