AI Looks Like a Bubble (every.to/napkin-math)
55 points by allenleein on Feb 19, 2023 | hide | past | favorite | 59 comments



This is quite a subtle and powerful analysis of the economic role of AI that seems to have been misunderstood in this thread.

The claim is: AI is not a profitable space to compete in.

Why? 1) Most of the gains are distributed at zero-cost: academic innovation in a model is available to everyone. 2) Zero-cost gains are extremely hard to extract: Business-useful data requires large corps to do massive enterprise change projects to provide it. 3) Profit will lie in between these and incumbents are already the best placed: adding AI to business data systems and apps is best done by the providers which already own/operate those systems.

The intuition here seems sound: AI is not the kind of technology that creates competitive advantages. It appears to do so very early in a business's use of it, but it actually lowers the barrier to competition dramatically. Whichever AI company seems "ahead" now, for the above reasons it is quite trivial to create a competitor.


I think one could further generalize your argument: the very economic role of all algorithms is misunderstood in this thread.

Maybe considering a less loaded class of algorithms, e.g. PageRank, would help the discussion along. The adtech business model had little to do with the details of the underlying algos. Technical aspects of search algorithms may have been a catalyst that selected one of the many providers, but it took completely unrelated circumstances (market structures, political and regulatory attitudes) to enable and sustain the new economic reality that ensued.

There are many older precedents that point to the very complex relationship of "algorithms" with economic value. Think, e.g., of trading algorithms or credit scoring or insurance models. The idea that some supergenius geek will develop an approach and "beat the market" is a recurring theme, but the reality of applying algorithms to human affairs shows none of that comes to pass.


Interesting point. I wonder if the internet would be an interesting analogy. E.g. during the dot-com bubble, Cisco's stock price rose massively, then crashed -- as I understand, because it wasn't actually well-positioned to profit from the rise of the internet. It's only 20 years later that Cisco stock even approaches its bubble price.


>quite trivial to create a competitor.

I don't get it though: if it's that easy, why is there no meaningful alternative to OpenAI now? Or are you arguing that that will happen later on, once it's (even more) established?


OpenAI haven't invented anything with a "competitive moat" right?

They've created a product which uses ideas and data that are largely available to everyone. It's not a pharmaceutical-drug type invention.

If they hadn't been bought by MS, their revenue was going to come from $20/mo subscriptions. And you'd quickly find a lot of people offering $5/mo for 80% of the quality, which would be "just fine" for many cases. (E.g., consider Copilot.)

The valuation of OpenAI is because it was clear a player like MS would buy it, since the marginal economic value of that technology is to a large incumbent who can quickly deploy it "in the places it's needed". It's that ability to deliver "when needed" which enables MS to draw value from it: i.e., MS is just capitalising on its current position with customers.

The AI bit is actually providing a trivial amount of profit over-and-above MS's existing position with customers.


Isn't there a pretty big cost in training these LLMs?


A cost that prevents you from competing, but not most larger tech companies.

If you think of it like any other product, the picture is clearer.

A startup invents a widget whose production costs are somewhat high, but there are no patents/etc. So all major players could invent their own. Where's the value? Just in the sale to a major player.


I hear you. What do you think of the time/expertise/tooling it takes to train these models (and that it's an iterative process, and OpenAI has a significant head start)? This strikes me as more than a 'first to the gate' advantage (but perhaps less than a traditional moat).


> why is there no meaningful alternative to openai now

Alternative to do what, exactly?

Were people asking each other "why is there no meaningful alternative to tulip bulbs?" early in the year 1637...??

Can't help thinking that the people getting rich off gold rushes are often the ones selling shovels...


That isn’t quite fair. Tulip mania was a bubble because it was a greater fool scheme. The only purpose of the commodity was to sell it to a “greater fool” at some time in the future. This is analogous to the crypto bubbles that happen every few years.

OpenAI has created actual products which have some value. Dall-E is useful to people for what it is. Codex has limitations but some people find it useful. GPT3 and ChatGPT are definitely not general purpose problem solvers but for the people who use them they have value.


> OpenAI has created actual products which have some value

Value? One headline from last summer was "OpenAI announces pricing for DALL-E 2: AI images are almost free"[0].

[0] https://the-decoder.com/openai-announces-pricing-for-dall-e-...


Seems to be on its way: https://laion.ai/


> why is there no meaningful alternative to openai now

Is GPT that far ahead of other LLMs?


AI has been a huge bubble ~5 times now at least. Oh, and we prefer the term "AI winter", thank you.

https://en.wikipedia.org/wiki/AI_winter

The failure of AI translation in the '60s, the abandonment of connectionism (~95% of deep learning) in the early '70s, governments deciding AI is useless (mid-'70s), the Lisp collapse (mid-'80s), the abandonment of expert systems, and AGAIN the failure of deep learning (after seeing CNNs massively improve recognition) in the mid-2000s.

And let's just not talk about how often specific subfields of AI have been in winter. One subfield that is particularly famous (and that I did my thesis on, because ... I'm an idiot) is biologically plausible machine learning, i.e. attempting to get closer to biological or "liquid" machine learning. People have been restarting this field since before my father was born ... and it keeps dying, again and again and again. And hey, I gave it a shot too in my thesis, and I failed just like everyone else (technically I succeeded, in that I managed to get the network working ~3 times, but it certainly did not "wake up" the field).

I guess it's just that people look at biology, and look at combinatorics/statistics ... and feel biology is the easier path forward, both because you don't have to find your own ideas, and because it's generally easier (e.g. transformers are 2 big ideas. One, as it turns out, is more or less equivalent to "brainwaves", i.e., positional encoding, something nature has used for at least 500,000 years. That's not how it was found, though. Transformers refuse to train without some serious expected-value shenanigans).
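To make the positional-encoding aside concrete: here's a minimal sketch (plain Python, my own illustration, nothing from any real model's code) of the sinusoidal positional encoding from the original transformer paper -- a fixed bank of sines and cosines at geometrically spaced frequencies, which is what the "brainwaves" comparison is loosely pointing at:

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding ("Attention Is All You Need").

    Each position gets a vector of sines/cosines at geometrically
    spaced frequencies -- a fixed set of "waves" the model can use
    to tell token positions apart.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            freq = 1.0 / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(pos * freq)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(pos * freq)
    return pe

pe = positional_encoding(seq_len=4, d_model=8)
# position 0 is sin(0)/cos(0) pairs:
print(pe[0])  # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```

In the real thing these vectors get added to the token embeddings; the point here is only that the "positional" idea is literally a stack of waves.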

Meanwhile I'm convinced most of the progress in AI actually happens during the winters. At that point a lot of researchers are let go, and they make a lot of progress in the private sector that then spreads across everything. AI, and specifically the more-or-less abandoned CNNs, is the algorithm behind most traffic-control systems.

Right now particular kinds of generative AI are definitely making their way into the economy, as is a whole bunch of autonomous robots. That is actually quite incredible. That will expand, by a lot, winter or not.


Here's what I believe the author has missed: much like Amazon in its first (20?) years, OpenAI is not interested in making a profit - it would actually hurt them, and the priority is to reinvest in their tech. This seems to have paid off.

OpenAI is run by Sam Altman, former YC pres and official smart dude. OpenAI's actual business plan has nothing to do with charging monthly subscription fees. It's more like Sam decided YC wasn't designed to be profitable enough, and wanted to make sure he got the whole thing right this time around.

OpenAI will invest $1M and offer early access to next-gen ML models in exchange for 10% equity. If you buy into the premise that a solution built on a highly customized current-gen model will be clobbered by a barely customized next-gen model, then it would seem as though being chosen by OpenAI to build a business with a headstart that nobody else will have access to is literally a license to print money.

In other words, OpenAI gets to select founders for and then automatically own 10% of the next generation of successful startups. If this plays out, I predict that OpenAI will be a FAANG-scale company this decade.

Ignore OpenAI at your peril.


> OpenAI is run by Sam Altman, former YC pres and official smart dude

Sam’s chops as an investor are unassailable. As an operator, less so.

> OpenAI will invest $1M and offer early access to next-gen ML models in exchange for 10% equity

These aren’t super-competitive terms.


I fear that I might have buried my lede.

The "early access to the next versions of the OpenAI models" is the only important detail in this arrangement.

The most obvious signal so far is Bing/Sydney chat freaking out Google so much that they rushed a Bard demo that shit the bed.

$1M may be enough that a quality founding team can get a first iteration out the door without having to distract themselves with fundraising.

Essentially, this makes OpenAI a kingmaker. The $1M is just icing. The cake is being able to launch something that nobody else can.


If your premise is OpenAI has "launch(ed) something that nobody else can" then the whole argument falls apart.

No one else wanted to, because of the PR storms associated with this tech. (Tay for MS, and consider "Google fires software engineer who claims AI chatbot is sentient").

All OpenAI did was a relatively trivial (in competitive terms) software-engineering pass over the system to provide the tool in user-friendly and PR-friendly ways.

The idea that Google, Apple, etc. couldn't create anything at OpenAI's level is clearly absurd. The relevant "AI" here is just Rube Goldberg variations on a dot product computed over the entire internet; anyone can do it.
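For anyone who wants the "dot product" quip unpacked: the attention mechanism at the heart of these models really is, at its core, dot products plus a softmax. A toy sketch (my own illustration in plain Python, no real weights or data):

```python
import math

def softmax(xs):
    """Numerically stable softmax: turns scores into weights summing to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: score each key against the query
    with a dot product, softmax the scores, and return the weighted
    average of the values."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# A query that matches the first key far more strongly than the second,
# so the output lands very close to the first value vector:
q = [[10.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(attention(q, k, v))
```

The "computed over the entire internet" part is the training data and scale, not any exotic math.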


As The Dude once said... that's just, like, your opinion, man.

I'm confident that you have reasons for holding the position that you hold, but you're not actually offering anything beyond "if something different is possible in the future, one of the incumbents of today would have done it".

The simple fact is that OpenAI beat today's current players to market in every meaningful dimension, and we will have to agree to disagree that Google/Apple held back for fear of short-term PR blowback.

What is Apple but a company that prides themselves on doing things that everyone freaks out about, like ditching the floppy, CD-ROM, headphone jack?


All the companies in question have developed this technology internally but found no way of making it PR-releasable. There's plenty of evidence to this effect: MS/FB/Google each either have news stories about their internal products or have released one to massive negative PR.

They found no way of reliably making it non-racist, of reliably making its "advice" "safe to trust", and so on.

What OpenAI "innovated" was an interface over an ensemble of techniques that managed to (1) provide an interface where "made-up" answers seem acceptable; (2) severely censor and limit what users could do; and (3) deploy massive teams of people modifying the system post-release to ensure it was robust to those uses.

A start-up has an asymmetric risk of bad PR compared to a tech giant; hence why they were first to market. It doesn't matter if people exploit the system (etc.), since there's no brand perception wherein they're expected to somehow have prevented people from doing so.

Compare a small social network app with Facebook. "Somehow" Facebook is expected to have control over human nature to supernatural degrees, or else "they're evil". Few other such companies are treated this way.

Likewise, consider exactly the headlines involving ChatGPT at Bing: https://www.theguardian.com/technology/2023/feb/17/i-want-to...

^ this is what stops tech giants releasing this stuff.

No one wants to be the "Facebook of chatbots": somehow responsible for human nature, and somehow expected to create a perfect product which can somehow know everything except racism, sexism, etc.


> idea that google, apple, etc. etc. couldnt create anything at OpenAI is clearly absurd

They could have. But they didn’t.



> this makes OpenAI a kingmaker

Among those dependent on its models, yes. But this is just deeper monetisation. What I’m looking for is the moat.


Could the moat be speed? First mover advantage has proved to be a huge factor in the past. Share of mind, messaging, audience loyalty, eyeballs, etc. ?


First mover is an advantage when there are network effects (eBay, Facebook) less so for other companies like Blackberry.


Blackberry created an industry and made hundreds of billions of dollars before resting on its laurels and getting derailed by internal battles over direction. There are still huge numbers of people that would buy a modern BB device today if they could.

The notion that BB didn't have massive levels of success due to their first-mover advantage is simply wrong. Of course, I would also suggest that BB was a network effects driven company; I wasn't a customer, but they offered the original (mobile) walled-garden group chat system.

Bigger picture, while lots of companies succeed entering crowded markets with new innovations, I also know that if you could somehow offer founders a choice between being first mover and, I dunno, getting 10% of their first round equity back, probably 100% of the founders would opt to capture the market first.


Too bad they want to stifle its potential. It’s already much less useful than it was a couple months ago.


I suppose it's more of a hype cycle than a bubble. I don't think we have reached peak AI yet -- although LinkedIn is currently packed with posts about ChatGPT and SD.


It's... hype cycle with bubble characteristics.

The obvious comparison is crypto. Unlike crypto, I'm more willing to view this in a positive light because there's something real behind all the marketing. However, we're starting to get sucked into the bubble-like loop of big promises -> practical applications not living up to those promises -> even bigger promises.

If it's a tech demo, then writing sonnets is impressive and cool. But when you're promising to disrupt entire industries, "hey, I can write sonnets" is not going to cut it.


Well, I jumped headfirst into crypto in 2019. Looking back, I haven't found a technology there with any real-world applications. I have also been working on AI products since 2020, and with the newest breakthroughs we have seen really useful applications. GPT-3 as an equity analyst: https://app.finclout.io/tp/MSFT

Matt explaining the current ratio, using a combination of ChatGPT, Stable Diffusion, Eleven Labs, and D-ID: https://www.tiktok.com/@materialimpacts/video/72016630168390...

We can already see that this significantly improves turnaround times in content creation, with direct implications for digital marketing.


Sonnets by themselves, absolutely agree.

Sonnets as an unprogrammed free extra? Interesting, at least.

I’m not sure how many stops there are between this and the end of intellectual labour. For example, Go AI went from mediocre to superhuman much faster than many expected, while self driving cars have been much slower than (I at least) expected.


Language models have been around for many years now; if they were going to become superhuman very quickly, they would already be there. We are already on the right side of the S-curve: many big companies have made similar language models, and they all end up at similar performance levels with respect to coding, language, answering questions, etc. We will see improvements, but they will be very incremental; GPT-3 to ChatGPT was barely noticeable and took years. Extrapolating that, we will get language models that can be moderately helpful to a human professional, but they won't be good enough to automate much at all.

For the AI hype to pay off, we would need all this buzz and activity to result in another new revolution similar to transformers. Without it, it will go the same way as self-driving cars: barely good enough to do stuff, but with tons of work and checks it might lead to some automation at scale that could threaten jobs in a decade.


> We will see improvements but they will be very incremental, like GPT-3 to ChatGPT was barely noticeable and took years.

This was not my experience having tried GPT-3 before ChatGPT came out; and likewise people who have tried Bing Chat report it seems to be significantly more capable than ChatGPT.

Also, while it is clearly limited in competence, for breadth of information, speed, and price it is clearly already superhuman. Whether that's useful remains to be seen; the self-driving-car analogy is that effectively instant reaction times don't help when perception says "what stationary truck?"

I kinda hope this won't be taking over for a decade, and you're not the only person suggesting we need multiple new breakthroughs on par with transformer models, but as the saying goes, prediction is hard, especially about the future.


Either way, GPT-2 to GPT-3 was a much bigger step and happened much quicker. What we see now already comes after much fine-tuning, testing with humans, data filtering, etc.; more of that will result in smaller and smaller improvements. There is so much money spent on these models that they have already tried so many things, so getting to the edge of the S-curve happens much quicker.


It's getting there though. One VC that I'm intimately familiar with made the point, during their last LP meeting, of 'only looking at companies that do AI'.


Could you make an intro ;-)


Lol, line up ;)


People really have short memories, and that's not a good position from which to understand anything that unfolds over more than a day's worth of attention span.

Remember just a few years ago Jack Ma urging everybody to become artists and philosophers because "AI will eat the world"? AI was pushed relentlessly; we got "ethical AI" and "explainable AI" and androids doing the rounds impressing the masses. Needless to say, things didn't work out as planned. AI did not bring the next industrial revolution.

Fast forward to today, and every little technical breakthrough is milked to blow some new life into that dead horse. It's most likely a desperate attempt by early investors to cash out before the next AI winter.

Sure, the tech is cool and useful. But it is also limited and risky and fragile. And very fundamentally: an algorithm does not make a business model.


>next AI winter

Oh? And why would that happen?


If you read my comment rather than arguing from an already entrenched position, the message is that it has already happened.


As always, in all areas, there is noise, but there is also incredible music being played.

I don't think there's a bubble, but rather the beginning of a change. Although I agree with the author: potential products with market fit could take a few years to permeate the traditional consumer base.

In addition, I disagree with the author's approach of watching how the market behaves to judge whether there is a potential "bubble" (in this particular case).

I think the technology we are talking about right here, like in any other revolution, might shake a few trees a bit but in the end, we will see a brave new world.


I’ve followed AI for 25 years, but it’s only in the last few months that the potential has really come alive. ChatGPT and Midjourney are jaw-dropping.

Every CEO and CIO worth their salary must be looking at how to incorporate this stuff into their business. And we can say that with a straight face unlike some of the other technologies which the tech industry has pushed.

So C3 stock price mooning I can understand as they are first to a super hot market (AI in large enterprise) which has grown in size overnight.


> Every CEO and CIO worth their salary must be looking at how to incorporate this stuff into their business

Those of us old enough to remember recall exactly that phrase said about other stuff, too.

> And we can say that with a straight face unlike some of the other technologies which the tech industry has pushed.

Those of us old enough to remember think this might be exactly like other technologies which boomed ... and then busted.

As John Naughton wrote in The Guardian a month ago[0], "if we know anything from history, it is that we generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications. So it was with print, movies, broadcast radio and television and the internet."

[0] https://www.theguardian.com/commentisfree/2023/jan/07/chatgp...


This is how it was with blockchain: a solution in search of a problem.

What the advances in AI give us are new capabilities. Ways to solve existing problems much better than before. Smart companies will reassess their old ways to see if they can’t be made better.

As an example, Whisper can and should be used in automated phone systems. Understanding what customers are saying will absolutely make the product better.


> As an example, Whisper can and should be used in automated phone systems. Understanding what customers are saying will absolutely make the product better

Ever since early voice prompt automated phone systems, we've had data that customers hate automated phone systems. Customers like talking to humans who can actually listen and are empowered to provide assistance and resolve problems.

This is why when you earn "elite" status with airlines or hotels one of the most valued perks is typically some kind of special phone number to call which (who'd have thought it?) gets you through rapidly to an actual human being who is (who'd have thought it?) empowered to assist you, without having to go through all the usual voice prompts and endless waiting.

I'd bet a beer that a majority of customers will turn out to hate "AI automation" just as much as they already hate other automation.


I'm with you on this one. The people who figure out how to leverage it as a force/productivity multiplier are going to make big gains in the short run.

In the long run... no way to know, but right now for CTOs it's trying to guess which positions are buggy-whip manufacturers and optimize them out before others do.

My gut is that AI tech will make marginal skilled workers rank up to skilled or very skilled.

The gap between the rockstar and the average worker will close slightly, bringing a more even distribution.


Well, it depends on the business model. I don't see ChatGPT making lending decisions at this point: firstly, because all of this information will be shared back into the model, and secondly, because from a compliance perspective credit decisions must be explainable. The summary functionality looks interesting though. How would one go about measuring its accuracy?
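On measuring summary accuracy: a common crude starting point is unigram overlap against a reference summary written by a human, in the spirit of ROUGE-1. A minimal sketch (my own toy version, not the official ROUGE implementation; it rewards shared words, not factual correctness):

```python
from collections import Counter

def rouge1_f(summary, reference):
    """ROUGE-1-style F1: unigram overlap between a generated summary
    and a human-written reference. A crude proxy for accuracy."""
    s = Counter(summary.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((s & r).values())  # shared words, counted with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(s.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("revenue grew 10 percent",
                 "revenue grew 10 percent this quarter")
print(score)  # high overlap with the reference
```

For anything compliance-adjacent you'd want human evaluation on top of this, since word overlap says nothing about whether a stated figure is actually in the source.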


I wonder if C3's contracts with large enterprise customers are the value? As in, it takes a heck of a lot of work to get an enterprise customer signed up for anything, and these guys have done it, so all you then need to do is buy them so that you can actually pour the tech into the contract? Point being they don't need to have anything special in themselves, in terms of AI product.

As for the rest of the AI investment idea, the question is where does the surplus accrue? Does it land at OpenAI? Does it make devs more productive and land in their pockets? Or does it land at their employers? This is the unobvious thing about every innovation that one needs to tackle.


The technologies behind stable diffusion and chatGPT have very high potential. I think this author's take is very wrong.


That's not the point the article is trying to make:

> AI will change the world, it will make us question what it means to be alive, and there is a chance it will make us a multi-planetary species. But I’m not convinced that just being a company that sells AI will deliver judicious year-over-year returns.


Having potential and having your stock rise just because you slapped AI in your product are two different things.


> Some snarky hedge-fund analyst probably thought that the sports-bra-ification of AI would happen

I'm sorry, what? I don't understand the reference to sports bras.


A bra company mentioned blockchain and its stock price rose. It's in the article.


There were people who said the Internet was a fad.


And there were people who thought 3D displays would be ubiquitous, or that self driving cars will have replaced at least professional drivers by 2025, or that Google Glass was the next iPhone.

It's easy to look back and laugh at those that mispredicted, but you'll find all possible predictions being made for every technology, so you'll always have winners and losers.


All of those "Uber but for X" type ventures coming out of it, maybe, but as a technology I doubt it is a bubble.


It seems to me that after web3 collapsed, VC is looking for a new balloon to pump and cash out on.


If there was no censorship it wouldn’t be a bubble.


Agree. It's becoming a bubble because they're strangling their own product while pretending that doing so saves humanity.

Which is why it won't pop. Because it's actually useful to many private orgs. It's just dangerous for the public to use if you're anything but pro-free-speech.



