There is no elaboration on why he said that, or on the context.
> Prince also took shots at the major cloud providers, especially Amazon, during the talk about the infrastructure side of AI. He pointed to the shortage of GPUs as one of his top concerns about both the current infrastructure and lack of competition in AI, but argued this problem is “somewhat artificial.”
I was expecting to see a point of view, reasons, or examples, but the article is just clickbait.
It would seem quite predictable that AI is going to follow the same trend as all disruptive technologies:
* First, hundreds of companies rush into the new space
* Then market leaders emerge
* Then the market consolidates
And in the end, only a few of those hundreds will remain. Most will go under, and in many cases the founders of the ones that failed will later admit, with the benefit of hindsight, that they never had a plausible shot at becoming a market leader in the first place.
Except it's not really a disruptive technology. It's (primarily) a sustaining technology. In Clayton Christensen's original work, disruptive innovations are worse than existing alternatives in the dimensions that existing markets care about, but they're either cheaper (think of microcomputers vs. minicomputers) or have some other attribute that makes them appeal to a new market segment (think of portability for laptops & cell phones, or ease of publishing for social media). This makes them create new market segments among new customers doing new things - think of how crypto's use cases have been DeFi protocols and NFTs, while it's utterly failed as a currency. Sustaining innovations, by contrast, are ones that deliver better performance for an existing customer's existing use cases. Think of faster CPU speed, bigger hard drive capacity, more pixels on a camera. If a technology is better at a use case that people are already making money off of, it's not a disruptive technology, because by definition disruptive technologies are worse at existing use cases but let people do things they haven't been doing before.
There are probably some AI-related disruptive startups out there that will apply it to use cases that are not currently computerized. But by and large, AI targets existing use cases, for the simple reason that you need training data to build the AI in the first place. That gives an advantage to incumbents, who have the training data and the massive amounts of capital to train the AIs.
> But by and large, AI targets existing use cases, for the simple reason that you need training data to build the AI in the first place.
This doesn't strike me as a true statement. People are embracing the term "AI" to represent recent ML advancements because modern Large Models often generalize to unseen tasks without retraining. The main gold rush so far is in consumer and productivity applications for high-cost knowledge work. However, for a tech that's only been around for a year... that's to be expected. There will be a next shoe to drop.
Consider the disruption that could emerge in the near future if things like the following occur.
- AI Agents which actually work and generalize across many domains.
- AI Agents which can communicate to solve complex challenges spanning multiple companies/problem domains.
- Large multi-modal models which can be used on stock consumer hardware thanks to hardware/software improvements
- Large multi-modal models which can effectively operate robotics to complete tasks specified via demonstration/natural language.
- Large Models/Agents which can complete extremely large design tasks in CAD/code spanning multiple days of inference with equivalent performance to an average expert in those fields e.g. 10-100s of millions of tokens in context.
I could go on... In my opinion, we are at best in the 1992 internet phase - or the 1960s of the computer revolution. Those markets both saw large early firms emerge to solve existing problems, then saw those firms displaced as market leaders by subsequent innovations.
The types of processes that AI is shaping up to disrupt are ones currently performed by people.
Take, for example, accounting. A $15,000/year subscription to an accounting AI might be slightly worse than a full-time $150,000/year human accountant. But as long as it's not $100,000 worse, or breaking laws, a small company is better off with the AI.
As another example, take plumbing. An experienced plumber could charge $200 for a job. An AI by itself can't replace that plumber, but an inexperienced 21-year-old kid assisted by an expert plumbing AI might be able to accomplish the same job at almost the same level of quality and charge only $50. As long as the pipes don't leak, you might be happy to save $150 and hire the kid instead of the guy with 20 years of experience and a family to support.
Neither of those scenarios are certain, but they're within the possibility of what we could see in the next decade. And you could extend these examples to so many other professions.
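To put rough numbers on the accounting example above (same made-up figures, purely illustrative):

```python
# Hypothetical framing of the accounting example: the AI wins whenever
# what you save on cost exceeds how much worse its output is.
human_cost = 150_000   # full-time accountant, per year
ai_cost = 15_000       # accounting-AI subscription, per year
quality_gap = 100_000  # dollar value of the AI being "slightly worse"

savings = human_cost - ai_cost  # 135,000 saved per year
print(savings > quality_gap)    # True -> the small company comes out ahead
```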
Both of those examples require licenses in many states. Not to mention an apprenticeship for plumbers, so you’d never get a dumbass kid working from his phone if you expect a permit lol
AI plumbing sounds like an idea that burns money and quickly runs out. Have you ever seen what plumbing looks like across different properties? Or ever talked to a plumber as to what their job entails? It's not nuclear physics, but it's also not legos or software.
The plumber reference is probably scoped to small home jobs such as snaking a clogged toilet, replacing a garbage disposal, replacing a toilet, installing a bidet, etc. Small plumbing jobs may look simple but require attention to detail, and I'm sure an AI tool can help with that, as opposed to reading the fine print in the toilet instructions.
While I understand what you're saying and agree that people processes are a place where the AI value will be captured, the particular cases you've cited are in fields where there are very real legal and/or liability issues related to "mistakes".
Yep, the main problem with "automated" bookkeeping or accounting software so far has been that it's so prone to putting things in the wrong category or miscalculating small taxes (because it can't read the document, etc.) that it requires a human to look over it. A small company then ends up paying twice for bookkeeping services, so it makes more sense to just do it in house. AI and LLMs will make this work so trivial that it will put a lot of people out of work.
> think of how crypto's use cases have been DeFi protocols and NFTs, while it's utterly failed as a currency
That's a first-world-centric view. Many, if not most, people in my social circle use crypto for transfers of significant sums and as a store of value pretty much all the time.
I'm in Argentina right now, and most of my social circle have emigrated from Russia to a variety of different countries across the world in the last couple of years.
Also a disruptive innovation. Those are new customers who turn to it because existing incumbents (whether governments or financial institutions) don't serve a specific population.
That's not what "disruptive technology" means though. There's a very specific book [1] that described a specific dynamic to new technology innovations, and you lose a lot of precision by generifying the term.
In the colloquial sense, most workers' jobs get disrupted by garden-variety financial innovation anyway. It doesn't really matter whether your job can be done by a machine; the PE firm or corporate finance types will just lay you off, leave your job undone, and boost margins so they can unload the stock before customers realize they're not getting anything for their money.
I don't think that we should dogmatically stick to one specific definition for such a generic term as "disruptive technology," especially considering that's not even the first usage of the term -- https://en.wikipedia.org/wiki/Disruptive_innovation
The original paper linked there describes the same dynamic - innovations originating outside the industry that incumbents can't compete with because they have a different set of customers.
It is, but this is part of the hype cycle: "leaders" in the space make proclamations, and if they turn out to be correct in five years, they're seen as visionaries.
The reasons and rationale are fairly obvious. He runs a competing business, and he's speaking to a specific customer segment, hoping to convince them that his business is efficient and benevolent compared to the megacorps he competes against.
Taking the word of a for-profit's CEO that their competitors are somehow worse than their own company is the height of credulity. It may very well be true, but the source is so biased as to be entirely unbelievable.
Yes, there's no substance for why he thinks that, or even exactly what he means by it. It feels more like a tweet than a real story: there's no structure, no direction, no conclusion. It cites two quotes and adds no analysis on top.
- The CEO of Cloudflare said companies are spending a lot of money, that it's probably a worthless investment, and that cloud providers are overcharging customers.
- The CEO of PagerDuty answered that employees are finding use cases they want automated, and usually that's stuff they don't want to do.
- End of story.
It depends. I worked at a large bank on a reporting platform that made it easy to do SQL on data from all sorts of sources (CSV, Excel, JSON, calculations, etc) and use that data as the basis for another report.
We were always fighting for funding. Our director found a team that (somehow!?) was responsible for a million reports and was convinced that this was our golden ticket. When he took it up with his management, her preferred option was to switch the million reports off and see if anyone noticed.
If no one is consuming the business reporting it's probably cheaper to have it produced by AI and of the same value.
It's the same short-sighted logic that leads to job seekers using LLMs to convert their experience bullet points into resume prose and hiring managers to use LLMs to summarize resume prose into bullet points.
Or marketers using LLMs to write spam that no one wants to read anyway.
There's a lot of "look at how much more efficiently this garbage can be created!" going around.
It's a pretty normal adoption curve: for every couple hundred Pets.coms, you eventually get an Amazon. But you get a lot of losers along the way that get mostly forgotten or consolidated into the final product.
> For a straightforward approach, she suggested starting from the ground up with employees and having them create what she calls a “toil bucket,” essentially a list of everything they hate doing that they hope AI could take off their plate.
Talking with customers about their issues appears to be number one everywhere? :)
Most companies taking VC money are doing that, not just AI companies. A startup a friend works at had a company offsite earlier this year, like 50 people, that cost almost $300k for a few days. They rented fancy Airbnbs, went on booze cruises, rented out an axe-throwing place with a full bar, and catered multi-thousand-dollar breakfasts/lunches every day at their offsite venue. They spent one day doing Myers-Briggs tests (LOL). They have lavish perks and benefits for a company with debt, making no money, and "leadership" that can't even manage a paper bag, let alone a company. When interviewing SWEs, they spend more time on emotional intelligence and culture fit than on technical interviews. The founders are being lectured by a 'head of people' who just runs this playbook at every company they work at, quickly bowing out after 2 years and moving on to destroy the next VC-backed startup.
This was most startups back in 2000. Having witnessed it first hand, I can tell it’s an extremely accurate description. So sad it’s still going on. In the 2000s we had “B2B” that VCs threw money at. AI is just the latest acronym.
Ah, the contrast between the wining and dining that one received around 2000 from startups as a computer science undergraduate, and the chase for any boring government contracting job to make loan payments in 2002-2003...
Even if Matt is correct in his statement, that could still be entirely logical and +EV for them.
If you tell me there's an 89% chance that my experiment with gen-AI will yield nothing, all I need is for the payoff of that experiment to have a net present value of 10x my investment for it to be worth considering. (Even less if some of that 89% will inform future experiments, even if there's no immediate monetary payback.)
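That arithmetic checks out. A quick back-of-the-envelope sketch, normalizing the experiment's cost to 1 (illustrative numbers only):

```python
# Expected value of the experiment described above:
# 89% chance of nothing, 11% chance of a payoff worth 10x the investment.
investment = 1.0           # normalize the experiment's cost to 1
p_success = 1 - 0.89       # 11% chance the experiment pays off
payoff = 10 * investment   # net present value if it works

expected_value = p_success * payoff - investment
print(expected_value)      # ~0.1 -> positive EV, so the bet clears the bar

# Break-even: the experiment is worth running whenever the payoff
# exceeds investment / p_success, i.e. roughly 9.09x here.
print(investment / p_success)
```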
> This only makes sense in a repeatable game or across the industry, not for any individual experiment.
No, because in a large corporation there are a large number of investment projects going on, so the corporation can still treat these as "bets" and invest in the ones with the maximal expected IRR.
The worst case of maxing out a personal credit card is not generally comparable to individual death. Loads of people with way less earning power than SWEs have that amount of debt. It's a pain in the ass, but it's not terminal. (It's also dischargeable in bankruptcy in the worst 0.1% case.)
(I'm coming at this as someone who'd way rather have an 11% chance at a billion than a 50/50 shot at a million, which I'm sure colors my thought process.)
The amount of personal investment it takes to get to the point of pitching investors is pretty low (and well lower than “already succeeded at your business” IMO; raising money is not in itself business success).
It was ridiculously low a few years ago; it’s still low and achievable as an investment for most anyone earning a western SWE salary and not living extravagantly.
Isn't this essentially just R&D? The hope is that something meaningful will be produced, but it's not necessarily clear what the result will be or whether it will be viable.
This is only true if you start with "We have to use AI, let's find a problem". There are plenty of production-ready use cases such as retrieval augmented generation (RAG) and generative AI to assist with the creative writing or illustration process.
There are also plenty of ways to spend far more money than necessary; plenty of great open-source solutions have almost caught up as we end 2023.
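For what it's worth, the core of the RAG pattern mentioned above is simple enough to sketch in a few lines. This is a minimal, stdlib-only illustration: the word-overlap retrieval stands in for a real embedding search, and `call_llm` is a hypothetical placeholder, not a real API:

```python
# Minimal sketch of retrieval-augmented generation (RAG).

def score(query: str, doc: str) -> int:
    """Count how many words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call here.
    return f"[model answer grounded in: {prompt[:60]}...]"

def rag_answer(query: str, docs: list[str]) -> str:
    """Retrieve relevant context, then ask the model to answer from it."""
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]
print(rag_answer("When will I get my refund?", docs))
```

Production systems swap in vector embeddings and a real model endpoint, but the retrieve-then-prompt shape is the same.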
Even if that's true, the big money is being spent in the belief that some bigger more valuable thing will be unlocked by future advances, not on better engineering well understood uses. The technology has some value but the behavior around it is still driven by bubble "it will be valuable later" behavior rather than current uses.
I can only speak for the company I work for, but that's more or less exactly the mindset the C-suite has about AI. The CTO nerds out on it so wants to work on it for the sake of saying he works with AI (nothing inherently wrong with that mind you, I can respect a genuine nerd moment like that), and the CEO sees our competitors pushing out AI solutions (aka calling OpenAI's APIs) and demands we "do this AI thing everyone else is doing".
It's just a hype wave of people blindly following each other without stopping to consider why we need AI in the first place. We're not doing anything groundbreaking with it; in fact, half the features we're shitting out with our AI experiments are things that have existed for a decade at this point, except we can now label them as an AI thing.
Also software development. I would gladly keep on paying for Copilot if it just keeps doing what it does right now. It makes writing code (the part I find least interesting in software development) just a tad less boring.
The path of creative AI has to be the most ironic story of this whole technological tale.
The AI utopia that was imagined was a world of AI robots doing all our labor so everyone could sit around being "creative" artists; no one would have to clean the streets, pick up trash, work in the factory, etc. We would have a utopia where humans would just be free-loving creatives, all sitting around painting, writing their great novels, and so on.
The reality is, humans will be doing that manual labor for decades to come as AI replaces all the creative jobs in a much shorter time...
The idea is that higher intelligence has only been around for 200,000 or so years and is far less refined than motor intelligence, which has a good 500 million years of encoded information behind it.
That is indeed interesting. For years, there was talk of manufacturing, trade jobs, and "hands-on" work going away; four-year college degrees were the only answer. Interesting that it might be the opposite in the end.
A lot of the upcoming automation is going to be on the soft-skill side. Those with kids now need to reset how they think of trade jobs and hands-on work. It should be an option that is considered and certainly not frowned upon.
The problem is we have spent at least three generations looking down on physical labor; first-world societies are in for some bad times. For me, it looks like those bad times will get really bad in about 20 years, right when I should be on a beach enjoying retirement. Instead, who knows...
Utopia is not possible in the first place; that is what I find amusing in this whole story... the number of people who believe it is.
Self-interest is core to the human experience; people deny that at their own peril. It is why I believe in capitalism and reject socialism. I understand that all persons at all times act in their own self-interest, and anyone who claims they do not is lying to themselves.
> but at least AI has actual use cases and actually will solve problems.
Maybe some time in the future but the current generative "AI" can not (let me repeat: can not) solve anything. I am sitting here astonished watching the entire world lose their minds over a stochastic parrot.
Did y'all not learn anything from the crypto bubble? Mostly it's the same hustlers, ChatGPT got rushed to the market the moment the crypto craze sort of ended with the collapse of FTX. Someone is making a lot of money here... but it ain't you.
> Maybe some time in the future but the current generative "AI" can not (let me repeat: can not) solve anything.
I know for a fact that it is putting marketing copywriters out of jobs.
I used ChatGPT three times yesterday to generate code snippets that I could have figured out myself, but with ChatGPT it only took a couple of prompts.
I've used ChatGPT dozens of times in my D&D campaign for things like "generate a bunch of random dwarves with names and occupations, don't use Tolkien names"
People who say it "can not solve anything" are delusional.
What? So many people are using AI to solve real problems right now.
GitHub Copilot is a good example. It may just be a glorified autocomplete, but it's really good and sometimes predicts 20 line functions exactly as I'd have written them.
Every big company out there is burning most of its operating expenses searching for the next big thing. Even if throwing a little bit of it into "AI" doesn't get any immediate returns, it's almost irresponsible not to take that gamble considering how large the upside is.
Developing, training, retraining, and delivering a competitive AI service today requires an army of expensive people and a huge investment in infrastructure.
It's not cheap!
Moreover, any improvement, whether it's better generation, longer context, a new modality, cheaper prices, or something else, is quickly replicated by all major groups.
They're in a race to see who can spend the most the fastest while charging the least to prevent customers from switching.
Meanwhile, free open-source alternatives keep getting better and more efficient.
To be fair, Cloudflare has also been lighting money on fire for 14 years. Maybe not as extreme as some AI companies, but they have never once posted a profit and only just became cash-flow positive for the first time.
Has Cloudflare been "lighting money on fire"? That would imply that they're chasing some unattainable goal or using their funding unwisely, yet they currently provide an extremely valuable service to a significant portion of the internet, and they're rapidly expanding into a well-rounded cloud computing platform.
Most of these "AI companies" are a novelty that will never deliver anything useful, much less anything remarkable.
If profit were reinvested, it would be zero, not negative. There is no profit to reinvest; they need fresh investor blood from time to time ;-)
Until you get so big so fast that it's unsustainable and collapses, but not before the sharks who've been getting rich off your stock inflation dump their shares for a pretty penny.
> Most companies […] are lighting money on fire - is this not the VC model?
The VC model, like here on HN, is gambling on winners and losers in the startup space. The losers burn VC money, the winners become the unicorns - whatever, bleh.
That's so trivial to disprove I don't even know what to say. Replacing just one human with AI saves you hundreds of thousands of dollars per annum, even if they're a low-qualified, low-paid one. It's not just salary (with insurance, taxes, and health benefits) but also office space, amenities, and liabilities.
Humans have no way to compete with what's happening.
Except this is just like outsourcing, where it seems like pure cost savings until you need to hire more people to manage the low, inconsistent quality from those cheaper workers.
It seems your opinion of outsourcing is outdated by at least a decade or two. Reminds me of this joke in Back to the Future, where young Doc Brown thinks Japanese tech is cheap and shoddy and Marty has to explain how things have changed.
Because if it's like outsourcing, then just like Japan and South Korea, over time China has transformed from "the cheap labor with low inconsistent quality" into the world's engineering hub which surpasses everything domestic in price, scale and quality... all three at the same time. The only reason companies try to diversify right now is because China is becoming ambitious and too powerful, and in case of a US-China conflict, they don't want to be left cut off from all their factories.
So if outsourcing to AI is like outsourcing to China, then we know where things are headed as well. Except faster.
AI doesn't slack when you're not looking at it. While humans do. And teams need tightly knit communication so they can avoid misunderstandings and synchronize together.
So: shared working spaces optimize the performance of humans, while AI has no physical existence; its shared working space is feeding it a 50MB file of text holding your entire project history. That's the AI office. Much cheaper.
> AI doesn't slack when you're not looking at it. While humans do.
I have had a different experience.
Some humans slack while nobody is looking; some humans slack even if you look; some humans do not slack. Alternatively, as long as you can measure the output, does it matter if someone slacks?
But let's grant that humans may slack, and I can live with it, because this is not a thread about remote work.
The strangeness/inconsistency comes from (it would seem) not considering those factors in a business decision regarding humans, but considering them in a business decision regarding AI.
You mirror this inconsistency in your argument, mentioning the savings of amenities and office rental _only_ in the case of AI. This is not a criticism, but I find this interesting, because I always see this topic expressed in this exact same way.
Sure, but only for companies banking on grabbing users before ending free tiers or raising prices to match actual GPU/compute costs, like MS/OpenAI/GitHub Copilot, etc.
The rest are able to deliver great classification, summarisation, extraction, Q&A, automated customer care, and art(?).
Not surprised, but the risk and reward are there. You buy/rent A100s to train an open-sourced model to generate value, and it could all go to waste the minute a new model beats yours the next day.
I work in consulting, and my firm is frothing at the mouth over genAI and selling it hard. Companies everywhere have to be getting the full-court press to implement some form of genAI integration, no matter how absurd.
I’ve yet to completely buy in. I’m still learning and experimenting and see some applications where it really does shine but still want to see more before I put my reputation on the line with a client who trusts me.
/my firm was also all in with the metaverse so… color me skeptical on their predictions
OTOH - if enough idiots...er, I mean people with no clue about the actual technology...are getting drunk on the "AI!!!" Kool-Aid, then adding "AI" to your own products can be a least-bad business decision. Doesn't matter whether AI could possibly improve your product, or not. Mentally, you write it off as a trend-chasing marketing expense.
I despise this trend-chasing marketing, and mentally downgrade products or companies that do this. Unfortunately it seems that you are right and too many people buy into it or at least do not mind it.
But that's precisely the goal. The funding / revenue comes from the buzzword. "Doing AI" is peacock feathers.
Saw a news segment yesterday about some "AI augmented" "smart" garment ... some textile that can change colour on demand... and uses "AI" to decide that colour. What that has to do with "AI", I have no idea. But the AI piece (plus "sustainable") was the highlight of the piece.
Point being, it's the buzzword needed in this decade in order to raise attention and funding.
Doesn't mean there's no fire behind the smoke -- intelligent systems are how capitalism automates humans out of mechanized processes in order to lower labour costs. There's huge profit in that. At least in the short/medium term.