I feel like GenAI has become the next "blockchain". Wall Street (or someone else!) is setting some weird expectation for every company to somehow say "we're doing something with GenAI." Doesn't matter what it is they're doing with it, they just have to say the magic word in a press release or blog post.
Apparently, "we're using AI" is a better story than "we're making great products, regardless of the underlying tech."
In my opinion it's significantly more meaningful than blockchain hype. Blockchain doesn't add any value whatsoever for 99% of the use-cases that companies were advertising it for, but AI can immediately and cheaply create output that previously would have been expensive, time-consuming, or impossible.
To me it looks more like the dotcom era than blockchain. Obviously there's a lot of hype and much of it will be vaporware, but I think there's also some really good stuff that's here to stay. Might be hard to tell which is which if it turns into a bubble until it's over, though...
Yeah, I think the dotcom era is a much better comparison. There is a lot of actual innovation happening and real products being built, but there are also a lot of companies slapping .com onto their names and pretending they're part of the new wave, and a lot of exciting new companies which don't actually have a business model. I think a lot of the current AI hype is wildly overblown, but there's zero chance that this all goes nowhere and has no lasting effect.
Yeah that’s a good baseline. Though I think it’s pessimistic; quite frankly AI is going to be far more transformative than the internet.
Anyone who thinks this is like blockchain should spend 10 hours coding with GPT-4, and then extrapolate those capabilities out a few years. It’s already adding a huge amount of value and productivity gain for me, though you need to know what to use it for.
Someone commented here somewhere that ChatGPT closed the gap between native and non-native speakers.
As a good non-native German speaker, the ability to send a piece of text to a good LLM for review and correction is incredible. I do not need my wife to review the text I send to the tax authorities; the LLM is doing it for me (and I am good enough to critically assess the quality of the LLM's work).
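That review-and-correct workflow is easy to script. Below is a minimal, hypothetical Python sketch of sending a draft to an OpenAI-style chat-completions endpoint for proofreading; the endpoint, model name, and prompt wording are illustrative assumptions, not details from the comment above.

```python
# Hypothetical sketch of the workflow described above: send a draft text to
# an LLM API for review and correction. Endpoint, model, and prompt wording
# are assumptions for illustration.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

def build_messages(text, language="German"):
    """Wrap the draft in a proofreading instruction for the model."""
    return [
        {"role": "system",
         "content": (f"You are a careful {language} copy editor. "
                     "Correct the text, keep its meaning, and list "
                     "every change you made so I can verify it.")},
        {"role": "user", "content": text},
    ]

def proofread(text, api_key, model="gpt-4"):
    """POST the draft to the chat endpoint and return the corrected text."""
    payload = json.dumps({"model": model,
                          "messages": build_messages(text)}).encode()
    req = urllib.request.Request(
        API_URL, data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Asking the model to list every change it made is what makes the last step in the comment practical: you can critically assess the corrections rather than trust the output blindly.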
I like this comparison. A ton of dotcom-era stuff was "the same tech, applied to different industries" - that's actually common for a lot of cycles, by nature. And there are going to be winners and losers, but it's likely you have a lot of application-specific winners that emerge (vs JUST OpenAI/MS/whoever) as well as a fleet of also-rans.
This also doesn't at all require ChatGPT to be the first step on a road to self-aware sentient AI or anything else like that - the tools being built today will already disrupt a lot of things enough to allow for new winners and losers.
Yes, and also StackOverflow is massively impacted by ChatGPT. Many developers are already using the latter in place of SO. It makes sense for them to try and keep those developers on board.
This style of AI doesn't generate value, it extracts value. You can deliver a 90% as good product for 10% of the cost. That's great for the person arbitraging the 90% savings, but worse for anyone actually using the service. In this regard it functions primarily as a wealth transfer mechanism, so it ends up like blockchains anyway.
If you want to be reductive, you can say the same about delivering knowledge over the internet.
You get "90%" of the value of the world's best library - access to information - but lose things like a well-organized professional scheme, expert humans to assist you with your queries, and the ability to fill up a table with a bunch of books and reference materials side by side vs what you can fit onto your screen at once.
But you also add some new things - you don't have to travel, multiple people can use it at once, etc - that all basically could be thrown into the "10% the cost" bucket: a company doesn't have to spend to send researchers around, they don't have to wait, etc.
And publishers and artists of course were convinced that the internet would be the death of their current business models, that they were just extractive tools, not creative, and bad for their world; and weren't exactly wrong.
They're both just moving bits around and summarizing more than doing the original research to figure out atomic theory, say. But "10% the cost" - and especially "10% the time required" unlocks A LOT of ability to do more, fancier things. As does bypassing gatekeeping requirements (pay to travel, pay to have marketing copy written, pay/have connections to get access to the right archives, "sponsor" an artist to create stuff personally for you...).
I think ChatGPT is generally bad as a creative agent (vs a carefully used tool) and is gonna result in the internet being full of even more low-value BS than before, but I think it's inaccurate to say that it's not going to unlock anything new. It's just gonna look way different than today's internet.
Is a world with Patreon more or less gate-kept from the perspective of "influencing things to your taste" than one where you had to be rich enough to sponsor a Mozart or such?
Is a world where you can't afford to make your own compositions because you don't have the money to afford the tutelage or the free time to master every single step more or less gate-kept than one where you can play around and learn on free or cheap software tools?
It'll be different but there will still be artists - creation will be open to more people than ever before - and I also don't expect that the power-law function of cultural products and taste will change, so there will still be big winners from those artists able to command $$$.
Sure, what are those ten other words, sub them in where you like. Maybe "barrier of cost of entry" for most of the ones beyond historical wealthy-person-patronage? I'm not too concerned with what specifically to call examples of how things will change.
Overall I believe that this is certainly a disruptive, possibly revolutionary, information systems technology but I don't believe it's more uniquely purely extractive than something like the internet or previous disruptive ones.
It can certainly create cheap output, but it doesn't create correct output. AI is being used for many use cases where correct output should be the requirement (e.g. Quora answers). Blockchain resulted in a lot of fraud, AI is going to result in a lot of misinformation.
I think a lot of people are having trouble distinguishing between the two highly-hyped technologies. So there's a lot of "fighting the last war".
Generative AI, or AI in general, has actual use cases that people can immediately understand (and in fact use already on a daily basis). Not one crypto fan ever made a pitch for cryptocurrency that made any sense if you asked follow-up questions.
Economically, generative AI is massively more useful than blockchain algorithms, and it's also a much wider field. It's actually producing useful content right now, in spite of being in a very rough and janky state, like it's straight out of a rushed research lab.
But it's also getting the same slimy, scammy hype that crypto and blockchain got. The space is stuffed with grifters. You can't trust what anyone says about it.
I do believe that we will talk about the AI era as 'changing the industry' and not as 'hot garbage we should have just ignored'.
And we already see the effects, and we haven't even reached the top of the hill.
For me it's the same as with renewable energy and EVs: we have not yet invested as much money into batteries as we could, and the problems we are facing will be gone sooner rather than later, because there is no ceiling currently visible. There just isn't enough VC money for it yet.
Battery tech started to get much more funding in 2022.
AI will definitely change a lot, and the progress is astonishing.
I feel like people learned not to trust their gut with blockchain when they saw a lot of people making a lot of money and noise over things they didn't use and didn't see the point of. But this feels different, because I see "ordinary" people uploading their Lensa portraits and talking about how cool the silly stories they made ChatGPT write were. Things that everyday people recognize as interesting or valuable, and that weren't feasible two years ago, now are.
I'm thinking it's because stockholders don't want a Kodak on their hands. Spending some money on looking at how AI can be used in the company could almost be seen as a defensive move.
Funny side note: I misremembered which company it was, and my attempts at googling a Canon case study brought up nothing relevant. When I tried ChatGPT, it said that Canon did fine, so I asked it whether there wasn't some company that did fail, and it reminded me it was Kodak.
GenAI can do useful work; blockchain can, I don't know, transfer wealth? Run programs that work exactly like every other transactional program, except some of the bits are now on the user's computer so that everyone is at risk of losing them? Reimplement banks from first principles, but with no guarantees, while speedrunning frauds?
Now, it's true that everyone is jumping on the bandwagon, etc., but the two technologies are fundamentally different in usefulness.
It's not about the techs themselves, it's about how investors and shareholders push for the hot new thing to be included in the company, even when it's not relevant (or adversarial to) providing a great product.
That's where I was going with my comment. See this other commenter[1] who quoted StackOverflow's communications:
> Stack Overflow is investing heavily in enhancing the developer experience across our products, using AI and other technology, to get people to solutions faster.
They could have simply said:
> Stack Overflow is investing heavily in enhancing the developer experience across our products to get people to solutions faster.
And it would not have changed the meaning of the message at all, except that we would not have read the magic word.
It's a marketing text, don't expect too much of it. Your simplified version retains the meaning of the original, because neither is actually saying anything. "investing heavily in enhancing the developer experience across our products to get people to solutions faster", if not immediately followed by a list of specific examples, is just manipulative bullshit - it implies a lot, but doesn't actually say much.