The idea that using GPT-3 in a product provides no differentiator is true only if you are rebottling the output with no value added. Stock photos from Unsplash and software libraries from GitHub also provide zero competitive advantage, yet synthesizing a competitive new offering on the back of them is still perfectly possible. Enablers and differentiators are not the same thing.
With the obvious out of the way, I have learned never to underestimate market timing. Having laughed at the IRC-for-peeps they called Slack, perhaps the asymmetry of knowing how to use GPT-3 is still a great untapped opportunity.
this is exactly what i have been exploring/writing about. productizing a tech is a whole other discipline than making the tech itself, and it's not just "heh be good at marketing and distribution lol"
Read this article a few years ago and was glad it was published! Judging by its popularity on HN when it debuted, it definitely reduced competition.
We were building our company on the back of GPT-3 and soon sold it for a life-changing amount of money.
So starting a business around GPT-3 ended up being a very good idea :)
Sure, the GP is happy. I'll admit my main interest as an amateur researcher/pundit is the "how much can current AI components be used as elements/inputs to reliable software" rather than "can you make money with this".
I think the interesting thing is how hard it is to combine neural output into something reliable in the real world. It gives a clue that some ingredient, and not just some level of processing power, might be missing.
Also, why would Unbound pay anything more than the cost of an engineer to integrate GPT-3 themselves? What was their thinking, or what extras did you bring to the table?
The difference between Stable Diffusion and GPT-3 is that the former is open source, meaning you don't have to pay tribute to one party.
The barrier to entry for developing apps on top of Stable Diffusion is higher because you have to set up GPU instances. That's quite expensive compared to GPT-3, where you can just use OpenAI's API.
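For contrast, this is roughly the entire integration on the API side, with no GPUs to provision. A minimal sketch, assuming the GPT-3-era openai Python package and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative placeholders:

```python
# Minimal sketch of the API route; model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available via the paid API
    prompt="Write a tagline for a note-taking app:",
    max_tokens=32,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```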
Point 6 of https://opensource.org/osd is "No Discrimination Against Fields of Endeavor: The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research." Attachment A at the bottom of https://github.com/CompVis/stable-diffusion/blob/main/LICENS... is exactly that kind of restriction.
The OSI's definition is less and less useful nowadays, as the term has taken on a life of its own, just like any other piece of language. Sure, Stable Diffusion has a few license terms that make it OSD-noncompliant, but they aren't nearly onerous enough to make calling it "open source" untrue as the term is understood by the vast majority of speakers.
Unpopular opinion, but starting a business around AI is a bad idea: it's a tool, not a business. AI is the "object oriented" of our times. It'll end up being something used in our tooling, but I recall all those '90s companies that died miserably after basing their whole business model around objects... I feel like AI has the same future.
The implicit assumption here is that AI is just one of several possible tools for a given problem. That is true in some cases, but not in others (e.g. media synthesis, automated transcription/translation/classification). So I think "starting a business around AI" should imply that AI is a core necessity, not just a chosen tool. Granted, it may still be just one useful tool among many, even when it is necessary.
NeXT, for one. They didn't exactly die miserably, given their technology is what ultimately still runs Apple's OS software and APIs.
NeXT's business model was based on objects to some extent at least and they were enthusiastically selling object-oriented software as a business feature, rather than a technical matter only software developers care about.
The way he explains it, it makes perfect sense (as one would expect): In an ideal world, an object-oriented software development approach not only allows non-technical people to define requirements but also enables them to compose and build applications from existing components without having to write a single line of code.
I think the model was ultimately pretty successful, with the primary example being the World Wide Web. IMO the biggest reasons for NeXT's failure were the decline of the 68k coupled with the rise of Windows.
Until a year ago I worked for a company whose initial market was selling object definitions for medical software (in Delphi), all HL7 spec models. They've since pivoted to B2B software rather than marketing to developers, but that business dates back to around the 2000s.
It's not based around AI. It's based around content creation. The AI part is just the means to the end.
These methods allow for easy content creation. It's akin to an industrialization of the mind. We're currently searching for the best human interface to control the outputs so we can attain the results we want immediately and with high fidelity.
Once the images and sounds in your brain can immediately jump to the screen, you'll see what this has all been about.
I don't see how this is at all comparable to object oriented programming. These techniques solve real business and social needs. They automate entire decades of learning, hours of toil, and free up enormous capital.
While I enjoyed reading this article, I more or less disagree with it.
To me, a large language model like GPT-3 is now a fungible architecture component, multi-sourced from OpenAI, Hugging Face, etc. For many NLP tasks, not using modern deep learning models in your infrastructure dooms you to writing inferior systems.
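To show what "fungible" means in practice, here's a minimal sketch of hiding the model behind one interface so vendors can be swapped. It assumes the GPT-3-era openai package and Hugging Face's hosted Inference API; the model names and environment variables are illustrative:

```python
# Sketch: the LLM as a swappable component behind one interface.
# Model names and the OPENAI_API_KEY/HF_TOKEN env vars are illustrative.
import os
import openai
import requests

def complete_openai(prompt: str) -> str:
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=64
    )
    return resp["choices"][0]["text"]

def complete_huggingface(prompt: str) -> str:
    resp = requests.post(
        "https://api-inference.huggingface.co/models/gpt2",
        headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
        json={"inputs": prompt},
    )
    return resp.json()[0]["generated_text"]

# The rest of the product only ever calls `complete`;
# switching vendors is a one-line change.
complete = complete_openai
```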
Not that I agree with the GP, but they're making a different (and compatible) argument: using tech X may not be an advantage, but not using it may be a disadvantage.
I am under the impression that the optimal strategy, if you have technical skills, is to be constantly on the lookout for new shovels or other tools to sell to the prospectors. Build quickly, sell quickly, and it doesn't really matter if your idea is truly viable beyond six months or so.
There's already a user talking about their success in this very thread, though obviously they're an outlier and probably closer to the prospector than the shovel seller in some ways. I've seen multiple examples on HN and various other communities: crypto, NFTs, AI, contractor arbitrage, the clients of contractor arbitrage, Spotify track generation, dropshipping, services for social media businesses, ASMR, literally any buzzing hype train of the past few years.
Come to think of it, asking for examples is probably a bad idea, because if you've already heard of something for a significant time, the ship has sailed or is dangerously close to sailing.
Despite the relative proportion of entrepreneurs being so small, the population is so large that there will always be a healthy mass of prospectors forming around the next hype train. Most will fail. There will also be those who go the shovel-sales route, and they will be very smart about it, but the prospector market is usually going to be large enough for everyone who actually attempts to monetize it.
The prospectors are almost certain to spend money in a rush toward the hype. The quality of your product isn't essential as long as there is something marketable there, which is the biggest hurdle. It's not clear the prospector will find gold, but it's clear they will need a shovel.
Now, why am I talking about this instead of doing it? Well, I mentioned the technical skills earlier...
I'm enjoying the article, but I feel the "Economies of scale" section makes an incorrect comparison between Spotify's business model and a theoretical business using OpenAI's API. The author suggests that since Spotify pays royalties per song played, getting more users doesn't mean more money for them, and then claims a business using GPT-3 would have a similar limitation.
There are a couple of things I think are wrong with this. First, depending on the sort of users Spotify acquires, more users does directly translate to more earnings. What doesn't scale well for Spotify is how active the subscription-paying users are: a user who listens to 50 songs a day costs more than a user listening to only 10, since the subscription price is static and common across users regardless of usage.
That last point is where the author gets the next thing wrong: assuming that services employing GPT-3 will charge fixed subscriptions instead of using a pay-as-you-go model (like AWS). I'm sure there will be businesses with fixed subscription prices that are independent of usage, but we shouldn't assume there is anything about GPT-3 that makes that more likely, or even very different from other cases where fixed subscriptions are used. There will always be some cost per user, be it raw electricity or cloud infrastructure; GPT-3's API is just one more per-request cost to consider.
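To make the unit economics concrete, here's a back-of-the-envelope sketch; every number in it (per-token price, request size, subscription price) is an assumption for illustration, not real pricing:

```python
# Back-of-the-envelope unit economics for a GPT-3-backed product.
# All numbers are illustrative assumptions, not real pricing.
PRICE_PER_1K_TOKENS = 0.02   # assumed API cost, USD
TOKENS_PER_REQUEST = 500     # assumed average request size
SUBSCRIPTION = 10.00         # flat monthly price, USD

def monthly_margin(requests_per_month: int) -> float:
    cost = requests_per_month * TOKENS_PER_REQUEST / 1000 * PRICE_PER_1K_TOKENS
    return SUBSCRIPTION - cost

for usage in (100, 500, 1000, 2000):
    print(f"{usage:5d} requests/mo -> margin ${monthly_margin(usage):7.2f}")

# Under these assumptions a heavy user (2000 requests/mo) costs $20 and
# flips the $10 flat subscription negative, which is exactly why
# pay-as-you-go pricing sidesteps the problem.
```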
When the tech is open sourced, the product managers get to shine. Requiring less engineering skill to pull something off means a wider range of product people (founders, marketers, corporate product managers) get to show their skills at finding product-market fit in more niches. Starting a business around Excel was not a bad idea at all. OpenAI is becoming the Fairchild of our time: keeping things closed has triggered an exodus of brains and the open source activity that creates the Cambrian explosion. If they keep up their strategy, they will end up as the marketers of tech that is commercialized by open source instead.
I don't think OpenAI being open source would have made much of a difference. The AI cat is out of the bag, so open source models and tools would have come anyway. Once the idea is out there, it's only a matter of time.
The only people keeping some AI secrets secret would be quants and perhaps the NSA.
Also, the arXiv literature explosion is proof of the Cambrian explosion.
Training the damn things, though, that's the tricky part. I want to build products on your platform, as that's where the money is.
The drive to be differentiated is a far more powerful force than people appreciate. When big oil was deciding what to use as an anti-knock agent (tetraethyl lead or ethanol), the two biggest concerns that tipped the decision in favor of lead were:
1. Ethanol might become a competitor to oil.
2. Lead could be patented, licensed, and used to differentiate their gasoline from competitors.
> Meanwhile, the profits will accrue to the true beneficiaries: 1) the algorithm owners, OpenAI [...]
This seems incorrect to me. The crucial parts have been reimplemented. The weights are their only secret sauce and equally good free replacements are only a matter of time.
I agree, they get replicated fast. But large language models also have a democratising effect even when they sit behind a paid API: they lower the barrier to many NLP tasks. They take skills from the internet and repackage them in useful, customised forms. This means the benefit really is spread around to everyone building on them; they can build in a day what used to take a month or a year. I see them as "open sourcing" all these previously hard-to-access AI skills.
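As a concrete example of the lowered barrier: a task like sentiment classification, which used to mean collecting labelled data and training a model, reduces to a prompt. A minimal sketch, assuming the GPT-3-era openai package; the model name and prompt wording are mine, not from the thread:

```python
# Sketch: zero-shot sentiment classification via a prompt, no training data.
# Model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def classify_sentiment(text: str) -> str:
    prompt = (
        "Classify the sentiment of this review as "
        f"Positive, Negative, or Neutral.\n\nReview: {text}\nSentiment:"
    )
    resp = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=3, temperature=0
    )
    return resp["choices"][0]["text"].strip()

print(classify_sentiment("The battery died after two days."))  # e.g. "Negative"
```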
I see LLMs as part of a wider trend: we used to transmit information orally, then we invented writing, then printing, then media and the internet. Now we can transmit the distillation of our whole culture as a model, and it can be applied directly to solve problems. It's the next step in the propagation of culture.
Given that it's a roughly two-year-old article (although there's no date in the post), I'm wondering whether these predictions proved correct:
> The barrier to entry to developing a viable product gets low for everyone, meaning hundreds of competitors will pop up overnight.
> A lot of founders are going to try to start businesses based on GPT-3, and a lot of money will go into them, and it’s going to be a blood bath.
I'm not really following the AI startup landscape, but I haven't seen a Cambrian explosion of GPT-3 apps, though I've noticed a few. "Blood bath" is also way too dramatic. Has anyone seen a post where the founder of a heavily GPT-3-based startup cried about how their startup was destroyed because "x"?
It has happened in some cases, e.g. with all the "generate marketing copy with AI" businesses.
But yeah, in general the article assumes that GPT-3 will have lots of applications where it's super easy to make a useful product with very little extra effort, and that is just not true. Twitter demos are easy; robust and useful products are not.
Seeing Grammarly doing quite well (judging from all the ads on YouTube), I can imagine a GPT-3-based editor that improves the user's prose, and I suppose it could be quite popular. Perhaps writing technical documentation could even become fun.
GitHub Copilot (in markdown mode) provides that already. I'm increasingly using it to help write technical documentation and blog posts - it works great.
You can even paste in a chunk of code to give it some hints, start writing about it (with Copilot assistance) and then delete the code later.
submitted this because i have been doing a bunch of research around productized AI businesses (https://lspace.swyx.io/) in preparation for someday pivoting
notable that in the small, this post was "wrong": Jasper AI went from 0 to $60m ARR in the 2 years since this post. sure, you could regard them as "winning the lottery", but i'm sure if you asked their bank account they wouldn't agree that starting a biz around GPT-3 is a bad idea :)
well, on a very superficial level, but this is real work man (https://www.youtube.com/c/jarvisai) and it also involves solid "product thinking" for nontechnical users to hold it right.
http://interiorai.com/ is on the surface just stablediffusion, and sure pieter is an incredible marketer/has huge distribution, but he is doing real product level work to make it more usable for his chosen usecase, and that should not be ignored
Isn't GPT-3 a glorified autocompletion algorithm? Why would anyone want to start a business around it? I understand it's fun to play with, and NLP has come a long way since Eliza, but at the end of the day they aren't that different, in the sense that it's not real AI: like Eliza, GPT-3 has no understanding of the text it generates. Using GPT-3 to provide one feature of your product is one thing; creating a business around it makes no sense IMO.
Alternatively, figure out how to use GPT-3 in a market that involves some schleps. I'm working on one in Education Tech, building something for a need that teachers have been practically begging for. There are particular regulatory challenges, unique sales paths, and a big first to market advantage because educators aren't online constantly researching their options. Once you're embedded in the education consciousness, you're there for years.
But yes, yet another random GUI for generating marketing copy from GPT-3 isn't a good long-term play.
There was a site making $50k USD per month providing very basic analytics in the early days of Twitter. Even if it lasted only six months at $50k, that's $300k for a side project.
So, if you have a great idea for a business based on GPT-3, don't do it, because you might be able to finish it in an afternoon? That seems better than having a great idea with many barriers to entry.