This is just one of the many reasons why I don't think I'm as concerned about AI models as people around me are, either for art, text, or coding.
These technologies today are highly subsidized because the people making them want to create a lot of buzz. Yes, as they mature they're going to get better; but I also suspect they're going to become more limited, harder to use, more narrowly targeted, and more expensive to access.
In general, we rarely use new technology to its full potential, because doing so usually closes off commercial opportunities to exploit that technology. The norm is that new technologies are used to a fraction of their capabilities on the market, and are often locked down and purposefully hindered in order to protect market segments that are built around exploiting those technologies. See the ebook market and voice assistants, just off the top of my head.
Given that so much of this AI training/hosting is happening serverside where access controls are easy to add and where AI as a service is the most obvious monetization model, I would not be surprised at all to see AI go in the same direction.
I'm not sure what will happen with ChatGPT -- this is not a prediction of what I think will definitely happen. But I don't think it's impossible that ChatGPT transitions to focusing specifically on being a platform for other businesses, even if that limits its flexibility and the range of content it can produce, and even if that means it gets priced primarily for large businesses rather than for indies/startups.
I think you're right that these models will soon be aggressively locked down and monetized, but I'm still pretty nervous about them. The dynamic of powerful but inaccessible tools is going to lead to a lot more inequality. If AI does start to create a large amount of business value, that value will be almost entirely captured by the wealthiest companies and individuals, and will further cement existing moats.
There's a good chance that these tools will be of limited use to individual creators while being very useful in certain capacities for the Microsofts of the world. If these companies need to choose between making the world a better place and increasing quarterly profits by a notable amount, I don't think there's much question what they will choose.
insert Cory Doctorow post that I'm too lazy to look up about how the Luddites weren't against technology, they were against the way it was being used against them
Honestly, most of my concern around AI has very little to do with the technology or its effectiveness, partly because I don't think technological effectiveness is necessarily the biggest indicator of which technologies will win in a market. I have zero doubt that a company like Adobe or Microsoft would love to become the gatekeeper of what content gets made and how people make it, regardless of whether their gatekeeping actually makes it easier for people to create.
Every big company would love to be the middleperson between writers/artists/coders and their creative/professional output.
But even there, there's a big difference between AI being a tool used against normal professionals, and AI taking over the world and putting every programmer out of business. I definitely don't mean to dismiss concerns about access, but I think I'm a lot more bearish about the ability of the modern tech industry to pull off commercializing a genuinely useful, important category of tech without immediately hampering it to the point where ordinary people start to notice and where it stops being a great replacement for the thing it was supposed to replace.
It's so weird to me that people don't know that text-davinci-003 and code-davinci-003 exist. They're OpenAI models available via the API, and in many cases their output is very similar to ChatGPT's. text-davinci-003 was also released in a similar time period.
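For anyone who hasn't tried it, here's a minimal sketch of what calling one of those models looks like. This assumes the older 0.x openai Python package and uses a placeholder API key and prompt, so treat it as illustrative rather than exact:

    import openai  # legacy 0.x interface: pip install openai

    openai.api_key = "sk-..."  # placeholder; use your own key

    # text-davinci-003 is served through the Completions endpoint
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Explain what a loss leader is in one sentence.",
        max_tokens=64,
        temperature=0.7,
    )
    print(response["choices"][0]["text"].strip())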
It seems very affordable to me as a startup. I have already coded most of the credits system. https://aidev.codes
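In case it helps anyone, here's a rough sketch of the kind of per-request credits check I mean. This is a hypothetical example with made-up names and an in-memory store standing in for a real database, not the actual aidev.codes implementation:

    # Hypothetical per-request credits check -- not the actual aidev.codes code.
    COST_PER_REQUEST = 1
    credits = {"user_123": 50}  # stand-in for a real datastore

    def charge(user_id: str) -> bool:
        """Deduct one request's worth of credits; refuse if the balance is too low."""
        balance = credits.get(user_id, 0)
        if balance < COST_PER_REQUEST:
            return False
        credits[user_id] = balance - COST_PER_REQUEST
        return True

    if charge("user_123"):
        print("ok -- forward the prompt to the model")
    else:
        print("out of credits -- ask the user to top up")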
I kind of disagree with this; I think ChatGPT is a superior system in terms of pure output. It strikes me as the sort of comparison people make between open-source voice recognition (which is great for a lot of stuff) and the serverside systems (which may be more than you need, but are almost always going to produce better output). Just my opinion, though.
> Available via API
The only way that an API gets monetized is through a subscription model or as a loss leader for other content (ads, etc.). Given that AI chat isn't a great fit for advertising (at least not without making it a lot less useful), I strongly suspect that the API model is going to get more expensive in the future.
Again, just something I suspect. Maybe it will get much cheaper to host, but the optimism people have about that is, I think, more of a hope than a solid expectation. Plenty of things that are much cheaper to host than an AI system still don't end up being affordable to individuals, and end up being marketed primarily towards businesses.
I mean, we'll see if that's still the case a year or two from now. Maybe the models will stay very affordable; I could very well be wrong.
It's just not the trend I usually see in tech, and I don't personally see a lot of differences between AI and other tech products that make me think it's going to be an exception to the general direction subscription services usually go.
That's a very good point about things like voice assistants, but I don't think your conclusion is right. We do have voice assistant technology; it's widespread and a lot of people use it daily. It's a wild sci-fi technology and I wish I could use it, but in the interest of corporate profit it is not something that can be used without allowing a tech giant to harvest your identity and labor.
When I finally bought a new Pixel and set up GrapheneOS, I had never felt more like I was living in the future. For the first time in my life, I had a truly personal digital assistant. I mean, there's still tracking all over the web, and who knows what's really hidden in the Google hardware or whatever. But this is the closest I can reasonably come to actually owning my portable digital life. And it's laughably out of reach for 99%+ of the population of the US, let alone the world (due to cost, specific hardware requirements, knowledge of its availability, and the difficulty of installation).
So I don't get to benefit from Google's world-class mapping and navigation. I don't get to use voice-to-text or even keyboard swiping reliably. But most people do, in exchange for their privacy and the value of their labor. And I think that's what we'll see with AI as well; they'll find a way to monetize it by deceiving the public into getting used to having access for free, and the power imbalance will continue to grow in our society.
> But most people do, in exchange for their privacy and the value of their labor
I don't know. I'm not just talking about privacy -- voice assistants are hindered by compatibility problems between ecosystems, by limited functionality for most non-programming/engineering users, by a lack of common UX between assistants that makes them difficult to use, by a lack of reliability that blocks complex tasks, and by their general obtrusiveness.
My understanding is that usage numbers for voice assistants can be best described as a plurality[0], and that the majority of usage is common tasks like setting timers/reminders or hooking into Spotify. That's definitely not nothing; I wouldn't call voice assistants a failure on that front. But they're a far cry from the revolution in computing UX that they were initially cracked up to be; they continue to be (as far as I can tell) situationally useful tools with a lot of gimmicks tacked on. Other people's mileage may vary, though; I'm sure they've been transformative for some segment of the population -- but I'm not sure even the most useful features (hands-free texting, etc.) are actually what I would call transformative as much as iterative improvements over existing UIs. They didn't take over the world or fundamentally change computing interfaces; in fact, we're starting to see movement back towards integrating screens into smart homes now.
Add to that the difficulties in properly monetizing them, and the way the technology was eventually consolidated into a couple of big competing ecosystems because of how hard it is to build/train the models or build ecosystems around them -- these are issues that I suspect prevented the technology from ever being explored to its full potential. We don't know the numbers for certain, but signs seem to point towards most voice assistants being at least borderline unprofitable[1]. My feeling is that the people investing in AI-chat models would be unhappy with an outcome that looks like this.
If AI-generated content becomes a situationally useful tool that is very helpful in some scenarios but ends up being ignored for most complicated projects/tasks, then yeah, that's not nothing. But it's not really a creative revolution either, especially if it has the same monetization problems. And I suspect the monetization problems may end up being a lot worse, because the data collection and advertising opportunities for ChatGPT seem a lot more limited than they are for search engines or digital organizers, and (I assume) these models will be a lot more expensive per query to host.
Some of this is subjective; it's not like voice assistants are failures. But I feel like if we were to go back to 2014/15 and tell voice assistant advocates that they would eventually hit usage among maybe 60% of smartphone users, would be primarily used as a text-to-speech engine and as a way to set timers, and that the biggest market leaders would still be unprofitable in 2023 -- I think people back then would have regarded that as a pessimistic take on the technology.