OpenAI predicted to generate over $1B (the-decoder.com)
117 points by thm on Aug 30, 2023 | hide | past | favorite | 138 comments


OpenAI's investor pitch a year ago was that they would make $1B in revenue in 2024, so it seems they are exactly on track. Not sure what the shatters bit is about, especially given that the submission strangely validates it by comparing with 2021 revenue.


I guess they left out the part about it costing $4 billion to achieve it :) (FYI: I made up that number for dramatic effect)


I mean, if their inference costs are $700k per day, we are at approx. $256M per annum, which puts that part of the operational cost at maybe 1/8th of revenue.

So where did the other 4bn go?
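The annualization above is easy to check; here is a quick sketch (the revenue figures are hypothetical round numbers, not reported facts):

```python
# Back-of-envelope annualization of the $700k/day inference estimate.
daily_inference_cost = 700_000
annual_inference_cost = daily_inference_cost * 365
print(f"${annual_inference_cost:,} per annum")  # $255,500,000, i.e. ~$256M

# The share of revenue depends entirely on which run rate you assume.
for annual_revenue in (1_000_000_000, 2_000_000_000):
    share = annual_inference_cost / annual_revenue
    print(f"share of ${annual_revenue / 1e9:.0f}B revenue: {share:.0%}")
```

Against a ~$2B run rate the share lands near the 1/8th mentioned above; against $1B it is closer to a quarter.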


Training and salaries


Hey - For the record - I just made that number up.


Too late - it's already part of the training data.


Now that you mention it, maybe incorrect training data has adversarial advantages? Boosting the ability to separate the wheat from the chaff.


4 gazillion bijillions


You might want to make that clear by editing the original post.


> Training and salaries

Definitely not; take a look at the dark side of reinforcement learning [0]: this is a digital sweatshop these mega corps are exploiting to refine their algos.

I was going to try GPT-4 today to see if it is really that big of a leap from 3.5/ChatGPT, but to be honest this announcement makes me think that their revenue stream isn't from the paid subscriptions: the service was down earlier, and while people complained, the reaction was nowhere near what I'd expect for such a wide-scale outage.

I still cannot piece together how they are making any revenue if their overhead is as 'eye-watering' as Altman made it out to be. Surely the M$ deal helped with that, but still, where is this money coming from, and is this just more smoke and mirrors to keep the hype up?

0: Training and salaries



Cap-Ex


I would guess most people expect those numbers to be heavily inflated and mentally adjust expectations. If a company hits their "expected" numbers like this, as an investor, I'd consider my expectations shattered.


Their real issue is can they defend their position? I would say not likely. Google has a web scraping infrastructure in place and invented the technology. Facebook also has the capability. M$ will likely try to buy them or partner long enough to develop the tech and be a competitor. This isn't the car industry where it literally takes industry leaders years to figure out that electric cars are really a thing.

I think Google is the likely winner since this will definitely enhance search and be a service to sell to large multinationals.


What do you mean by, "M$ will likely try to buy them"?

My understanding is that Microsoft owns a 49% stake of OpenAI and gets 75% of their profits until Microsoft recoups their investment.

Are you saying you feel that OpenAI will not find a path to profitability and will sell their remaining stake to Microsoft?


I was unaware of that. Interesting - "Profits" is an interesting term. Movies never make "Profits" despite "Grossing" billions.

I was just extrapolating from observing M$ behavior for several decades. M$ to my knowledge has never been the sharing type. They will control it either by buying it outright and owning or they will develop their own version still owning and controlling. They will weave the functionality into their Office products. They would likely consider it suicide to depend on an outside party to supply a key tech for their cash cow.


They already have woven it into their office products. ChatGPT is licensed out through Office365 as a sort of "co-pilot for Office" at $30/mo. It's currently trialing with ~600 companies in their early access program.

price: https://blogs.microsoft.com/blog/2023/07/18/furthering-our-a....

user count: https://www.microsoft.com/en-us/microsoft-365/blog/2023/05/0...


Probably as an acquihire? For MS, it makes huge sense to buy an industry-leading AI org at $100B, or even more than that. 75% of OpenAI's profit is very small compared to what they'll get from this hypothetical acquisition.


Tbh I don't think OpenAI would sell for 100bb unless they are seriously strapped for cash and can't fundraise any other way.

Given their current revenue projections I would be surprised if that happened


They spent $8 billion to buy Skype in 2011.


To be clear: I think Microsoft would gladly pay 100bb to acquire OpenAI (although I think that one would give the lawyers a headache given the weird corporate structure)

I don't think OpenAI would sell though

The top people at OpenAI are all ideologically motivated


I am with you. It is not hard to imagine that either 1) Google (et al) swoops in with a massively superior model and cost structure or 2) Facebook (et al) commoditizes this space with an open source model like Llama 2 that brings this down to essentially barely break-even infrastructure hosting.

In any case, it does look like Microsoft is going to eat the $10B.


I don't understand where Amazon is in this game. They are the biggest cloud provider. Azure is offering first-class pretrained ML model hosting (content moderation, OCR, OpenAI, etc.).

Google has vector search SCANN library and hosted vector db offerings.

AWS is nowhere to be seen.


https://aws.amazon.com/bedrock/

https://aws.amazon.com/bedrock/titan/

I'd expect a bunch of announcements at re:invent in November as well. They often hold off on big stuff for re:invent.


Yeah, I guess. I feel like google and FB have been releasing interesting ML models every month now.

I’m skeptical Amazon can be like apple and drop everything at once. I find many of their announcements are lost in their own noise.


AWS offers optimized inference and custom infrastructure (https://aws.amazon.com/machine-learning/inferentia/).


Does Amazon have enough GPUs?



I don’t think Google is a good competitor to OpenAI as a quality AI product is harmful to their search/ad business.

OpenAI is perfectly happy killing the search/ad business because they aren’t in that market. They are happy to replace a $75B search market with a $20B AI chat market. Google not so much.


My Google results already have an AI response integrated, similar to the knowledge graph answers. So seemingly they are head on competing; no reason they can't still display ads next to that.


Would you still use it though if there's a better AI answer from someone else and there isn't a confusingly placed ad next to it?


Why do you think it is "harmful to their search/ad business"?

I see it as an enhancement. It will not be long before Word and Google Docs are using AI to help people write copy, correct, and translate documents. Those are just the ones I can think of, and I'm not that smart. Do you really think either Google or M$ will allow a 3rd party to control something that is integrated into their products? Only companies that are out of business do things like that.


Because there are fewer opportunities to show ads with a chatbot, especially if it's voice-based. From a user perspective it’s an enhancement. From a revenue perspective, it’s harmful.


Except chatgpt isn't really a google replacement. Not only is the information routinely outdated, people are starting to finally realize just how often it's either blatantly or subtly wrong.

Google is much better positioned to have AI enhanced search.


What makes them much better than MS, with Bing, especially when a small company like perplexity.ai is providing something that's years ahead of Google?


SEO is slowly killing its search business and general AI is probably the only meaningful long-term solution to that threat. Compared to that, cannibalization inside Google is a more manageable problem.


Lol if they're doing 1 billion next year I bet the market is at least as big as search


Bing injects ads into its responses?


My google search results now have a section on the top that an LLM generates along with the rest of the results. I don’t think they have to be separate.


I think there will always be multiple large players that will compete on price, availability, performance, “smartness”, etc. Especially since GPU resources are finite and there is explosive demand.

I don’t see how you think Google could be a better service for multinationals than Microsoft, who extensively uses OpenAI models. Microsoft is the largest producer of enterprise software.


Only speculating - the point is there are multiple resource rich competitors to OpenAI. Although OpenAI appears to be having early success that may not last given the strength of the competition and the degree to which those competitors believe the technology is key to their survival. Of course there are plenty of things I don't know and are not privy to.


Sure, there will certainly be competitors. But even if OpenAI loses market share, their revenues could go up dramatically due to the increasing size of the market.

So the doom and gloom talk about OpenAI having no moat seems very overblown to me. They don’t have to maintain absolute dominance in their model. They just have to have a competitive product.


I agree, but Facebook isn't likely to build a competing product. They don't have the necessary data, and much of the data they do have can't be used to train large language models (LLMs). This is something I've heard from friends who work at Facebook AI.


Why wouldn't corporations and users just stick to the original AI chat model that gave them the best results the first time? I think that people are lazy about switching because it's very difficult to personally validate whether something is better or worse quickly/robustly, and when you do get bad responses it's really annoying and makes you churn back to the original model that you used.

I do sometimes wonder whether they could be replaced with a free model running on a user's local machine, however, on the other hand we might always wish to pay for best-in-class (according to some evaluation criteria, that we eventually give up on trying to self-evaluate).


We've seen this story with all kinds of technologies in the past (Symbian and Blackberry -> iOS and Android. Yahoo and Altavista -> Google. Netscape -> IE, IE -> Chrome. Lotus/WordPerfect -> Word/Excel.)

Most potential users are not actually yet users of any product in this space.

Also while nobody will just spontaneously switch to an equally good or worse product, transformative improvements can and do happen.

Finally, users can be swayed by discounts and marketing to switch to an equally good or only slightly better product.


I’m sure that once we reach the good-enough stage, people will be too lazy to switch, but I don’t think we are there yet. Especially if another one can save you lots of time.

It’s like the beginning of search engines. Altavista. Yahoo. Lycos. There was a lot going on but none of them were good enough. Once google showed up, people switched permanently and rarely switched back. But that took years. And also that took billions of dollars to buy the default search box (creating a whole new browser or paying apple for the iOS search bar)


There’s a problem that Google is too big already and that at least some people now have trust issues with them and would prefer to get their AI from a different provider than the search and email services.


I hate wrapper articles like this that don't add any value apart from picking the most bombastic headline. Original The Information source: https://www.theinformation.com/articles/openai-passes-1-bill...


Hacker News has The Information as a banned domain due to its paywall: https://news.ycombinator.com/from?site=theinformation.com

Submitting from a secondary source is a workaround to the issue.


Hmm, didn't know that, thanks. Seems strange not to give credit to the people that did the hard work of breaking the news; it incentivizes grifty wrapper articles like the SemiAnalysis GPT-4 one, where a Twitter grifter just straight up recycled the paid content and put it out for free (and had the audacity to charge back his credit card, lol).


Which one is it now? OpenAI spends one gazillion a day on inference and will go bankrupt in 2 weeks or it's making billions


Perhaps read past the headline to discover some awesome new details, like that it's talking about revenue, not profit?


I think comment OP was referring to the viral news piece that came out just a few weeks ago that stated that OpenAI's expenses were $700K a day and it would go bankrupt by 2024.

https://www.wionews.com/science/openai-may-go-bankrupt-by-en...


That is also literally in the headline


Why not both?


Can’t remember the source, but I read that their daily server burn is something like $700k. If their monthly revenue is 80M then that’s a pretty nice piece of profit.
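Taking both secondhand figures at face value, the margin on server costs alone is easy to compute (this ignores training, salaries, and everything else):

```python
# Rough monthly gross-profit check: $700k/day server burn vs $80M/month revenue.
monthly_server_cost = 700_000 * 30          # ~$21M/month
monthly_revenue = 80_000_000
gross_profit = monthly_revenue - monthly_server_cost
margin = gross_profit / monthly_revenue
print(f"${gross_profit:,}/month, {margin:.0%} margin on server costs")
```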


The server burn will have likely scaled significantly as traffic scaled. There's not much in the way of economies of scale here, you just need to rent/buy all the hardware.


The cost scaling coefficient is likely < 1 as the increase probably allows them to pack better for inference and reduce their costs.
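A coefficient below 1 would look something like this (purely illustrative; the exponent 0.9 is a made-up number, not anything OpenAI has disclosed):

```python
# If batching lets total inference cost grow sublinearly with traffic,
# cost(u) = c * u**alpha with alpha < 1, then per-user cost falls with scale.
def total_cost(users, c=1.0, alpha=0.9):
    return c * users ** alpha

for users in (1_000_000, 10_000_000, 100_000_000):
    print(f"{users:>11,} users -> per-user cost {total_cost(users) / users:.2e}")
```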


Given their scale I assume they basically got the best performance out of this packing a year ago or more. This sort of improvement has quickly diminishing returns. 1m DAU to 100m DAUs probably doesn't do a huge amount to this.

On the other hand, given the hype, I'd expect that their free usage has grown much faster than their paid usage, at least for ChatGPT. I suspect their costs have therefore gone up faster than their revenue. API revenue might be the counter to that, there isn't free API usage (that I'm aware of?) so all that increase in usage is likely beneficial.

Taking a step back though, it's uncommon for this sort of tech company to suddenly jump to profitability in the space of a year. AI is big, but so were many trends before it. Sam Altman is also experienced here, likely a good fit for CEO, but he comes from the SV VC world, he's not going to be optimising for early profitability because that's not the way he thinks or what he cares about. OpenAI could raise a ton of cash on good terms so why bother optimising for profitability and stifling growth.


Who actually pays that bill? Since MS owns half of openai, and openai is using MS compute...


I don't think MS owns anywhere near half of OpenAI.

Most likely though, OpenAI pays. Some of that would be with money MS gave them originally, but that's kinda the point of the MS investment, they do it based on OpenAI funnelling some of that back into MS.


Speaking of: In the past few weeks it feels like GitHub's/OpenAI's/Microsoft's Copilot has gotten dramatically more powerful - like it's including a larger code context when doing the suggestions. Does this resonate with other people's experiences?

(Could be random based on what I've happened to work on lately though.)


Yes, they expanded the context window, announced two days ago: https://github.blog/changelog/2023-08-28-github-copilot-augu...

I'm sure they incrementally rolled this out so it's possible you are in a cohort that got this change earlier.


Thanks!

"We’ve officially rolled out the 8k context window for all code completion requests!"

It's odd that they don't lead with that.


I certainly believe $1B rev (and am not surprised!), but what about P&L? :) Everyone knows their servers are running hot!


Is there any profit margin on that? Or is it the typical VC-backed selling dollars for quarters?


We'd need to know acquisition costs and lifetime value (difficult to project with such a new product and with Google and Facebook coming up with potential competitors that might cause users to churn). The idea of looking at quarterly P&L and determining if a business is legit or not doesn't really make sense. Are they profitably buying growth (if acquisition costs are lower than LTV, you're buying future profits that don't show up on that quarterly statement) or just burning money? We'd also want to understand if there's any value to subsidizing things. Perhaps they're able to fine tune and improve based on what users are doing and that's helping them build a moat - or maybe there's no value there.


Based on what we’ve seen acquisition costs are likely extremely low. This is an “if you build it they will come” situation and they built it.

Lifetime value they don’t even know at this point — the best we could hope for is an early (and probably still seasonal) assessment of churn.


Well, part of the acquisition cost is giving out 3.5 for free, hoping to convert people to paid ChatGPT. It's possible they're even subsidizing the paid ChatGPT for additional data (they believe it helps them improve their product and build a moat) and to convert people to Enterprise. For all we know, all their current revenue is considered acquisition cost for Enterprise use, where they might think the real money and profits are.

I understand they probably don't have any solid idea on LTV yet. The point is that looking at quarterly numbers doesn't tell us if any VC backed company is lighting cash on fire or making smart investments.
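The LTV-vs-CAC comparison can be sketched with a toy model; every number below is hypothetical, and the point is the comparison, not the values:

```python
# Toy unit economics: acquisition spend is an investment if LTV exceeds CAC.
def ltv(monthly_revenue_per_user, gross_margin, monthly_churn):
    # With constant churn, expected customer lifetime is 1/churn months.
    return monthly_revenue_per_user * gross_margin / monthly_churn

cac = 5.0                       # hypothetical cost to acquire one subscriber
value = ltv(20.0, 0.5, 0.05)    # $20/mo plan, 50% margin, 5% monthly churn
print(value, "profitable" if value > cac else "burning cash")
```

With these made-up inputs the acquisition pays for itself many times over, but a higher churn or CAC flips the conclusion, which is exactly why the quarterly statement alone can't settle it.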


Fair points.


If Uber could take 14 years to turn a profit, why can't OpenAI get a similarly long leash?


Interest rates.


They are not borrowing from a bank.

Microsoft can fund them interest free as long as they want if there's a slim chance of OpenAI being useful to MS for one upping Google a few more times.


I wish I could figure out how to be a beneficiary of a trillionaire pissing match.


Bet on the winner and buy stocks?


Because interest rates were in a completely different league back then.


Completely different industry and business model.


Because we are all hopefully wiser now after that debacle


I tend to think that the real debacle is that people think it is a debacle whenever a business takes a long time to reach profitability.

I bet a lot of great innovation has been held back by the focus on short term returns.


the scaling effects and deflationary nature of technology are much better-established than those of humans driving large, expensive cars.


You didn't play Monopoly much growing up, I guess. Winning in capitalism is all about securing position. From there, leveraging profits isn't hard.


Three months ago, I already predicted similar numbers for what they pay Microsoft: https://news.ycombinator.com/item?id=36288058#36291960 based on public information from Similarweb.

This new prediction shatters nothing.


I mean they’re getting like… (checks console) $40k/mo from me at this point…

So I believe it.


Can you tell us what you're working on?


I was playing with their API for half a day for code generation and the cost was a few dimes. I was writing a package for making matplotlib plots. https://github.com/mljar/plotai

Are you building some public service that relies heavily on the OpenAI API?


This is like 450M tokens/month, assuming GPT-4 32K, averaging I/O to $0.09/1k tokens.
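The estimate above reproduces like so (the $0.09/1k blended rate is the parent's rough GPT-4 32K average of input and output pricing, not an official figure):

```python
# Back out monthly token volume from spend and a blended per-token price.
monthly_spend = 40_000                  # dollars per month, from the parent
blended_price_per_1k_tokens = 0.09      # dollars per 1k tokens, assumed
tokens_per_month = monthly_spend / blended_price_per_1k_tokens * 1_000
print(f"{tokens_per_month:,.0f} tokens/month")  # ~444 million
```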


I still remember some dork on Twitter claiming that a source within the company told him that they had already trained GPT-5. Mind you, this was back in April I think. Kind of reluctant at this point to believe anyone who spreads rumors about the inner machinations of OpenAI.


You mean a whole 3 months ago? So they still might be working on it.

Does anyone else remember when decades were a long time ago rather than months? Or am I just very old...


this is actually plausible, though. GPT-4 finished training in August 2022 and was released March 2023. OpenAI took a lot of time to test/secure/fine-tune/whatever. if they're looking for a roughly yearly release cadence then they might have just about finished training.


So am I to believe some tech figure on Twitter with a large following who has an unnamed source? Or Sam Altman, who himself has said that no work whatsoever has been done nor will be done on GPT-5 in response to the rampant speculation this single tweet created?


The ability to replace Siri with chatGPT would 10x the usefulness of my phone.

If Apple is ever smart enough to add this feature - can’t wait!


I think they are going to. But in Apple fashion, it will probably start with pretty limited abilities; maybe it will only allow certain strict actions to be performed on the phone.

I don’t think you’d want more anyway. It’s not like you’ll want it to output Python code; for that you can use an app that specializes in and has the UI for that task.

I have a hard time believing ChatGPT would 10x your phone unless the UI of Siri also changed.


Fwiw, I use the ChatGPT app

I record voice memos (it generates a transcript using Whisper). At the end of the memo I can:

* Ask it to summarize the memo into bullet points

* Ask it a specific question

It's incredibly valuable and I've already used it more than Siri


If Siri would just respond to me at the same volume level, that would make her 10x more useful to me. Sometimes I whisper a reminder and she SHOUTS it back to everyone...


I mean can't anyone produce $1B in revenue by selling something for $10 that costs $100? Isn't the hard part doing this profitably?

Spun another way, doesn't this revenue level imply they are burning cash like crazy and might soon go bankrupt?


Depending on how much of their usage is driven by the free public ChatGPT version, I wouldn't be surprised if they could close the loss gap if they switched to paid only. I believe enough people are hooked on it that any one of them would be willing to fork out a few bucks a month.


As in every other online service market, that's a really hard sell if your competitors (eg, Bard) remain free.

I think giant companies won't mind burning cash for at least a few years while they figure this stuff out.


> Isn't the hard part doing this profitably?

Correct.


Yes.


OpenAI seems not to want my money. Like Google, Amazon, and Microsoft, they do not accept PayPal. I'm not sure why, because other big companies like Steam and Netflix accept it. I am curious what the difference is; do they not want money badly enough to implement this?

Btw, I do not want a debate about PayPal or other payment methods. I am aware of the issues with PayPal; I am just a dude with a few bucks in there that I could have spent on GPT-4 to test it, but I could not.


What're the issues with PayPal that you're aware of?


They have a big commission for transactions in some countries, and some people have ended up with their accounts blocked and no way to get their money out.

I am just a user, not a merchant so I am not sure what the issues would be from that side of the transactions, I personally avoid keeping too much money in PayPal just in case they somehow block my account for some bullshit reason.

Edit: I would prefer not to go off topic, unless it is related to why some big companies refuse to use PayPal and others do use it. As an example, I could not buy audiobooks from Amazon, so my money went to a different company that accepted PayPal.


If a company implements PayPal, then people who only have PayPal might pay them money that they otherwise wouldn't. However, people who might have otherwise paid via another means may also now use PayPal. If PayPal has higher fees, this could well result in less revenue.

Also, maintaining each payment gateway has an implementation and maintenance cost. Add it all up, and assuming that the number of potential customers like you that only have paypal is small, and it's easy to see why companies may choose not to implement it.


So Netflix and Steam have PayPal support, and OpenAI does not. Can you guess what reason applies to Netflix and Steam but not to OpenAI?


Who knows. Every company is different, has different customer bases, different histories, different technologies. Maybe they signed a deal with paypal where they get lower fees, maybe they integrated it in a time when it made more sense then now (e.g. before apple pay and google pay were so prevalent), maybe they outsource their payment to a processor that just supports it.

I don't know the details of any of these places, I was just giving a reason why a company may not implement a particular payment processor. It's based on a balance of factors, not just a simple "support X get more paying customers."


Why would they only accept credit cards? Stripe can do IDEAL just fine. Getting a card just for this service is a waste of money.


It costs a lot less to reverse or cancel credit card transactions.

Also, all sorts of insurance and financial products are only available for credit card transactions.

Unless you are a bank or something similar, it makes complete sense to just support credit cards.


iDEAL transactions are final. We are really spoiled in the Netherlands with such a simple system. If you want your money back, you write to the merchant. If you rent a server to do some calculations (hallucinations) for you, you would have to prove the digital product was not what you ordered. You are free to give refunds when you like, of course.


Good for OpenAI. They took a lot of this fantastic machine learning research and made a really useful service that a lot of people like. Really glad that ChatGPT is a thing and hope they have more success.


Considering the speed at which search results have been turned to trash, I'm not surprised they generated a lot of revenue off that feat.


If one wanted to invest in OpenAI or companies leveraging their technologies, what would HN recommend?


You're too late, the gains are already priced into nvidia

Maybe choose an industry you think will become automated the quickest: whatever business can lay off all its labor first can distribute the newfound profit to investors instead


Buy MSFT? They have a good investment in OpenAI and their involvement may bring growth in their stock.


Because MS owns half the company, you could invest there but would also be investing in the rest of their product lines. If you are an accredited investor, Microventures and perhaps some others allow investment in pre-IPO shares.


You can't invest in a non-profit. j/k


OpenAI is less good as an investment than their revenue curve would suggest because the US government is likely to cap the size of future training runs (to reduce AI extinction risk) and there are plenty of open-source offerings suitable for the work of fine tuning and otherwise exploiting the economic potential of models of the size of GPT-4, so it is hard to see how OpenAI would be able to construct much of a moat by which to protect their prospective future profit margins.

The idea that AI research must be regulated to prevent harms, including human extinction, is ridiculed on this site and in Silicon Valley, but policy is decided by representatives for the entire US voting population, not just the technologists of Silicon Valley, and polls show widespread skepticism in the broader population about the wisdom of continuing to create ever more capable AIs. Also, the people lobbying for a cap are disciplined, smart and well-funded.

Companies that focus on applying the models we have now will probably do OK (e.g., Google's using something analogous to GPT-4 to improve their Search service except that unlike GPT-4, Google's model can cite its sources) but OpenAI is not doing that: their focus is to increase capabilities trusting that it will become clear in the future how to use those increased capabilities to produce economic value, but that plan doesn't work very well if the government imposes a limit on the increase of capabilities.

Don't say I didn't try to warn you.

ADDED. The leader of OpenAI says that it cost OpenAI roughly 100 million dollars to provide the computing resources necessary to create GPT-4. Once that "foundation model" was created, it could be run (deployed) on much humbler hardware; some open-source competitors to GPT-4 for example can run on a beefy Macbook. Naturally there is no practical way to stop people from running a model once it has been released to the public, so regulation will target the "training runs" such as the aforementioned training run that produced GPT-4.

The point is that there are many ways to "fine tune" a basic model to increase its economic value. GPT-4 by itself for example is useless as a search engine, but Microsoft fine-tuned it to produce something (Bing Chat, a.k.a., Sydney) that has some hope of becoming a popular search engine. You probably want to restrict your AI investments to companies good at this fine-tuning process.


We've been waiting over a decade for regulation to come to the social media space. Why do you think the government will be any different with AI?


The embrace of social media by the mainstream population was probably a mistake, but it is a mistake that our society can recover from whereas if humanity makes a mistake with an AI that is more capable than the most capable human institutions, there's no recovery because there is a good chance there is no longer anybody left alive.

More to the point, decision makers, e.g., in the US Congress, who were contemplating regulation of social media 10 years or 5 years ago knew that any harms caused by social media could be recovered from and can be probably persuaded that a mistake during "frontier" AI research probably cannot be recovered from.

The people pushing for regulation of AI research do not need to convince the decision makers to the point of certainty. If the average decision maker for example comes to believe that there is a 10% chance that AI research will kill everyone (if left unregulated too long), they will shut down "frontier" AI research even if it means lots of Silicon-Valley types and AI-sector investors get very sore at them, because most decision makers (at least in the US) have grandchildren or the hopes of having grandchildren one day.

This is in contrast to the situation with social media 5 or 10 years ago, in which if a decision maker is only 10% certain that social media is harming the population, he's not likely to see that as a sufficient reason for making enemies in Silicon Valley and in those invested in Silicon Valley and as sufficient reason for doing the work of getting a law passed.


Same as investing in SpaceX:

you invest in a company that has an investment in SpaceX, for example by buying Google stock.


Nvidia.


Do they have a plan to pay royalties for all the copyrighted materials they obviously ingested into their system?


Probably not, given that model training is considered Fair Use under US copyright law.


"Fair Use" is not a thing that an act automatically falls into, it is a blurred line that can be argued in court (see what I did there?)

Being that fair use provisions in general protect "criticism, comment, news reporting, teaching, scholarship, and research" [0] I really don't see what it has to do with a commercial entity reproducing the content of creative work.

In particular I feel that OpenAI loses in the following test:

"4: Effect of the use upon the potential market for or value of the copyrighted work: Here, courts review whether, and to what extent, the unlicensed use harms the existing or future market for the copyright owner’s original work. In assessing this factor, courts consider whether the use is hurting the current market for the original work (for example, by displacing sales of the original) and/or whether the use could cause substantial harm if it were to become widespread."[0]

FWIW I support the abolition of copyright, but I think OpenAI should be lobbying to change the law if they want to argue that everything is a remix and no one owns their creative work.

[0] https://www.copyright.gov/fair-use/


It's not. Maybe it will be, but at the moment there's no clear characterization: it could be fair use, or the biggest copyright infringement in history. Courts will tell, unless the legislators take a step before that.


Not a lawyer - just a guy in this space who also has a lifelong passion for hip-hop/rap/pop.

This case is relevant [0]. I think there is a pretty strong argument that just about anything these models spit out is what the music industry would call a "sample" or "remix" that requires clearance and royalties.

The Gaye case kicked it up a notch, winning damages for the "feel" of a song.

[0] - https://ethicsunwrapped.utexas.edu/case-study/blurred-lines-...


The music industry thinks every sound is a remix at this point; it's a pretty bad example of where we want to go with copyright and IP, if you ask me. Most artists get jack shit, every single thing is considered a remix or sample, and the majority goes to the few at the top.

If you simply hum a tune you make up on the spot a music industry lawyer would probably declare you owe them money.


You may be right and I don't like it either but what matters is they win court cases.


No, but they do have a plan to lock down use, make people pay for it in ever-increasing subscriptions, and retain a perpetual license to all outputs.


This might not age well: "OpenAI likely to go bankrupt by the end of 2024: Report" [1]

[1] https://www.livemint.com/ai/artificial-intelligence/openai-l...


Higher-than-expected revenue does not imply positive cashflow.

The economics at this scale are complicated.


You can spend yourself into the ground at any scale. :-)


revenue is not profit


> … The Information reported, citing an internal source.

Luckily the projection is from an unbiased and reputable source that can be held responsible later.


Original article, but I don't know how to pass its paywall: https://www.theinformation.com/articles/openai-passes-1-bill...


$80M MRR extrapolated out somehow becomes over a billion in revenue


Unsurprising, as AI is the hot trend for 2023, despite the confounding reality that most if not all "AI" has largely been a parlour trick that writes sentences similarly to humans, sometimes.

Before AI it was cloud. I'm starting to think that now that there's nothing left to sell, we just invest in memes and ideas and form a tacit communal agreement that the outcome is what we want.


I work with AI daily in the form of Github Copilot and ChatGPT. It's not a parlour trick, it's become an indispensable tool. I feel handicapped when my internet goes down and I don't get Copilot suggestions as I type.

I've been programming for over 20 years, and this is the biggest single productivity jump I've experienced in my career.


I don’t find your LLM takedown that convincing, but AI is more than text. I imagine you also find the music generation, text-to-speech, speech-to-text, language translation, image generation, and video creation to be parlor tricks?



