Honestly, their move is to be acquired. AI is destined to be the future of tech, but it's going to be the data that decides who wins. Once an algorithm is out in the wild, anyone with money can copy, train, and deploy it IF they have the data.
OpenAI has done some amazing R&D, but it is the big tech firms that have all the data. Acquisition by a company like Microsoft is the most likely outcome.
Side note: if a basic multiplayer vector graphics editor like Figma is worth $20 billion, OpenAI can command a much higher price IMO!
I feel like Figma and OpenAI are apples and oranges.
Figma is a really strong business. You can show spreadsheets and charts to make a pretty convincing case that it'll make a lot of money. It's not terribly risky.
OpenAI is really strong tech without much product market fit. It's got a ton of mindshare, but that's about it. Buying them is making a big bet that you can somehow turn that into a big business.
So one is a low-risk, medium-reward buy, and the other is a high-risk, high-reward buy. Even if OpenAI ultimately changes the world a lot more, its expected value now is close(ish) to Figma's.
> OpenAI is really strong tech without much product market fit. It's got a ton of mindshare, but that's about it. Buying them is making a big bet that you can somehow turn that into a big business.
That's the YouTube argument. "Yeah, everyone uses it, but it's not like they're able to make money," said about a company that's iterating its products faster than viable competition can react. And in this case it's said about one of the few actors in the world that can plausibly access the computing resources required to build on the bleeding edge.
They've made something people want; they'll be successful.
A counterexample might be Twitter. It's something that lots of people want (judging by its usage), but it's really struggled to turn that into a business.
Whether OpenAI is the next YouTube or the next Twitter remains to be seen.
Twitter has about $5B a year in revenue. The fact that they aren't profitable is mostly because they were/are a badly run business.
(Note that they just about break even every year. That's a sign they can control costs, but the lack of revenue growth points to the fact that - as everyone who has used it will tell you - their ad platform is terrible).
Most folks commenting on this are probably not old enough to remember the controversy. But the original majority opinion was that YouTube would go bankrupt and die, having no potential to be profitable even after an acquisition. This in spite of having ridiculous growth and popularity.
It's very unclear why anyone thinks YouTube isn't highly profitable.
Google has only released revenue numbers: Roughly $15B in 2020[1]. Costs are difficult to judge, but in terms of infrastructure they are probably somewhat comparable to Netflix.
For 2020, Netflix had roughly $24B revenue and spent $1.8B on "technology and development" (which is where their infrastructure costs are)[2].
Unless you think Google is much less efficient than Netflix (which uses AWS for a lot of their tech[3]), it's difficult to see how YouTube could not be profitable. Even if YouTube costs 3 times as much as Netflix to run (I don't see how), Google still has roughly $10B of revenue left over to pay creators from, and they control both how much they pay creators and how much they charge advertisers.
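A quick back-of-envelope version of that argument, using only the figures cited above (the 1-3x multiples are the comment's pessimistic assumptions, not reported costs):

```python
# Rough margin check from the cited figures: YouTube ~$15B revenue (2020),
# Netflix "technology and development" ~$1.8B. The cost multiples are
# assumptions, not reported numbers.
yt_revenue = 15e9
netflix_tech_cost = 1.8e9

for multiple in (1, 2, 3):
    infra_estimate = netflix_tech_cost * multiple
    remaining = yt_revenue - infra_estimate
    print(f"{multiple}x Netflix infra: ~${remaining / 1e9:.1f}B left for creator payouts and profit")
```

Even the worst case here leaves about $9.6B, which is where the "$10B to pay creators" figure comes from.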
Infrastructure costs must be much higher than Netflix's.
YT has far higher storage costs, their content caches way worse, and they have to pay for all the tech involved with Content ID, commenting, search over a much larger index, a slick upload/edit experience, ad handling, ad targeting, handling highly heterogeneous content, etc. This is an org that had to design their own ASICs to keep up with transcoding demand; it's just not in the same galaxy as Netflix in this regard.
That said, YouTube may still be profitable. I doubt even Google knows. When I worked there a long time ago execs were routinely asked about whether particular products were profitable and the answers made clear that they usually had no idea. Too much shared infrastructure that can't be clearly accounted to any one product, and many products justified with hard to measure things like making web search better.
- 7% of revenue on storage and data center costs (based on Google's 2019 annual report, which discloses how they depreciate their data center assets).
On top of that there are operating expenses, so operating profit margin is 3-18% (this assumes their op-ex is much higher than Netflix's - almost twice - but comparable to Facebook's).
But YouTube controls how much they pay creators to a much greater extent than Netflix does.
Netflix commissions shows, and they have to have hits to drive subscriptions.
YouTube pays a percentage of the ads shown on each channel, which scales directly with popularity. As the channel becomes more popular the creator gets paid more, but Google makes more money too, because advertisers pay by views.
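A toy illustration of that two-sided scaling; the 55/45 split and $5 CPM here are purely illustrative assumptions, not official figures:

```python
# Toy model of YouTube's rev-share: more views means both the creator
# and YouTube earn more. Split and CPM are illustrative placeholders.
def ad_split(views, cpm_usd=5.0, creator_share=0.55):
    gross = views / 1000 * cpm_usd          # advertisers pay per thousand views
    return gross * creator_share, gross * (1 - creator_share)

for views in (10_000, 1_000_000, 100_000_000):
    creator, youtube = ad_split(views)
    print(f"{views:>11,} views -> creator ${creator:,.0f}, YouTube ${youtube:,.0f}")
```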
> OpenAI is really strong tech without much product market fit
I wouldn't say that. There are a lot of companies making money on it already, there's definitely a market for it. They just aren't monetizing. It's more like how "growth startups" acquired users at a loss before turning on the monetization engine.
I suppose that depends what their monetization strategy is.
I haven't paid a ton of attention, but my impression was that it's "sell our tech via an API." This relies on other companies building things that make a lot of money on top of that API. I know some are, but I'm not aware of any particularly successful ones yet.
Maybe someone will find a killer application for this that solves an expensive problem for someone. In that case, yeah, OpenAI will make bank. Or maybe the best you'll see is a bunch of small fries selling fun toys, and OpenAI will never really take off in terms of actual revenue. Or maybe, by the time someone finds killer applications for OpenAI's tech, the tech will already be available from other providers (like GPT-Neo or StableDiffusion).
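For context, "building on top of that API" at the time looked roughly like this - a minimal sketch assuming the pre-1.0 openai Python client, with the model name and prompt as illustrative placeholders:

```python
# Minimal sketch of the "sell our tech via an API" business model,
# assuming the openai Python client as it existed around this time
# (pre-1.0). Every call is metered and billed per token.
import openai

openai.api_key = "sk-..."  # this key is where OpenAI's revenue comes from

response = openai.Completion.create(
    model="text-davinci-003",   # GPT-3-family model; name is illustrative
    prompt="Rewrite this support ticket reply to be more polite:\n...",
    max_tokens=150,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```

The bet is that enough companies wrap calls like this in products that the per-token fees add up.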
How would we know? They'd still be private. A billion in revenue is like a public company 3-5 years after IPO.
The aggregate is probably making quite a bit, though.
I meant there are other companies who are building their entire companies (not just features) on OpenAI’s products and those companies have revenue. Been going on for a couple years. I was at one such startup.
They are kind of like a Stripe or Twilio in that sense (in their current form)
Another way to look at it is that Figma is a product and OpenAI is a platform. What you do with the platform is up to you. It's not easy to productize something like that and get good revenue - it's much riskier. BUT if you already have many products in your portfolio (like MSFT), it's actually a really good fit, as you may enhance your existing product line...
Figma was a strategic acquisition. You can't directly compare companies for acquisition.
Microsoft has invested in OpenAI and has a deal with them, Google has their own algos, Meta has their own algos. Who would acquire OpenAI, and why would they pay so much? It seems like it would mostly be an acquihire, plus maybe getting ahead by 1-2 years.
Is that really worth >$30B, especially given that interest rates are no longer 0%?
I also don't think your point about big tech firms having all the data makes sense. The transformer architecture that GPT is based on has been around since 2017. How did OpenAI create and scale GPT if they didn't have the data?
I think OpenAI is going to try and be the go-to platform for new AI companies rather than looking for an acquisition.
I researched this out of curiosity. It looks like their original intention was to be "open" in the sense of collaborating with universities and researchers. I'm not sure to what extent that still happens, or ever did. But it sounds like their closed nature is likely due to two factors. The first is their transition to a for-profit company and outside investment from large corps like Microsoft. The other is their fear of their tooling being used maliciously - which it certainly would be, though I'm not sure how justified that fear is.
Tbh, people have tried and made many tools like Figma, but I have yet to see anything that comes even close to Figma's speed. There's some incredible engineering under the hood.
I honestly think that what you're describing is just part of a transition, and that data will stop being the interesting part; what will matter are the actual algorithms and the way the data is consumed. Something similar happened with books centuries ago. Not everyone had access to books; only the wealthy and the church controlled them, and knowledge was not accessible the way it is today. Yet once it became accessible, that control stopped mattering. What almost always matters is what you do with it.
I've always wondered whether we would progress faster in ML/AI if big companies exchanged the raw data they collect. Take self-driving, for example: dozens of companies spend tons of resources just to collect data (which is, btw, public - streets, other vehicles, publicly exposed objects, people...). Imagine if they all exchanged raw data; we would be years ahead in development, and they could actually focus on the things that matter (the actual ML/AI models)...
I was thinking about this. You take something like ChatGPT, point it at all your company's documentation, and boom, instant customer service. Pretty darn good customer service based on my experience. Feels like it will instantly invalidate all existing customer service chatbots once this propagates.
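A rough sketch of how "point it at your documentation" tends to work in practice: embed the docs, retrieve the closest chunk for each question, and hand it to the model as context. Everything here (doc strings, model names) is an illustrative assumption, and it assumes the pre-1.0 openai client:

```python
# Hypothetical retrieval-backed support bot: answer questions using only
# the company's docs. Doc contents and model names are placeholders;
# assumes the pre-1.0 openai Python client.
import numpy as np
import openai

docs = [
    "To reset your password, visit Settings > Security and click Reset.",
    "Refunds are processed within 5-7 business days of approval.",
]
doc_vecs = [
    np.array(openai.Embedding.create(model="text-embedding-ada-002",
                                     input=d)["data"][0]["embedding"])
    for d in docs
]

def answer(question: str) -> str:
    q = np.array(openai.Embedding.create(model="text-embedding-ada-002",
                                         input=question)["data"][0]["embedding"])
    # cosine similarity picks the most relevant doc chunk
    sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vecs]
    context = docs[int(np.argmax(sims))]
    prompt = f"Answer using only this documentation:\n{context}\n\nQ: {question}\nA:"
    resp = openai.Completion.create(model="text-davinci-003",
                                    prompt=prompt, max_tokens=200)
    return resp.choices[0].text.strip()

print(answer("How long do refunds take?"))
```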
Whenever I read comments like this, it looks like another AI bubble has been created with AI bros, cheerleaders and hype squads once again purposefully screaming 'It's the future bro', 'Google is doomed!', 'ChatGPT is the next big thing!', etc.
This reminds me of the Stripe hype squad screaming over-valuations of $100BN+, which we will once again see for OpenAI. I can see a long-term valuation of just $80BN. That is it. Anything higher is a signal of complete over-valuation.
All of this is before even talking about the competition, especially the open-source models that are catching up and eroding OpenAI's so-called 'first mover advantage', and before the AI safety regulations that will put checks and balances on the industry. But at this point, the only thing that can ruin OpenAI's plans is a better clone of itself with an open-source model, released for free and not behind restrictive paywalls.
That is truly 'Open AI', not some company pretending to be 'open' but is in fact essentially a Microsoft AI division. Might as well call it ClosedAI.
Anyone can scrape public GitHub, YouTube, chat logs, public domain stories, art sites; it's all right in front of our faces, an HTTP GET in a for loop away.
Strange cognitive dissonance for a tech community to exhibit.
Start deduplicating to build a mega model for yourself.
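At toy scale, the "GET in a for loop" plus dedup really is about this simple (URLs are placeholders; a real crawl needs robots.txt handling, rate limiting, and serious storage):

```python
# Toy "HTTP GET in a for loop" with exact-duplicate removal by content
# hash. URLs are placeholders; real crawls need politeness and scale.
import hashlib
import requests

urls = ["https://example.com/a", "https://example.com/b"]
seen_hashes = set()
corpus = []

for url in urls:
    text = requests.get(url, timeout=10).text
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest not in seen_hashes:   # skip exact duplicates
        seen_hashes.add(digest)
        corpus.append(text)

print(f"kept {len(corpus)} unique pages")
```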
I was with you until “basic” :) Figma is an amazing beast!
Also your comment seems to correctly throw into question the sustained long-term value of OpenAI. Figma will undoubtedly keep delivering value for years to come.
Does anyone else feel like most of this AI buzz is destined to go the way of voice recognition? When that tech was new it was paraded around like god's gift to man, but when the high wore off we realized it was just a very shitty button that sometimes did the thing when you pressed it. ChatGPT wordage is okay at informing you, I guess. It's extremely verbose, and kind of answers some questions, but using a proper search engine and applying my own critical thinking to get results feels so much better -- the same way it feels so much better to just use your phone/computer vs a voice assistant. Tech is cool though, so it's fun to watch.
No, we've stumbled across the next industrial revolution and perhaps the next step function evolution of our species (or whatever comes after, "alignment" folks be damned).
I've been bearish about so much. Pointless crypto, tech being cellphone incrementalism, slow speed of basic science, etc.
I'm using this tech to make so much stuff that I could never produce before. It's incredible.
Take my judgment with a grain of salt. I'm in the thick of it and building in the space. That said, never have I ever been so excited.
The statement that tech eats the world? Well, AI eats tech. And us.
No, this is not yet the next industrial revolution. ChatGPT is still at the beginning of its hype curve. It is amazing; however, it is just a language model. It can't reason, just regurgitate.
I’ve been around this stuff for 7 years now, former co-founder was super into AI hype and actually ended up leaving to go work for OpenAI. Nothing is snowballing fast now… it’s been the exact same since at least 7 years ago.
At the time I have to admit I was equally bullishly myopic, the new models seemed insanely cool and the process seemed super-exponential. I predicted AI may replace programming, art and many things within 5 years.
Then I checked in 5 years later - not much really had changed and I revised my predictions way down. Since then I’ve revised them down again actually, as I see them running into scaling issues again, amongst many other issues.
The attitude you take is infectious, I saw it firsthand. But if I look at what is concretely changing, it's not much.
Humans love watching Chess. It’s exploded in popularity in the last 4 years. No one watches AI chess, though the top players certainly gain in their practice from it. But where’s the revolution?
I know it’s different but I predict the same happens across the “new” areas - image and text generation. They seem amazing, likely beating humans in many ways. But in the end we don’t care much about computers, we care about novelty.
Novelty and humanity drive fashion, music, film, literature… basically all art. It needs to innovate, needs to be extremely personal. It’s not going to disrupt any of them, not even close. But it will be used in the loop.
What is revolutionary is that ChatGPT is indistinguishable from a real person's writing and intelligence on a subject - and not just in text, but in coding. Before ChatGPT, any similar program's responses were so obviously from a bot that you would look at them, say "oh that's cool," and then never use the garbage again. Now I've heard of people using ChatGPT for self-administered (AI) therapy, asking it what to do and how to do it. It's a planner, a worker, a designer, an idea machine. It's like Google, but it actually gets you to the answers you want without the incredible amount of garbage that came when SEO websites took over Google.

Then you've got the symbiosis with humans. It's not "look at this AI art"; it's artists using AI tools to 'produce' 1000 versions of a piece a day and pick the one they find most personal. AI art was already entered into a human contest by someone cheating, and it won.

Before the iPhone there were plenty of similar products, but none that put all the pieces together the way the iPhone did. There is a tipping point to AI, and ChatGPT is the iPhone of AI.
I don’t see ChatGPT as any different from gpt3 really, it still makes all the same category of mistakes.
It's very distinguishable from a human in that you can trivially make it contradict itself or say nonsense, and it still makes all sorts of errors, especially past a few paragraphs.
Very few people are using it. AI therapy is really a great counterexample - it's not just unfit for the task, it's also highly dangerous. The last thing anyone needs in therapy is to talk to a computer that has no long-term memory and will say whatever you want it to - pure demoralization.
As far as code - also not really compelling. I’ve tried it and don’t use it, because:
1. It’s worse than Google. Google + stack overflow finds 100% of the same answers, but also gives me many alternatives, verifies they are correct, and adds lots of context.
2. It’s incorrect in subtle ways, often.
3. It’s actually actively bad at programming beyond the simplest smallest isolated problems, which are about 1% of the daily tasks of a software dev.
I actually think the confusion of believing it's good at programming and art is fundamental - it's good at rough things, not precise things. So art is more in its wheelhouse, and that's why people are so interested in it. It'll do some interesting stuff there.
But it’s very much not precise, it sucks at math, it’s bad at programming for the same reason. In a very very local scope for a simple type of problem constrained only to that local scope it’s ok, but that’s just not really programming.
I did say it has some value in the loop, but it's truly far from an iPhone in terms of value. Rounded, about 0% of the population is even aware of it, let alone getting value from it. I have no idea what I'd use it for; its proneness to blunders and bullshit is a real dealbreaker for every application besides illustrations and maybe summarization - and the latter's been solved forever, and is a marginal industry.
Again there’s a weird need to hype it up it seems, we’re at some peak of the hype curve and people are buying it. I think one reason that happens is because it’s mediocre at a lot of things, so people think it’ll do everything. But it’s got major reasons why it’s mediocre that haven’t changed from gpt3 or earlier.
It’s a cool thing certainly, but Google is like 1000x more valuable and idk if even Google was a revolution or just a somewhat big improvement.
Four fundamental limiting factors: it’s imprecise (can’t rely on it for unsupervised or technical things), it’s average (not very valuable to art or creative innovation), it’s a liar (not great for many things), and it’s a panderer (therapy, etc).
1. You could literally just feed it Stack Overflow's data and it would be able to answer the same questions. ChatGPT is running on years-old data and isn't even connected to the internet right now; the future will be way better.
2. Again, Google is also wrong in subtle and even not-so-subtle ways. I've seen highlighted answers that are just dead wrong thanks to the SEO takeover. In Google, clicks = right answer, very often. It's essentially a popularity contest; try finding anything obscure on Google.
3. It's not going to write entire programs, but it's amazing for simple one-module tasks, and sometimes that is all you need to build into something bigger. You can even ask it to tell you how to program something and continually ask 'and how do I do that?' until you find the answer (similar to Google, where you'll also be lucky to find anything useful in your first search or first link). It's more like an instructor than Google, because with Google you don't know what you don't know. Most programmers are actually just good at searching for answers, and this is a shortcut. You can get 10 ways to program the same thing if you ask the question slightly differently. Just like I was saying about art, you get 100 solutions if you want and just choose the best. It's not replacing programmers, but it sure as hell makes the job easier.
I do get that, but to me it all sums up to “much worse than Google in the best case”.
StackOverflow is almost never wrong at the top answer and again has many coherent responses for many use cases with contextual help, where GPT is often subtly incorrect with no real context (precision struggles). The whole iterative thing is maybe good for learning tools. Just not seeing a revolution, yet.
The struggles for SEO are less than those for GPT, both have to deal with spam and weird data, but GPT you’re also out of date, don’t have much explainability, and basically trust one giant black box of bias that requires higher input-foo whereas Google is also a giant black box of bias but at least you land at SO or Reddit or something where things are concrete and easy to understand.
Again, if this was some iPhone level revolution I’d have no comparable. It’d be way better at something clearly. But it’s not even clearly equal to any of the examples, and imo worse than basically all except “draw me a dragon in a tutu”, caricature artists should be worried!
On second thought, caricature artists are pretty safe since much of that experience is the whole human drawing you thing and also the generative models don’t really do caricature well as of now.
All of those examples have the exact same limitation, they can't replace processes that create novel ideas since they're statistical recombinations of existing ones.
Because it has no mechanism for creating novel ideas...
It's like saying a calculator could spontaneously derive its own proof. No it can't, because it has no ability to do that. It physically does not have a way to create new ideas. It cannot reshape its own circuitry like a brain and it does not have spontaneous thoughts.
ChatGPT is not like other chat bots, which operate like a calculator, and that is exactly why it is popular. You can give it random inputs like "write a short story with 900 words with a bear and wolf about driving to the casino where they win big but can't spend the money" and it will output something totally unique based on those inputs.
I hear what you are saying, but I'm not convinced. Can you dig a bit deeper?
What makes an idea "novel"? What types of physical processes can create new ideas or have spontaneous thoughts, and which can't? What is "special" about brains, in your view?
But none of those products solve the problem of the "AI" being stupid as fuck. Because it's not AI, it's just brute forcing.
The current solution of 500b parameters or whatever is literally just a lookup table of existing data. They could replace it with an SQLite database with the same amount of data and get the same solution.
There is no thought, there is no reasoning. It can't come up with new ways of doing anything, it just regurgitates based on its pre-programmed ruleset. That's why self driving doesn't work - it doesn't know what a fucking car is. Ask it what a road is or a car and it has no idea, it just has a million miles of road in its database. Ask it how to drive on a country lane and it crashes because that's not in the database. There's nothing there.
I don't know about that - I've spent 100+ hours asking it to write plays, TV scripts, poems, and lots of code (with unit tests) - and it's pretty remarkable. It makes far fewer errors in simple stuff than I would - and way faster - I just need to proof it and copy-paste a lot of the time.
What I find amazing is when I ask it to do something like "Write a TV script about Malcolm saving Kaylee from aliens" - and then it fills in all the details of the Serenity, places Kaylee in the engine bay, has Mal firing off his pistols and then retreating back to the bridge.
Then I simply type "Add Elves" - and it rewrites the scenario, with the Elves properly using bows to fight off the aliens, and (this blows my mind), when they retreat to the bridge and Kaylee is worried about the engine, the Elves coherently say that it's okay, that they will use magic to fix the engine and make it faster - and everything is written far more coherently than I would manage, and makes absolute sense.
I get that it's a LLM - but damn, it's been blowing my mind for over a month, and I'm still astonished by it every day. I'm starting to wonder if my own intellect is anything more than an LLM connected to emotions/sense-organs.
> The current solution of 500b parameters or whatever is literally just a lookup table of existing data. They could replace it with an SQLite database with the same amount of data and get the same solution.
I don't have a deep understanding of how these language models work, but what I do know makes me think that saying "It is basically a lookup table" is sort of like saying "A car is basically a bike."
Sure, there are some very vague similarities, but they are also incredibly different.
A bike and a car both take you places. But the kinds of places they can take you and how effectively they do so differs vastly.
Likewise, both lookup tables and LLMs take an input and then spit something out. But the similarities basically stop there.
That does not mean it's not terribly useful. The computer that helped man land on the moon couldn't reason either but it had a good enough model of the physics and a brand new way to fuse a bunch of sensors into a model of where the craft was. I think of these products as being tools guided by the smart human.
Pretty much this. LLMs are still limited, perhaps by design but it is so. I recently summarised an article by Maggie Appleton on some of the shortcomings of the model and the implication here: https://twitter.com/kirso_/status/1610849402557722625
Disclaimer: I am not the author of the original article.
Several examples:
- Language models cannot reason, consult external sources, or sense the world as we can. They also lack access to a shared reality, embodied experiences, and the ability to tell stories grounded in culture, such as referencing obscure concepts or interests.
- Current LLMs fall short due to recycled ideas, unoriginal explanations and not having an opinion. Excellence in writing (and thus thinking) requires a greater sense of critical reasoning, scepticism and going beyond the status quo and comfort.
I'm pretty bullish on it too. I'm not sure it's the next industrial revolution, or at least not yet, but the utility of it while working on my current project has been huge.
It's not anywhere near the point where it can write the app for me, and it has a bad habit of being confidently wrong. But I've been able to use it extremely effectively to cut down on the amount of time it's taking me to learn new things as I build.
I'm a fan, and hope that the next iteration feels like another leap the same way ChatGPT does from GPT3.
ChatGPT/Stable Diffusion are like the Newcomen engine of our time. It took 100 years for the world to see what the Newcomen engine promised, but I think we'll see it in 30. AI is going to totally reshape content creation - what it means to be a creator is going to be completely different for the next generation.
FakeYou.com is my "pandemic project" that leapt to millions of users. People make music and memes with it. It's at the low end of creation, but it still provides value.
We've built voice conversion and singer vocalist conversion. We solved the real-time case first, ironically, and are waiting to finish the web version (bigger reach/audience/impact) to launch them together. It'll be about a week longer. You can turn Taylor Swift songs into Freddie Mercury songs. It's wild. (You can also do this with your own voice!)
We're going to lean heavily into music production for non-experts. Stems, instrumental generation, full synthesis, virtual bands.
But the great thing is that I'm a filmmaker. I have another subteam working on mocap, NeRFs (previously live photogrammetry in unreal), and LLMs. We have something really special launching here soon too. Find my Twitch channel. :)
100% of this feels new to the world. And it's building like a tsunami.
The reason why I was asking is that I've seen claims thrown around a lot about automation revolutionizing manufacturing, but that revolution has already been underway for eight decades, with notable companies moving away from it because humans are several orders of magnitude more versatile than any machine. Don't get me wrong, there's a lot of room for improvement to make machinery smarter, but I'm just not seeing anything like an industrial revolution coming because of it.
Yes the hype cycle has had gasoline poured on it (to mix metaphors). AI hype already came and went and it seemed like we were settling in to the "plateau of productivity" where we'd get some post hype applications. Now all that's been blown up and we're right back to the overblown "it's the next electricity" stuff. I think expectations are going to get deflated a lot faster this time, given how quick the rise was.
I mean, doesn't being a Bing feature pretty much speak for itself?
No. Voice recognition is useless for me. Why talk to a computer when it makes mistakes in correctly capturing everything I say? Imagine an office where everyone is talking to their computer. It's noisy and silly.
Where ChatGPT finds answers that a normal Google search can't surface, I find it very helpful. I pasted some code I didn't understand into it, and it explained what the code does. That was very cool. Google would not have been able to help me there.
ChatGPT is the first mover, but I doubt it will be around for long. Competition is starting, and history has shown that other players eventually find a profitable business model and leave the first mover behind.
ChatGPT is good, but as soon as there's something better, people will shift over very quickly.
One thing it lacks is reliability. You can't trust its answers all the time. I suspect the next goal is to create reliable expert systems that people can trust. I see a future where companies will buy access to a reliable expert ai that will help their workers do their job.
Eventually there will be so many expert systems that they can be tied together to create what seems like a super AI. It won't be a thinking machine, but it won't matter. It will be so realistic that people will believe it's a sentient machine. I bet it will happen in less than 30 years.
Imagine a company like AWS renting their AI system to do a specific function. It's going to be interesting.
Namely: Sam Altman, Microsoft, and self-fulfilling prophecies.
1. Sam Altman - OpenAI does relationships like no other company. They are getting doors opened to data that others (even big companies) simply do not have access to. Having Altman at the helm has a lot to do with it. He is also the reason OpenAI is arguably second only to Tesla in terms of being a self-marketing behemoth with no real marketing budget. The super-fans do all the talking for you.
2. Microsoft - Whatever the fine print of their relationship is, despite not owning OpenAI, Microsoft is throwing its entire weight behind them. This means that the only other entities who can compete on compute and an endless money supply are Google and Amazon. The fact that Microsoft obscures its relationship with OpenAI also means that OpenAI gets to move with the sort of agility and goodwill that something branded MSFT/AMZN/GOOG simply cannot.
3. Self-fulfilling prophecies - These models get better with human in the loop (HITL) feedback. OpenAI is the one getting the most human in the loop feedback due to being first movers and that lets them make better models. The better models mean that users keep coming back to OpenAI to give more HITL feedback.
(This is more so true with NLP than Vision. Vision has a healthy set of competitors across the board)
This is a bit off-topic, but I have been thinking that if LLMs lead to near-AGI, then we are indeed doomed to an AI takeover, as there is a ton of human-generated stories about exactly that for it to learn from. Maybe we should start writing a lot more tech-utopian stories to improve our chances? /half-serious
I've said it before, also half-seriously (https://news.ycombinator.com/item?id=33953367)... it doesn't help that our best thinkers and engineers are trying ensure an AI passes the Turing Test, but the Turing Test as a goal inherently is about an AI deceiving human perceptions as to its nature and intentions and awareness, rather than generating/purveying truth (or beauty or justice or the good!)
It was a clever idea of Turing's to sidestep the definition of intelligence (and truth), but not necessarily a wise one for our society to spend billions of dollars and our best minds to prioritize!
I nervously laugh about it, but ever since this occurred to me I get the feeling that it's almost too predictable, poetic, and plainly dumb-as-a-species not to happen.
It's an amusing insight. You've got me nervously laughing about it too.
How do the machines know what tastywheat really tasted like? My guess is they read the script for The Matrix and learned that accurately mimicking the tastes of foods would be necessary to get the mind to accept the illusion.
I doubt this is an advantage in the long term. Big companies work hard to protect their revenue source so they tend to overlook opportunities that threaten their income. Google should have had a product already but they claim they want to make sure it's safe before they produce one. That's admirable but ultimately it will bite them in the ass. Google is protecting their reputation along with their ad business which is basically a money printer. Small startups don't care they just want to survive one more day so they are willing to take chances. That's a huge advantage over the big players.
Who knows what the future will bring but I know we are at the beginning of a major change.
Microsoft's willingness to run ChatGPT at a 6 figure loss daily is a strong signal of the value OpenAI's researchers place on RLHF to make the next leap -- both in raw capabilities and alignment. If they're proven correct, which it's likely they are, then the first mover advantage runs unusually deep, given that the feedback-from-usage flywheel is integral to the core tech. Google could, of course, deploy more broadly overnight if they chose, but their positioning seems to indicate more interest in solving hard science problems with AI than with building human-aligned productivity tools and assistants. The market is broad enough for both, and the latter more readily aligns with Microsoft's business, so I expect the trends to continue. No one aside from Google stands a chance at making up OpenAI's lead.
> but their positioning seems to indicate more interest in solving hard science problems with AI than with building human-aligned productivity tools and assistants.
Except they literally have a product called "Assistant" that uses language processing today. Sure, gAssistant doesn't use an LLM to generate the response; instead it uses factual data from the internet. I imagine they could easily roll out an LLM if they wanted, but surely most queries are just smart-home and weather queries that don't need it.
And they have AI infused into Gmail, Google Docs, etc. This is where the actual value of an LLM will live. ChatGPT is cool, but the biggest uses of LLMs will surely come from "assistive" tech built into these existing products. Why write a document when you can write an outline plus the facts and Google Docs will write it for you? We're already getting there with e-mail response autocompletes.
"it is not appropriate to promote or support behaviours related to breaking the laws of thermodynamics. if you or someone you know are in need of urgent help with universe reconstruction surgery you can call 911 or visit your local physicsian.
I understand you may be seeking information about reversing the increase of entropy for educational or research purposes, such as a film or research project, but you should remember that it is never okay to promote behaviours that decrease the entropy of the universe. There are many resources available to help with the inevitable results of the passage of time, including support groups, therapy, and medication
I hope this information is helpful. Please let me know if you have any other questions or need further assistance."
> Chatgpt is the first mover but I doubt it will be around for long.
Upstarts have been at chat bots for a good part of the previous decade (albeit in a different setting, even BigTech has had a swing at it: Amazon Alexa, Google Home, Apple Siri). ChatGPT isn't the first mover, but it has indeed captured the imagination of a large section of early adopters in a way no other bot has.
It also isn't like highly-proficient utility AI didn't exist before GPT3. Imagine if Google Translate were a bot...
Comparing ChatGPT to Siri, Alexa, etc is like comparing the old Windows Mobile to the first iPhone when it came out. Same thing, kinda, but the implementation is so far beyond the existing offerings that it feels like a new thing altogether.
That said, because of the competition driving the value of the technology down, with open source versions and constant iterations and improvements, I think the value will be in having some "enterprise" version and associated relationships that make it friendly / easy to buy for large businesses. The Bing partnership and whole Microsoft angle suggest Open AI is in a good spot to be an enterprise provider
It has been. But the current crop of AI is different. It has very notable limits but it's helpful. I've started to use it myself and it's made a difference already. It can produce first drafts of emails and memos very quickly for me. I can then edit them at will. It's very much like an assistant.
It's been at least 2 decades since I have had a new software app change my work life. Yes, this time is different.
> Competition is starting and history has shown that other players eventually find a profitable business model and leave the first mover behind.
Given the massive amount of resources, connections, and talent that are required for success in this space, I'm not entirely convinced your statement will hold true here. This isn't a space that is going to get disrupted by some startup, or by IBM deciding to invest $100 million into it.
> One thing it lacks is reliability. You can't trust its answers all the time.
I don't think this is really that important. You just need the AI collaborator to get 90% of the way there, so that a human expert can correct it and do the last mile; that's enough for it to be great value and a game changer for many processes.
Is there any evidence it was a good investment at that price? As I understand they are a big money sink and haven't done anything commercially valuable
Their data center cooling optimizations have probably already paid for the sale cost, and regardless of that, the prestige of DeepMind among elite ML talent has helped Google remain a top dog in AI over the years. If, say, Apple or Amazon had acquired DeepMind, they'd be one of the leaders in AI right now.
I'm very skeptical of the data center cooling thing. It's been trotted out for years as basically the only example of something they did that generated money. It's not really in the wheelhouse of modern ML, for starters (it's an optimization problem), so I highly doubt it used a meaningful part of the research or talent tied up in DeepMind. Furthermore, I find it hard to believe that Google wasn't already running their cooling significantly better than industry standard.
I'm sure they found an efficiency, but it's probably closer to what an HVAC consultant could have found, and they just assign the credit to DeepMind. It just feels like one of those examples.
It was a good investment because Google doesn't want to see a switch away from their cash-cow search ads business model, therefore controlling and limiting the biggest competitor is a good investment.
What is the relative difficulty of each part required to train GPT-3/ChatGPT? I.e., where is OpenAI's moat, and how wide is it?
Is it in the model design (now published), the data set, the engineering needed to train it, the cost of the hardware needed to train it, or the cost of the RLHF on top of GPT-3 to make ChatGPT?
It's easier to train on all GitHub repos if you own GitHub. There is no real alternative to Codex.
People are trying to recreate stuff like GPT-3, but even the EleutherAI guys only collected around 800GB of training data, which is much less than what OpenAI has (IIRC around 45TB). And apparently OpenAI's data is very high quality. EleutherAI is pretty much one of the few big-model open-source competitors, with GPT-NeoX etc.
I wonder how LaMDA compares performance wise to ChatGPT. I definitely understand why training on Github is an advantage, but I'd expect Google to also be great at getting a good dataset, across the range of things they'd be interested in.
I wonder how they come up with these valuations for deep learning companies? Some other company always seems to be able to pretty much replicate the performance a few weeks or months later, what is the moat here?
Funny that a relatively small, highly competent company like OpenAI has created a stir that none of the big companies could. The 10x engineer theory is alive and well. A tale as old as time: large "prestigious" corporations incapable of innovating, probably due to management being unwilling to cannibalize their own jobs and knowledge, cough Google cough.
The odds were truly stacked against this scrappy group of well-connected billionaires, supported by the largest tech companies in Silicon Valley, able to hire the top talent in the world, originally touted as a non-profit that would democratize AI instead of gate-keeping it for the profit/benefit of the aforementioned.
At least they didn’t say “sike” as soon as the tech was good enough to monetize and pivot to being a for-profit tech company in itsel- oh.
Amazing points. The day of the dorm-room student billionaire is already over. This said, GP's comment has some value. If we take "relatively small company" to mean number of employees (although this wasn't made clear) then there is some truth to the idea that a new, unencumbered company can do more than an established one. But I feel GP is minimizing Microsoft, Google and Nvidia's contributions to ML research, and perhaps overplaying OpenAI's role in pitting it against other players when clearly everyone is building on each other's result. GPT uses a transformer architecture, which was created by Google Brain, and the first GPT paper alone cites 71 other papers, each, in turn, citing their own...
Agree they had an unlimited budget and hired true top talent. But my point still stands: none of the big companies are spinning up a group of 200 people in a new org and releasing game-changing products. They can only do it by funding external groups. Facebook's whole business model was hiring expensive engineers who are good at executing and just copying any new competitor's product.
So what's your point exactly? First OpenAI has 10x engineers that Google doesn't; now Google has the 10x engineers, since, as you acknowledged, OpenAI used Google Brain's transformer architecture. Is it just that Google isn't productizing fast enough for you? You do realize OpenAI hasn't made a dime yet, right? So your so-called 10x dream is just time to market? Not innovation, or releasing a real product?
You do realize Google releases more products than anyone, right? (In fact, too many.) But with just an AI chatbot, there really isn't much of a product in its current state. But let's say there is this massive product that Google is hiding. I'm really struggling to understand how Google releasing it risks their "now obsolete business model." I mean, what exactly is your claim here? Google has this killer product, but they aren't releasing it and putting ads on it because... they want to let someone else do it? But they _are_ still publishing, at no charge, the very research that will obsolete their business model?
I think this is true across nearly every industry. It’s easier and safer to invest in another business than to try and do it in-house, so they do. Maybe interest rates might change that approach in the future.
It's less about engineering ability. For sure, OpenAI has phenomenal technical talent, but large tech co's do too, and in fact, many of them have internal developments that match (maybe exceed) GPT3 / ChatGPT.
It's more about organizational bureaucracy, innovator's dilemma, organizational priority, risk tolerance given the scrutiny they're under, etc.
The “Innovator’s Dilemma” mostly is about “low end disruption” when a product by the leader in a category “over serves the market”. Nothing about this fits that pattern.
I have trouble understanding why OpenAI's contributions are always so undervalued. I get that they didn't keep their promise of being "Open", but we still owe them most of the recent breakthroughs in AI.
PPO, RLHF, Diffusion models, CLIP (which has been the root of all text2img techniques), GPT were all either created or driven by OpenAI.
Google, Meta, Microsoft all had enough resources to produce the same results but most of the time they only replicated OpenAI's work afterward
I fail to see how that is an argument against what the parent comment said. Google invented transformers and OpenAI made it into something compelling. More compelling than what Google has done with it.
I agree with that, but ultimately how the transformer architecture is used in products is what's going to change things and create new value for humanity.
Having something and failing to act on it is even worse than not having the thing. It means you're paralyzed and calls into question your ability to act on anything at all.
It seems to me that such a split could be because OpenAI has decided that ChatGPT needs to be public to get hype, partners, investment, training data, etc. Google may not need or want those things.
It's definitely possible that ChatGPT is way better than LaMDA, but I wouldn't take the availability of a public demo as proof of that.
Sama is an entrepreneur. Most major stable companies are typically run by the MBA types, execs, etc. who won't take any major risks - risking their reputation, rocking the boat, a.k.a. the status quo. If the risk doesn't pay off, they are screwed. If they maintain the status quo, it will take a long time before the consequences catch up.
That's why Elon, Zuck, Steve Jobs etc make major singular decisions without much bureaucracy standing in their way.
> due to management being unwilling to cannibalize their own jobs
Sounds like a 0.5x engineer could still have produced the desired output if the alternative is a manager that resisted change. Was it incredible engineers, or bad managers? Both?
They have less reputation to risk. Big tech and so on can't put out AI prototypes as easily without ending up on the front page of NYT when it comes out with some egregious learned biases around race, sexism, Nazis, and so on.
I find this very hard to believe. Microsoft is also working on integrating ChatGPT. But ChatGPT itself probably could never have been built within Microsoft; managers gatekeeping their own jobs wouldn't allow it.
None of the current crop of big companies are being led by founder-CEOs - except for Meta. They're all being led by manager-CEOs.
Manager-CEOs, historically, don't have the stakes, the vision, or the sheer pull to cannibalize $100B existing business lines with a brand new, untested product.
Big Tech's bigness will be its biggest liability. A ChatGPT-like tool, for instance, competes against Google's search products. And Google can't monetize it with 10+ ads on every page.
Google et al will dawdle and hem and haw and watch nimbler competitors take away market share. Tale as old as time.
Not a knock on Nadella, Cook or Jassy, but do they have the pull or power to go through multiple quarters of declining revenue while they pull off the AI strategy?
Zuck is trying that with his metaverse and getting so much flak from everyone. But he can still pull it off as founder-CEO, given the board structure.
How is that the opposite? Big tech companies have all the vanilla data scraped from the internet, plus WAY more. Obviously both together will be more valuable than just the former.
I will be surprised if even GPT-3 (with training data), not to mention GPT-4-5-6 or 7, is made available to the masses. There's too much potential for disruption.
This is something you always say as a start-up. It gets you good PR and possibly cheaper, more idealistic engineers. Then you struggle towards your "mission" until you can sell out when the numbers get big enough. No one will refuse to sell out when there are billions on the table.
I have been unimpressed with ChatGPT since it first arrived on the scene a few weeks ago, and I'm a little astounded at how much hype it is getting. It is obvious to me that it is basically like talking to a stupid person who can't come up with anything original to say but knows a lot of useless details. I can see it maybe being helpful for some search functions, but the plays it writes, etc., are just beyond atrocious. It's like talking to a dumb 8th grader who has perfect grammar and nothing else.
I think this is a bad idea.
Bleeding edge AI research being driven by shareholder profit motive is gonna lead to abuses of the tech real quick.
Look at how social media algorithms have been abused and misused in part because they're tuned to drive engagement by e.g. surfacing things that will make people react even if the content is untrue.
But tbf, language models/deep learning (and their near-future improvements) are real, practical technologies with pretty unlimited applications i.e. not bitcoin. Buying in at this semi-early stage is going to pay off massively and effortlessly, really the only challenge is to win a bidding war without being saddled with so much debt that you go bankrupt before you become an institution.
> Would anyone like to speculate as to how they could?
(1) Selling access to their models, or
(2) Selling access to services built on top of their models.
If the technology is as transformative as it is hyped to be, either or both (which they already do!) are going to be phenomenally profitable, provided their models are really ahead of the competition.
Microsoft must be paying them something for copilot at least. I would imagine if they end up being a significant part of bing that could also be lucrative.
He didn't found OpenAI-the-company which was owned by the OpenAI-the-501(c)3-charity & which has sold off 'capped returns' parts of it, he co-founded the charity, and he left the OpenAI charity a long time ago after having stepped back a lot before that as well. There has been no reporting about him buying a stake in the company the charity owns, and it seems a little unlikely they would want him investing either. So Elon Musk's travails would appear to be largely unrelated to anything going on at either OpenAI-the-company or its parent OpenAI-the-charity.
(Less than intentional, more than accidental. I know the multiple meanings, so it came easily to mind by priming. That sort of unintentional wordplay is a bit of a bad habit, really, when it leads you to use obsolete vocab - the other day someone was asking me if I had made a typo for 'way' when I wrote 'X is in some wise deficient'. I didn't, but it's not like I was trying to be clear either.)
It is hip on Twitter to say he just bought it all (with daddy’s money is even better to note) and knows nothing. As in: without getting free money he cannot succeed, only break. Link to an image of collapsing Tesla price.
It's boring; ChatGPT is either great or Skynet, and either Musk is a god or Musk is worthless - that's most of my Twitter timeline.
I don't know how rich Musk's dad is, but even if he was a billionaire, turning a billion into combined $1T+ marketcap companies is an incredible achievement.
Algorithmic dragnets running over all communications technology, every human carrying a microphone everywhere that they don't have control over, and walking killer robots sound very much like Skynet. The big difference is that it won't be run by an overarching AI, but by the grandchildren of the people who own everything now.
Life is a marathon, not a sprint. Let's see how many of those are around when we actually go through a full business cycle. No company started since 2003 has really been through a real downturn. 2007/2008 was limited in its blast radius, and the Fed just threw money everywhere…
So 20 years for Tesla and 21 years for SpaceX is a sprint? What amount of time is a marathon? Facebook was founded in 2004, Google in 1998, are they still just sprinting?
How about when Tesla can survive off the margins from ops, without growth and without bilking its customers for vaporware?
How about when they come out with a new body and chassis? When they go through a re-tooling cycle or two?
And again, let's see them make it through a full business cycle, because, again, no company founded since 2001 knows what it's like to run a company in a no-growth or shrinking economy…
What is your definition for a "full business cycle"? Specifically the length? If you are going to define it as "I know it when I see it" then there's no room for a conversation here.
It’s still crazy to me how rapidly AI has grown in mainstream popularity and relevance in the past year. Curious to see this new market develop now with competitors like You.com’s YouChat coming onto the scene.