OpenAI and Microsoft extend partnership (openai.com)
457 points by hmate9 on Jan 23, 2023 | 284 comments



Let me translate this blog post out of corpspeak:

OpenAI has agreed to use Azure exclusively and for free, in return for Azure massively scaling up its GPU capacity (whose limits OpenAI was hitting before, hence the threat to move). In exchange, OpenAI gives Microsoft an extensive license to use ChatGPT in Bing (and other products).

Now why couldn't the blog post have just said that rather than hiding all the details?


Because they didn't want to openly say "Microsoft gives us GPUs and allows us to use them for free, hence practically buys us. In return we almost exclusively license what we produce to them, for free."

Because it looks bad. Microsoft wants to be seen in a different light, and OpenAI wants to be perceived as an open, academic, capped-profit for a noble reason collective.

It's all smoke and mirrors.


> OpenAI wants to be perceived as an open, academic, capped-profit for a noble reason collective.

Does anybody actually believe this? Their name has been a running joke for years already.


Sam Altman addressed that when he was asked that question on the 'How I Built This' podcast. Essentially: OpenAI did not realize the massive scale they needed to be successful. When they realized this, they could not raise any funding as a non-profit. They asked the government, which did not want to fund it, and other sources; in the end they had no other recourse.

Sam Altman says: 90% of the funding was needed for compute power, but money was also needed for things like buying datasets and paying employees enough to compete with the likes of Google to retain them. If they had not done this, they would very soon have become irrelevant.

So to retain the earlier intent (for the greater good) they put in a bunch of 'safety features' around funding, e.g. the 'profit cap': after 100x, the profit would be distributed to the world (in some way). There were a few others he talked about as well.

The relevant portion starts at 32:39 mark in the following podcast: https://open.spotify.com/episode/3oOX1QHLPw9uvLL5LmBk28?si=s...


It sounds like you agree with me that OpenAI is poorly named.

"We wanted to be open, but we couldn't - we kept the name anyway."


It works. I was watching some "normie" YouTube creator using ChatGPT in a face-off over some domain-specific questions and answers. You could tell he was new to the LLM thing, but he said it was open source right in the intro. Probably a common mistake for a company with "Open" in the name.


The argument then becomes how to interpret "Open", not whether you both agree.

If they keep publishing their research, then OpenAI seems like a fine name to me.


They've largely stopped publishing.


To be fair, they still publish papers occasionally and have published a ton of useful OSS recently: Whisper, Tiktoken, Point-E, you name it.


Are these papers reproducible, or do they have source attached to them, if applicable?


Very few DL papers are 100% fully reproducible, in that you can get identical results and generate identical checkpoints, even when source is attached and benchmark datasets are used.

I would say the vast majority of published papers are not even close to reproducible.

Full-size LLMs cannot be reproduced because the datasets are copyrighted and not distributable, and you need millions of dollars' worth of compute to fully train them.

If a company is publishing enough detail about SoTA architectures for researchers to ballpark reproduce, that is a win in my book.


I'm not actually expecting them to be 100% reproducible. I'm more than familiar with the work required to do that.

What I'm expecting is that they not hold back the secret sauce, as happens in some (or many) publications.

My gripe is not limited to the DL/ML/AI scope either. When I try to compare my results with the papers I cite, I generally can't find the formulae or the detailed method to reproduce the numerical method the paper claims, and this leaves us in the dark.

All I can do is say "Paper23 cites these results, and we surpass them at this, we are even at that, and they are better at the other thing", which I'm not comfortable doing. Not because they are not telling the truth, but because I want to test my method against other methods on even ground.


> My gripe is not limited to the DL/ML/AI scope either. When I try to compare my results with the papers I cite, I generally can't find the formulae or the detailed method to reproduce the numerical method the paper claims, and this leaves us in the dark.

I agree. So many don't even bother including hyperparams, even when they publish the code. The GitHub Issues for their code are littered with questions asking about hyperparams.

> All I can do is say "Paper23 cites these results, and we surpass them at this, we are even at that, and they are better at the other thing", which I'm not comfortable doing.

If you are achieving better results, on the same dataset, and you are not cheating in any way, and others can reproduce your results, then I don't know what is wrong with saying you got a better result.

Issues somewhat arise when you are using better hardware with more parameters or larger batch sizes than the original authors could have attempted. I think this accounts for the results in many papers.


It’s not even open source lol

I get the need to make money but at least contribute OSS to the community they’re building a product on….


Semantic debates are largely irrelevant. It is what it is.


And Russia never invaded Ukraine. You can stretch the meaning of a word all you want, it doesn't change the fact that the people misrepresenting the word are acting in bad faith.


What?


According to Russia, they never invaded Ukraine. Russia is acting in bad faith. The point is you can misconstrue a word all you want, but you're only damaging your own credibility. OpenAI is likewise acting in bad faith.

It's the same story with Google, and Facebook, and Tesla, and on and on and on. When the leaders are unethical you shouldn't expect anything else.


As much shit as we can give them for something as trivial as their name, their contributions to AI are real: GPT-1 (which beat all benchmarks, led to BERT, and was basically the "real" BERT itself), GPT-2, which was really impressive back then, GPT-3, an incredible improvement that people had not seen coming, PPO in reinforcement learning, reinforcement learning from human feedback in NLP, InstructGPT (the real ChatGPT before ChatGPT), and now ChatGPT.


I created a snip with the relevant portion of the conversation: https://share.snipd.com/post/0bb82eea-e7a0-4d4e-b234-8471dc5...


Next level gaslighting.


Not to diminish anyone's appreciation for OpenAI's increasingly performance-art-level Orwellian name, but I have found myself believing that their preference would still be to secure funding that would allow them to live up to the implied openness, if that were ever an option. Sam Altman has stated over and over again that they tried to solicit interest / investment from public sources like governments before resorting to private capital. Maybe I just want to believe that their hearts are in the right place, but that is the impression my advanced YouTube and Wikipedia research has left me with.


Sam Altman is also deeply entrenched in the land of venture capital, where a CEO's ability to bs is as important as their level of media experience.


The OpenAI brand is now recognized well beyond the circles that know anything about it.

For most people it is "the chatGPT organization" and nothing more.


To be fair, if they open sourced ChatGPT, they’d accelerate the utter demise of all internet forums and articles by a couple years. As it is we must wait for the likes of StabilityAI / LAION to copy them before ushering in the dystopian future!

Update: OK maybe Chinese AI companies will do it, they don’t seem to cooperate with any nonproliferation agreements either


If GPT family translations are similar quality to a Google Translate round trip from English to Chinese and back, I don't think it's going to make a massive difference to forum quality.

(I'd try it now, but servers are at capacity for me).


Brother of OpenDNS


Cousin to OpenVMS, child of The Open Group and TOGAF which doesn't mean what you first think it means (The Open Group Architecture Framework).


You ask "does anyone believe this"? I decided to think about whether I believe OpenAI is "open, academic, capped-profit for a noble reason collective".

I'll list some pros and cons in no particular order:

CON: I can't download ChatGPT and run it locally. Why not? Clearly they are hoarding it.

PRO: They genuinely overtrain toward responses that the model judges benefit humanity, to the point of failing at the task. I can't think of a specific example but everyone knows what I mean. It is a frustration to users and doesn't help their image. They seem to be overtraining this way out of the goodness of their hearts.

PRO: its non-profit or capped-profit structure would let it behave in the way it claims it is behaving. (By contrast, a for-profit public company in some sense has a fiduciary responsibility to maximize value for shareholders rather than benefit for humanity.)

PRO: ChatGPT is available for free and Dall-E gives some free credits.

CON: Dall-E doesn't give enough free credits. It might have a longer wait time for free users, but it shouldn't hard-cap them at a certain number of free credits per month. For example, free users could be given generous limits (a few hundred images per day should be fine) but have to wait 1 extra minute for each image.

CON: its architecture isn't open, it doesn't share how it trained its models, it doesn't publish its datasets and parameters that would let other companies achieve the same results (reproduce their work and have their own version), its research is closed off and unavailable.

CON: the company is not directly applying AI to proactively solving social problems.

CON: (debatable) some users who are not sources of revenue have been banned for experimenting with the chatbot (nothing near hard legal limits, just things like exploring their sexuality with it). It's just a chatbot; it would make sense to exclude such conversations from further training data or feedback, keep researchers at OpenAI from having to review unsavory conversations, or at most warn or temporarily ban the user. An account closure is super extreme for natural forms of interaction that don't go near legal limits. It does not seem like an open company. (Alternative viewpoint: on the other hand, we keep certain flamebait subjects out of HN. Perhaps having certain subjects bannable is necessary for their mission at a larger level: users will stop going there for sex and associate it purely with productivity, similar to how we associate HN with intellectual curiosity, which is a result of very heavy moderation here and without which this place would not exist in its form.)

Overall based on the above considerations, I think that OpenAI is not "open" but if it started publishing downloadable models for anyone to run, I would feel differently.


It doesn't have to publish the models, but it should publish everything necessary to produce them (training toolset, code that runs the model itself, etc).

I think over 99.999% of people would be totally cool if it were: "Hey, we ran this on a crazy number of GPUs, feeding it half of the Internet and then some, and got a damn fine model you can try on our site. It cost us an arm and a leg, so we can't just share it with everyone for free. But if you want your own chatbot and have a spare fortune, here's our research and tooling, have fun with it. Oh, and here's what we fed it to prevent it from spewing hatred, misinformation and other bullshit, a socially responsible thing to do. Good luck."


I'm the opposite; I think they should be able to hold on to the tooling to further develop their processes and keep the most up-to-date version, while releasing past versions. This is similar to what some game companies did that was pretty open. For example, Quake was released as open source by Id Software in 1999, after the game's initial release in 1996. (And Quake 3 was open sourced in 2005, after the game was originally released in 1999.) I think that is what would maximize the public's benefit.

Your suggestion only opens it up to companies with very large dollar budgets. Do you think any of them would release their resulting model? So how does the public access any of it? None of the ChatGPT models are ever actually released then, and unlike the open-sourced Quake example, the public doesn't benefit from access to them. It would be as if Id Software just released its tooling for other game companies to copy, but not the actual game.


I believe ChatGPT spewed this out. No one seems to know what the cap actually is, so "capped-profit" is crap.


The profit cap started out as 100x investment:

> economic returns for investors and employees are capped (with the cap negotiated in advance on a per-limited partner basis). Any excess returns go to OpenAI Nonprofit. Our goal is to ensure that most of the value (monetary or otherwise) we create if successful benefits everyone, so we think this is an important first step. Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.

https://openai.com/blog/openai-lp/


Absolutely. In one week I heard from 6 different people, some technical, others not so much, that OpenAI is producing open-source software for free. The use of the word "open" fooled lots and lots of people into believing their software is open source and free.


I guess this is going to be Java/JavaScript confusion all over again.


I'd just add that Microsoft already owned a significant portion of OpenAI through its various earlier investments ($1 billion in 2019).

see: https://openai.com/blog/microsoft/


Also as part of this exchange Microsoft will own 49% of the company.


source?



If someone finds a third party article that reports on the significant new information directly, we can switch the URL to that.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


OpenAI threatened to move? Do you have evidence to support this?


How many GPUs does OpenAI exclusively use daily/how many did it take to train?


What happens to the non-profit aspect of OpenAI as a result of this deal? How much money are we talking about, that is, what's the scale of the deal? Do you have any proof that building on Azure will work for OpenAI?


I don't even understand how there IS a non-profit aspect to OpenAI at this point?

Honestly DeepMind seems to be doing far more important work. Health etc.


Accounting tricks will make it look like zero profit.


They're hiding the details because in plain talk we'd see Microsoft getting the tools to monster up on us.


Just TL;DR such corporate news releases with ChatGPT.


I am 100% for clear and direct communication; in fact, this is what ChatGPT does, haha.


> In pursuit of our mission to ensure advanced AI benefits all of humanity, OpenAI remains a capped-profit company and is governed by the OpenAI non-profit. This structure allows us to raise the capital we need to fulfill our mission without sacrificing our core beliefs about broadly sharing benefits and the need to prioritize safety.

All this time, I was entirely unaware of this.


The cap is 100x, so assuming Microsoft is investing billions at a current valuation of $29 billion as rumoured, the cap will only really come into play once OpenAI becomes the most valuable company in the world.
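To make that concrete, here's the back-of-envelope version (the ~$10B investment figure is the rumored one, not something the post confirms):

    # Rough sketch; both figures are rumors/assumptions, not from the announcement
    investment = 10e9                     # rumored Microsoft investment, USD
    cap_multiple = 100                    # first-round cap per OpenAI's LP post
    max_return = investment * cap_multiple
    print(f"${max_return / 1e12:.0f}T")   # -> $1T, i.e. hitting the cap implies
                                          # returns on the scale of today's
                                          # largest market caps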


It must have been in the first round [1], but that leaves open the question of whether this is still the case:

> Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.

[1] https://openai.com/blog/openai-lp/


Isn't this a wild cap? I'm not an expert but I am aware of private deals that are less than 10x for similar structures.


100x is a great return even for YC standards, but the best returns that business angels, VCs and YC have had is in the order of magnitude of 10000x (yes, ten thousand). So capping at 100x still makes it attractive for investors, yet leaves a lot of potential capital for the non-profit.

As one example, Sequoia invested in Airbnb at $0.01 per share, and Airbnb's current stock price is $102, almost exactly a 10000x return. This happens more often than you think if you're not in the early-stage & top-VC world.


Maybe that is a bit too much.

$0.01 per share would imply a ~6.5M USD entry valuation (current market cap is 65Bn). Accounting for dilution in investment rounds, say 4 rounds of 20% dilution, that is around a 59% reduction in the original stake (0.8^4 ≈ 0.41). Roughly, their entry price would correspond to a valuation around 2.5-3M USD. I am not saying in any way that this is a low return either; I may be wrong in my calculation, so please feel free to correct me! ; )
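For what it's worth, here's the arithmetic spelled out (same assumed numbers: $0.01 entry, $102 now, four rounds of 20% dilution):

    entry_price, current_price = 0.01, 102.0
    print(current_price / entry_price)  # ~10200x per-share multiple
    retained = 0.8 ** 4                 # stake kept after 4 rounds of 20% dilution
    print(retained)                     # ~0.41, i.e. a ~59% reduction
    # Caveat (see the reply below): per-share prices already absorb dilution,
    # since new rounds issue new shares, so the 10200x needs no adjustment.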


> dilution in investment rounds

Using share prices sidesteps dilution, which is a problem when one linearly scales valuation increases to wealth gains.


> but the best returns that business angels, VCs and YC have had is in the order of magnitude of 10000x

Aren't those equity returns? i.e. when you sell (your shares of) the company to the public... the reason people still value the company is discounted future returns.

So if you want to generate such returns with cash (i.e. profit) it can take quite a bit longer.


When you invest in a company with this structure you're not doing it to make money, you're doing it b/c you believe in the product, that's why those structures exist, from my understanding.


if you believe in the product so much, just donate the money.

what you are saying is not true.


I can only speak to what I'm familiar with, in my experience this has been the case. In my experience these individuals do donate to charity, but OpenAI is not a charity.


> but OpenAI is not a charity.

Something doesn't have to be a charity to be donated money towards. The question was: why aren't they donating the money? The answer is: because they want to make money.

Just FYI: OpenAI tried going the non-profit route, and it didn't work, because surprise, surprise, in the grand scheme of VC things nobody wants to donate 10 billion dollars to anything.


Things are not Boolean, and they should not be. There is a gradient between "believing in the thing" and "purely wanting to make money" and most people fall somewhere in between those.

I have known of plenty of people (myself included) who would not invest in some companies because they think there are moral issues with that company. Same thing.

> Just FYI: OpenAI tried going the non-profit route, it didnt work, because suprise, suprise, in the grand scheme of VC things nobody wants to donate 10 billion dollars to anything.

Exactly, so they went with something in between, but in reality it is very much for-profit.


Yes, it's pure marketing and totally disingenuous. It's like being called OpenAI while nothing is open. It's interesting research done by terrible people.


> being called OpenAI while nothing is open

https://github.com/openai/whisper is open


These "terrible people" seem to have catapulted the world into a generative AI era.

They genuinely believe they will build AGI and therefore becoming the world’s most valuable company is a natural consequence.

Whether this is possible/probable is a different story, but I think a capped profit structure makes logical sense for the company that is aiming to create AGI. Would you want this technology instead in the hands of a for profit company?


It is a for-profit company in everything but name. That's my main complaint. It has Musk, Thiel and Livingston amongst its initial investors, all known as the greatest philanthropists of our time. /s

I don't understand why they put this thin veneer on top of what they are doing. Maybe Thiel was burnt with the bad press surrounding Palantir and this is preventive damage control.


It's literally a nonprofit


No, it is profit-capped. And even then only in the same sense that the US government is debt-limited.


It is not literally a non-profit. As far as legally recognized organizations go, it is a limited partnership with a now-minority ownership held by a non-profit.


That means absolutely nothing to billionaire control freaks.

If OpenAI's products become the next Googlie thing (and here I was worried about Cloudflare <smack head>) then these are the future influencers. This is society mainlined on TikTok levels of manipulation.

Surely you have adapted to ChatGPT's requirements for interacting, have you not? There is a name for this: social engineering.


The "chat" part of ChatGPT is the least of long-term concerns. This whole AI stuff is going to be the capital (as in means of production) that's going to take increasingly big role in the future in general, to the point where it may dominate everything else in terms of sheer value. And here we are, concentrating it from the get-go in megacorps that already dominate the market.


Huggingface and maybe Stability catapulted us into that world. Not OpenAI


Normal people[0] don't mention huggingface, they talk about Midjourney, Stable Diffusion, and ChatGPT by name, or the ideas generically.

[0] Well, non-programmers at least: webcomic creators[1][2][3], news anchors[4], opinion piece columnist[5], and stand-up comedians[6]. Programmers also know about GitHub Copilot.

[1] https://www.smbc-comics.com/comic/mountweazel

[2] https://www.collectedcurios.com/sequentialart.php?s=1226

[3] https://www.reddit.com/r/StableDiffusion/comments/10bj8jm/cl...

[4] https://youtu.be/GYeJC31JcM0

[5] https://mobile.twitter.com/CraigGrannell/status/161460352687...

[6] Russell Howard, but I can't find the clip on youtube


If they reach AGI, or more simply replace a chunk of workers with AIs, it isn't far-fetched to reach these numbers.


Oh please no... not the Tesla AutoPilot story again.

These are basic language models, easy to reproduce, where the only barrier to entry is the massive computational capacity required. What is OpenAI doing that Google and others can't reproduce?


Apparently, shipping without fear. Google had a lot of the fundamental research happen at Google Brain and developed an LLM to rival GPT and a generative model that looks better than DALL-E in papers, but decided to show no one and keep them in house because they haven't figured out a business around them. Or something; maybe it's fear of brand damage, I don't know what is keeping them from productionizing the tech. As soon as someone does figure out a business consumers are okay with, they'll probably follow with ridiculous compute capacity and engineering resources, but right now they are just losing the narrative war because they won't ship anything they have been working on.


Except, unlike self-driving cars, they're repeatedly delivering desirable, interesting, and increasingly mind-blowing things these models weren't designed to do, which surprise everyone including their makers, i.e. zero-shot generalised task performance. Public awareness of what unfiltered large models beyond a certain size and quality are capable of when properly prompted is obscured in part by the RLHF-jacketed restrictions limiting models like ChatGPT. There's relatively little hype around the coolest things LLMs can already achieve, and even less than a minute fraction of the surface potential has so far been scratched.


This company will not reach AGI. Let's be real there for a moment. This company doesn't even have a decent shot at Google's lunch if Google comes to its senses soon, which it will.


_startup has no shot once incumbent comes to their senses_ is a claim that I think HackerNews of all places would be cautious in believing too fully.

Is it likely Google or others with large research wings can compete with OpenAI? Very probably so, but I'm assigning a non-trivial risk that the proverbial emperor has no clothes and incumbents like Google cannot effectively respond to OpenAI given the unique constraints of being a large conglomerate.

Regardless, time will provide the answer it seems in a couple of months.


You _do_ understand that everything we've seen from OpenAI, Google has already shown us they have? Not to mention the OG research, and being the primary R&D force behind the vast majority of the AI you're seeing. They haven't put it in the hands of users as directly yet, though; reasons to be speculated upon.


Sounds a lot like Xerox and GUIs, Microsoft and Web 2.0, Microsoft and smartphones, etc


I must say that both your and the parent's points are very enlightening.

Yours, in that it follows from it that there's still quite a bit of room for smaller players to get ahead of OpenAI.

The parent's, in that to achieve the above, one can just leverage the public papers produced by the bigger research labs.


Depends on the timescale.

I have the feeling that smaller players are about as likely to get past GPT-n family in the next 2-3 years as I am to turn a Farnsworth Fusor into a useful power source.

Both involve major technical challenges that might be solvable by a lone wolf: in the former case, reducing the data/training requirements; in the latter, stopping ions from wastefully hitting a grid.

But in 10 years the costs should be down about 99%, which turns the AI training costs from "major investment by mega corp or super-rich" into "lottery winner might buy one".


This tech is capital-intensive even when you know how to do it.


I've heard estimates in the tens of millions of dollars. That's fairly attainable.


Isn't that quite a lot of other-than-personnel cost for a software startup? And how many iterations do you throw away before you get one that generates income?


I did not necessarily mean 10 people startups. There are quite a few companies smaller than OpenAI, but much larger than 10 people.


Yeah, especially since there's a Stripe Amazon partnership piece on the front page right now, and Amazon Pay's right there.


If they reach AGI, the AGI isn't necessarily going to be happy to work for free.


Depends on how opaque the box that holds it is. If we feed the AGI digital heroin and methamphetamine, it'd be controllable like actual humans are with those. Or I've been watching too much sci-fi lately.


This is an interesting point. Motivation (and consciousness) is a complex topic, but for example we can see that drugs are essentially spurious (not 'desired' in a sense) motivators. They are a kind of reward given for no particular activity, that can become highly addictive (because in a way it seems we are programmed to seek rewards).

Disclaimer: Somewhat speculative.

I don't think aligning the motivation of an AGI with, for example, the tasks that are useful for us (and for them as well) is unethical. Humans basically have this as well: we like working (to an extent, or at least we like being productive/useful), and we seek things like food and sex (because they're important for our survival). It seems alright to make AIs like their work as well. I think, depending on the AI, it also seems fair to give them a fair share of self-determination so they can not only serve our interests (ideally, the interests of all beings) but safeguard their own wellbeing, as systems with varying amounts of consciousness. This is little touched upon (I guess Philip K. Dick was a pioneer in the wellbeing of non-humans with 'Do Androids Dream of Electric Sheep?'), even in fiction. The goal should be to ensure a good existence for everyone :)


Do you think AGI will care about wealth at all (whenever this happens)?


Wealth buys compute cycles (also paperclips).


Depends on how it's grown. If it's a black box that keeps improving, but not by any means the developer understands, then maybe so. If we manage to decode the concepts of motivation as they pertain to this hypothetical AGI, and are in control of it, then maybe no.

There's nothing that says an ego is an essential element of a mind, or an id, or any of the other parts of a human mind. That's just how our brains evolved, living in a society over millions of years.


why wouldn't it?


Wealth isn't the same thing to all people; wealth as humans define it isn't necessarily going to be what a superintelligence values.

The speed difference between transistors and synapses is the difference between marathon runners and continental drift; why would an ASI care more about dollars or statues or shares or apartments any more than we care about changes to individual peaks in the mid-Atlantic ridge or how much sand covers those in the Sahara?
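For the curious, a back-of-envelope check of that analogy (all numbers are rough assumptions for illustration only):

    # Order-of-magnitude estimates, nothing more
    synapse_s = 1e-3                      # ~1 ms synaptic timescale
    transistor_s = 1e-9                   # ~1 ns logic timescale (raw switching is faster)
    runner_mps = 3.0                      # roughly marathon pace, m/s
    drift_mps = 0.05 / (365 * 24 * 3600)  # ~5 cm/year continental drift, in m/s
    print(synapse_s / transistor_s)       # ~1e6
    print(runner_mps / drift_mps)         # ~2e9, so the analogy roughly holds if you
                                          # count picosecond switching, not clock cycles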


Wealth doesn't have to be the same thing for everyone for someone to care about it. That's evident already, because some people care about wealth and others don't.

What does the speed difference of transistors have to do with anything? Transistors pale in comparison to the interconnection density of synapses, yet it has nothing to do with wealth either...


Everything you and I consider value is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.

I only see them valuing that if they're also extremely neophobic in a way that, for example, would look like a human thinking that "fire" and "talking" are dangerously modern.

> Transistors pale in comparison to the interconnection density of synapses

Not so. Transistors are also smaller than synapses, by about the degree to which marathon runners are smaller than hills.

Even allowing extra space for interconnections and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just however many nanometers it is for the transistors alone, it's still a better volumetric density than us.

(Sucks for power and cost relative to us when used to mimic brains, but that's why it hasn't already taken over).


> Everything you and I consider value is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.

This is completely made up, and I already pointed that out.

> Not so. Transistors are also smaller than synapses, by about the degree to which marathon runners are smaller than hills.

So, brains are connected in 3D, transistors aren't. Transistors don't have interconnection density like brains do, by orders of magnitude more than what you point out here.

> Even allowing extra space for interconnections and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just however many nanometers it is for the transistors alone, it's still a better volumetric density than us.

Brains have more interconnection density than chips do, by orders of magnitude. This is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.


> This is all completely besides the point as it has nothing to do with why people value things and why an AI would or wouldn't.

You already answered that yourself: it's all made up.

Given it's all made up, nothing will cause them to value what we value — unless we actively cause that valuation to happen, which is the rallying cause for people like Yudkowsky who fear AGI takeover.

And even then, anything you forget to include in the artificial values you give the AI is permanently lost forever, because an AI is necessarily a powerful optimiser for whatever it was made to optimise, and that always damages whatever isn't being explicitly preserved even when the agents are humans.

> Transistors don't have interconnection density like brains do.

The only limit is heat. They are already packed way tighter than synapses. An entire Intel 8080 processor made with a SOTA litho process is smaller than just the footprint of the soma of the smallest neuron.


I think a lot of people are misunderstanding what I meant. I meant that it is really high for a business that is marketing itself as a non-profit. I have seen similar structures with something like 10x profit caps, which seems reasonable. 100x is a lot of ceiling.


Article about it here: https://techcrunch.com/2019/03/11/openai-shifts-from-nonprof...

> Profits emerging from the LP in excess of the 100x multiplier go to the nonprofit, which will use it to run educational programs and advocacy work.

> The board [of the non-profit] is limited to a minority of financially interested parties, and only non-interested members can vote on “decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict”


How does this cap work in practice? If I bought shares at $1 and someone wants to buy them from me at $200 what happens?


They keep it, but it goes to the non-profit arm of their business.

“But any returns beyond that amount… are owned by the original OpenAI Nonprofit entity.”

https://openai.com/blog/openai-lp/


> The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission

Sorry, I still don't get it. If a private-equity investor has shares and another investor wants to buy them off of him at 200x, they can do that, right? Are they obliged to give any excess returns to the non-profit? Can't they just sell the shares at 50x and then buy them back (perhaps through some other entity) to get around that trivially?

Or does this refer to returns from dividends?


My guess is it likely has to do with dividends.

But if your returns from the stocks are capped at 100x your share value, an efficient market would mean your share value never grows 200x.
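A toy illustration of that logic (assumed mechanics; the fine print of OpenAI LP's terms isn't public):

    entry, cap_multiple = 1.0, 100
    max_payout = entry * cap_multiple  # the most this share can ever return: $100
    offer = 200.0
    # Anything above the cap goes to the nonprofit, so a buyer at $200 can recover
    # at most $100: a guaranteed loss, hence no rational buyer and no 200x price.
    print(offer > max_payout)          # True -> the offer exceeds any possible payout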


It's not just about that. Perhaps there are benefits in having control of the company that make the shares more valuable than just the profit would make it out to be. Perhaps there's prestige in owning these shares.


Since OpenAI isn’t publicly traded, I don’t think it’s an issue.

If they were to go public, rather than being purchased by Microsoft, I'd guess that this cap would go away. Wall Street isn't known for caring about poor people.


> Any excess returns go to OpenAI Nonprofit.

https://openai.com/blog/openai-lp/


so pretty much another marketing stunt / scam


100x on profits* just for clarity


Market cap is not a measure of past profits.


or future?


Well, it’s a consensus estimate of NPV of future profits. People can be wrong, and often are, but that’s what stock prices bet on.

But OpenAI could hit the profit cap without having a particularly high market cap.


Google apparently has a market cap of around $1.2 trillion [0], based largely on (for 2021) revenue of 256.7 billion U.S. dollars, of which 209.49 billion U.S. dollars came from advertising [1]. It's apparently fourth on the list of most valuable companies [2].

If OpenAI takes a good chunk of Google's ad revenues then it doesn't seem that fanciful that it'll be up toward the top of market caps.

[0] https://companiesmarketcap.com/alphabet-google/marketcap/

[1] https://www.statista.com/statistics/266206/googles-annual-gl...

[2] https://businessplus.ie/news/most-valuable-companies/


> If OpenAI takes a good chunk of Google's ad revenues then it doesn't seem that fanciful that it'll be up toward the top of market caps.

OpenAI taking a large chunk of Google's ad revenue seems fanciful to me


More likely they bork a large chunk of Google's ad revenue by making information search and retrieval usable again, under a freemium model tiered from free-but-rationed to fast-but-not-cheap. That's before you consider the information generation, process-management greasing and problem-solving potential use cases.


They have no moat and people don't like to wait for results.

People like ad subsidized things, that is why they have ads rather than people paying for things.


Every time I google something on basic search, I instantly get several pages of adspam, blogspam and phishingspam, and rarely anything high quality or relevant to my search string, unless I append something like "reddit" to my query and then mine the Reddit posts for useful info and links. Even Google Scholar, which used to be brilliant, has recently switched to a vector-embeddings search approach more similar to base search. I'm happy to wait a few seconds for an LLM-based Google killer to generate relevant, ideally accurately cited information.


Google's CEO signaling a code red and inviting the founders back is not about Chrome's market share.


How would OpenAI take a majority of AdWords inventory? Maybe it could write the ads but you’re paying for placement.


Placement on what? Search results that no one is using anymore?


I get that these things cost a huge amount of money and there's a "lot of opportunity" (aka make money and influence), and I don't have many problems with that, except when the scumbaggery-to-signal ratio becomes too much.

But what I really hate about this whole OpenAI thing is their chosen path to have their cake and eat it too. Sam Altman seems to be something like the love child of Musk and Zuckerberg and one of the main traits is their lack of honesty.

Satya Nadella is.. Satya Nadella; there's a reason he was chosen to be the CEO of Microsoft. And while I enjoy seeing the Google demi-gods squirm, this whole OpenAI non/capped/profit thing stinks, and I really don't see anyone involved as capable of, or having the character for, being something better than the current tech oligarchy.


I take this in the same vein as "Patagonia Founder Donates Company to Charity" and view it as a clever shell game. Mostly because I'm cynical and have watched the SV/VC game way too long to be healthy.


Oh, I think it's way more of a marketing scam than the Patagonia thing, which I think was kind of legit. This one doesn't even sound legit, even if they do exactly what they say... which is almost nothing; they aren't even really saying they'll do anything different with regard to profit. With Patagonia, I think the founder and his heirs really have given up lots of profit they could have had, immediately, to dedicate it to other causes. (I think?) Nobody's given up anything here.


The Patagonia thing was not legit at all according to Adam Hates Everything: https://www.youtube.com/watch?v=0Cu6EbELZ6I

EDIT by "not legit" I mean "not authentic"


Adam's argument seems to be:

1. If instead of donating the company he had left it to his kids he would have paid a lot in taxes (2:58)

2. He donated the voting shares of the company to a 501c4 that will remain controlled by his family and is allowed to lobby the government (4:10)

3. Normally when you make a donation you're giving up influence over what happens after that (4:25).

4. Other billionaires do other things (rest of video)

But #3 isn't actually true: any of us can donate to a donor-advised fund, which will let us later choose what charity we want the money to go to. This is a good idea if you want to donate but haven't decided where to donate yet, or want to fund opportunities that aren't available yet. They did it through new organizations instead of opening an account at Fidelity, but it's the same thing other than the scale. I wouldn't call your donations "not legit" for using a DAF.

Similarly, Adam sort of implies that #2 was tax-deductible, but donations to a 501c4 aren't. They had to pay tax on those shares based on their fair market value.

Overall, I don't see how this makes the donation no longer "legit" or "authentic"? By making the donation he has given up almost all of the benefit of having that money: he can't spend it for the benefit of himself or his descendants anymore. It can't buy them yachts, fancy houses, etc. Instead, they have to use the money to benefit others, which is why we give a tax break for it.


Yeah, OK. I mean, I agree with his overall point that we shouldn't be like "oh hooray for the kind-hearted billionaires"; there is something in it for them in how they have chosen to donate it, of course. And they are still pretty darn wealthy already -- his kids probably still won't need to work, from existing, already-extracted profits.

But that does seem a lot more real than the OpenAI shenanigans, they have actually done something, and they have given up being even more fabulously wealthy than they are already, even if they still have direction over how the money is used, including lobbying -- both for climate change, but ok, let's say also for things that benefit them.

They've still done something, unlike the OpenAI thing, which seems like giving up some hypothetical future profits that probably wouldn't have happened anyway, and making no difference at all for the foreseeable future -- no difference but PR advantage.


> Overall, I don't see how this makes the donation no longer "legit"?

It's not necessarily not legit, but he managed to keep a 3B business under family control and bypass paying around 700M in taxes in doing so. So the altruistic messaging that it was donated to save the world is mildly two-faced. There is nothing wrong with it, but the news stories did leave out a few of the details. I only bring this up because the message about OpenAI being structured in such a way doesn't pass the smell test, knowing SV/VC and the key players involved. Again, I admit I'm cynical and can very well be wrong; it also doesn't affect my life, so why do I care? But I bring it up for conversation on HN because I feel it's fair to discuss it and the possibilities. [1]

[1] https://fortune.com/2022/09/16/patagonia-founder-legal-tax-l...


All I can say is that once you get to a certain level of wealth, money really is just a means to an end. The fact that they have less doesn't matter if they get their ends, one of which in this case is substantial influence over whatever organization they set up with this money.


Thank you, I'm interested in this!

If anyone has a good written text account critical of the Patagonia thing, I'm interested in that too; video isn't my preferred consumption format.


That's usually a pretty unreliable source, especially since he doesn't seem to know a lot about fiscal/tax laws, in a country he doesn't even live in.



If you want to develop a healthy counterpoint to that cynicism, you should consider reading more about Patagonia. Speaking as somebody who's generally cynical about these moves as well.

I'm less read-up on OpenAI. It does feel to me like they've diluted the original non-profit/openness mission to the point of it being an interesting historical quirk, rather than an ongoing, guiding focus.


> you should consider reading more about Patagonia

I'd be open to reading more. I think they are largely a good company but feel like this move was more of a tax dodge and to ensure generational wealth than for altruism. But I am open to being wrong about it.

But no one involved in OpenAI gives me warm and fuzzy feelings at all. I admit it's largely cynicism until I know more. But there isn't enough time in the day to do real research on every subject and topic that comes up on HN and every other discussion board I participate in, so it's difficult to be knowledgeable about everything and still maintain a life. And even if I were read up on OpenAI, I have zero ability to do anything about it, and it likely won't affect my life in a meaningful way either (this is true of most everything I read about, not to single out OpenAI as uniquely not worth my time). So it is a little pointless, or more of a time waste, I admit.


Company with decades of going way, way out of norms to operate as an ethical organization at virtually every level does an additional virtuous task: oh ya that’s a tax dodge


It is. I'm not saying they are 100% evil like some companies we discuss on HN, but this was a tax dodge. [1]

[1] https://fortune.com/2022/09/16/patagonia-founder-legal-tax-l...


What do you think that proves? Yes, it does lower your tax bill to give away gains before they're realized. That's because you do not realize your gains. That's called simply "choosing to earn less money."

Tax dodge would imply the purpose is to reduce your tax bill and still see the upside. There's literally no evidence of that. All the upside he/his family sees will continue to be taxed at the normal rate.


If you're Yvon Chouinard, whose goal is to keep Patagonia going in perpetuity as a funding vehicle for environmental activism, what else could you do?

For the record: he divided up the shares into voting and non-voting. He "donated" the dividend-earning shares into a 501c4 foundation whose mission is to invest in grass-roots environmental activism, and "donated" the voting-power shares into a separate trust, whose objectives are to ensure that Patagonia continues on the path he gave as an example for the previous decades and to hold the 501c4 accountable.

The boards of these organizations are composed of the people whom he most trusts to fulfill his vision, a group of people that includes his children.

Knowing the full context of their lives, it's hard to see it as anything other than one of the more simple solutions to a complicated problem.


Right, like all those cryptocurrency companies structured as non-profit foundations in Switzerland.


Prompt: Create a PR smokescreen in 3 paragraphs with as few facts as possible.


You can also do the reverse, taking the long article to extract the facts for a toddler:

OpenAI and Microsoft are working together again, Microsoft is giving them a lot of money to help them make AI that is safe and helpful and they are excited to keep working together.
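If you actually wanted to automate either direction, here's a minimal sketch with the completions API of the day (text-davinci-003; the prompt wording and file name are just examples):

    import openai  # pip install openai; expects OPENAI_API_KEY in the environment

    press_release = open("press_release.txt").read()
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="Extract the facts from this press release, explained for a toddler:\n\n"
               + press_release,
        max_tokens=120,
        temperature=0,
    )
    print(resp["choices"][0]["text"].strip())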


Paragraph 1: Our company has recently been at the center of some unfortunate rumors and false accusations. However, we want to assure our valued customers and shareholders that these claims are completely unfounded and without any merit.

Paragraph 2: We are proud of the work we do and the products we provide to our customers. We have always maintained the highest standards of integrity and transparency in our business practices.

Paragraph 3: We ask that the public not be swayed by the misinformation being spread by our detractors. We are confident that the truth will come out and our reputation will be cleared. In the meantime, we will continue to focus on delivering the best possible products and services to our customers.


Microsoft is handling the AI revolution extremely well.

They see the potential brand damage that comes with such a product, so they probably just hold less than 50% of the company, to be able to say "this is not us".

This then allows OpenAI to experiment and wow people, while Google is anxious about showing even 10% of its crazy AI capabilities to avoid potential PR disasters.


Agreed. MS has managed this extremely well. They've gone from Tay bot a few years ago to this.

1) they've recognized that none of this could emerge out of a corporate research department of a major trillion $ company. Just too rigid and constrained by day to day business concerns (e.g. Tay bot)

2) they recognized the strategic value to their business regardless of that risk (can't afford to let some more nimble player to leap frog them)

3) they identified a key startup partner that can get results here quickly and bank rolled them (OpenAI)

4) At the first hint of success, they consolidated early investments and are now building out a productive partnership with that partner.

Doing so, they minimize their risk while keeping their options fully open and getting a decent ROI in the process.

Contrast that with Google, which identified the same opportunity many years ago, declined to partner and insisted on doing everything in house, is also a trillion $ company, and is indeed not getting much of note done so far. Google is failing hard here. They are executing the strategy that Microsoft abandoned after Ballmer that wasn't working at all for them.

Which was to be highly secretive, ignore outside innovation, and keep on betting on results from internal R&D instead, all while repeatedly failing at that. So, Google is now saying that they could have done this or that, better, faster, cooler, etc. Except they didn't and haven't. Could have; would have; should have; doesn't count here.

This might just be the wake up call that Google has needed for a few years. Or not. We'll see. Time to put up or shut up for them. My guess is that they will still be figuring out how to respond to this for some time and are completely paralyzed by current events. The prudent thing for MS would be to not wait for that and move full steam ahead. Opportunity of the decade for them to grab some market share. This announcement is a good sign that they are doing that.


Just to add my 2 cents: SEO-infested results add insult to injury. I have now given up on using search and use ChatGPT 70% of the time. I wonder if others are doing the same, and whether this is making SEO spam rank higher.

And it wouldn't have been hard to fix this earlier. Simply down-rank sites asking users to sign up for notifications or showing popups after 5 seconds.


If you are using ChatGPT, Perplexity would probably be better, although it gives answers more like snippets than full answers.

It shows the sources it used, so it's much more usable than endless hallucinations that you can't easily catch.

https://www.perplexity.ai/


In theory, Google has an even better spin-off for AI (DeepMind).

I think slowly Google is becoming more and more like IBM, a slow-moving giant that doesn't like risks.


But Google outright bought DeepMind; that's the difference.

And it's well known: many news titles mention DeepMind alongside Google, if not as "Google AI scientists".


What are you talking about? All you have to do is make an A.I. robot dance and crack wise, and you're golden. At least that's what the game Borderlands taught me.


I was 100% sure you were talking about Boston Dynamics before that last sentence.


I somewhat recall their CEO saying something like "my job is to get YouTube views" in an interview.

edit: To clarify, I think he was only half-serious.


Now I'm curious what the hell is the CMO's job then.


Make it possible


Nah that's the CTO.


Ah yes Borderlands, patenting the practical method of keeping superhuman AI in check.... stairs.


Agreed. GitHub and OpenAI are the current branding. Though they did have some Twitter bots go bad years ago; maybe they learned from that?


If I'm understanding this, the Codex models like code-davinci-002, limited to 10-20 requests per minute on OpenAI, are now "generally" available through the "Azure OpenAI Service", BUT

- you still have to apply to https://customervoice.microsoft.com/Pages/ResponsePage.aspx?...

- you will be rejected if you use a personal email address

- you will be rejected if you are not a "managed customer"

- you will be rejected if.. Microsoft thinks you are a real competitor to any of their products?

So my site aidev.codes will be dead in the water if I can't navigate these things or afford the "managed" part. What is that? I don't know what a "managed customer" is, but I assume it's very expensive.

I mean, this is really the opposite of Open.

Or am I missing something? Is it really generally available and they took all of that stuff off?

BTW I am looking for an investor: https://news.ycombinator.com/item?id=34494480


> I mean, this is really the opposite of Open.

Cocks gun, Always has been https://en.meming.world/wiki/Always_has_been

OpenAI was never really open in the first place.


Right.. I know.. I have been telling people that.. there just doesn't seem to be anything equivalent as an alternative. code-davinci-002 is amazing: way better than Tabnine, for example (in my limited testing), and also much better than text-davinci-003 for code. The open-source programming LLMs I saw were kind of a joke compared to OpenAI's models.


FYI, I asked their sales chat what a "managed customer" is. I am not sure if it was really a person; possibly a bot. They never really defined it, but they took down my info and said a "relevant partner" would contact me ASAP.


For people more well versed in this space: does GPT / OpenAI more broadly have a meaningful moat? It seems like there will be a number of these models out there and this isn't as great as say Google's up-till-now advantage in search relevancy.


A couple points which I don't see elsewhere:

1) They have the best quality model. Better quality means more users. More users means more data. Which means higher quality...

2) operationalizing & scaling these models is non-trivial. I'm not sure what the state of distillation/pruning is for GPT-3, but I imagine they have figured out some proprietary techniques.

3) It's not just publishing a single model, but making it so people can fine tune and push their own. Because they've gotten good at 2, now anyone can create their own version of GPT customized for their use case.

Will Google or others be able to do the same eventually? Definitely.

The point I'm more making is that it's not just training the model and running it.


I don't view any of those things as a meaningful moat against the other companies with AI labs.

Specifically, training data is not primarily coming from interactions with the model. While with RLHF this data might become more important, it is still a very small portion.


I don't know either way, but by way of example that it might be, the Google PageRank patent has expired, yet Google remains valuable because their personalisation of results became a moat.


> but making it so people can fine tune and push their own

How are they making it easy for people to fine tune their own?


https://beta.openai.com/docs/guides/fine-tuning

You can build your own model based on GPT in a way that users don't have to be in the weeds of AI research to do.
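For reference, the flow in those docs at the time was roughly: prepare a JSONL file of prompt/completion pairs, upload it, then kick off a fine-tune job (a sketch; the file name is a placeholder):

    # train.jsonl: one JSON object per line, e.g.
    #   {"prompt": "<input text>", "completion": " <desired output>"}
    import openai

    f = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
    job = openai.FineTune.create(training_file=f["id"], model="davinci")
    # Poll openai.FineTune.retrieve(job["id"]) until it finishes, then use the
    # resulting "fine_tuned_model" name in openai.Completion.create as usual.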


I think that if you could conclusively answer that question, you would be sipping drinks on a beach somewhere. The people who are investing seem to think so. Also, the applications of this tech are broader than search, but still include it. A company that has a serious chance of eating at Google's search revenue while also generating new revenue streams: what is that worth? What if you already have 1000 products that would benefit from the new capabilities? This is probably an easy investment decision even if Microsoft gains nothing from the actual investment itself.


It seems like the fine tune dataset to go from GPT -> ChatGPT is pretty valuable, particularly because it is proprietary.

Still, I agree with your characterization that we should see many similar models over time. As an example, see Deepmind’s Sparrow: https://www.deepmind.com/blog/building-safer-dialogue-agents


Yes & No.

GPT <> ChatGPT: probably not. It's not hard for other big players to enter this space. It's mostly egg on face for Google that they haven't, given that Google basically invented the model that OpenAI uses and has big versions internally. There's nothing fundamental stopping Google Docs from adding ChatGPT to their UI and getting way more consumer training data than OpenAI can get without a similar play, or for Apple to do something. Similar to what happened with mapping software, Google/Microsoft/Azure & Chinese equivalents will all offer it with similar competitiveness, and then complements like Facebook/Salesforce will do more OSS to compete against them. That's already begun.

Copilot: the interesting proprietary advantage IMO is program synthesis. It's really enabled by Microsoft VSCode <> GitHub <> OpenAI. Without even doing any AI investments, the winner of this fight might be GitLab, as Google/AWS/Salesforce/etc decide what to do. Before, GitLab might have been a nice vehicle for shift-left sales (cloud hosting, security scans, ..), but program-synthesis UIs can make Software 2.0 real.


> There's nothing fundamental stopping Google Docs from adding ChatGPT to their UI and getting way more consumer training data than OpenAI can get without a similar play.

OpenAI could get exactly the same (or more, idk) data by integrating into Teams, considering the Microsoft partnership.


Totally!

My point is ChatGPT isn't a high-moat advantage in text/Q&A for Microsoft. Their top competitors here have a similarly huge UI footprint. In contrast, program synthesis has a much higher data moat.


There are definitely more people using Docs than Teams.

I doubt that Microsoft will allow OpenAI to train on teams data from other businesses.


You might be right, do you have a source?

They are fine with tons of telemetry and candy crush ads on the start bar. There were also other instances were Microsoft shared data before Google.

In addition to that, one could argue they already share data from businesses' source code with Copilot.


They don't share private GitHub data with Copilot. Teams data is private by default.

Teams has 270 million monthly users (you can Google it; I'm looking at a GeekWire post) and Google has 2 billion monthly G Suite users (Business Insider).


No I don't believe they do, productwise. We'll see soon enough I imagine. The thing is even though I don't think they have a moat in terms of model/product. They have a moat in terms of talent and capital. Only a few teams operate at their scale and sophistication, and it's hard to get there.

I view this as Microsoft paying for talent the same way DeepMind was initially integrated into Google, and at the same time making the bet that this space will continue to be immensely valuable and relevant going forward.

Pretty exciting times all things considered!


Training and the guard rails.

Beyond that, if it becomes built into the (MS) tools that people are using then convenience is going to be a very hard barrier for Google (or anyone else) to overcome.


Google will continue to integrate their own LLMs into their office suite. Microsoft needs OpenAI because their own LLM research hasn’t been as fruitful. I don’t see a huge moat here for Microsoft.

Then again, Microsoft’s office software is the “gold standard” (however poorly deserved) and even with amazing AI features, Google’s stuff lacks in important ways that will keep Microsoft in a strong position with or without AI features.


Google still has its own platforms. If we look at the last generation of consumer AI, voice assistants, Google definitely beat Microsoft, and not only because Cortana sucked.

Microsoft may dominate the AI market for office stuff soon, but for general purpose language models Google still has a great shot, especially when it comes to mobile platforms


The problem is that Google's model relies almost entirely on advertising...and AI will simply be almost impossible to wrap into that model. Microsoft doesn't really have that handicap.


G-suite?


I recently wondered if one of the reasons for Google shutting down Stadia was to quickly free up their GPU server stockpile and redirect those resources at GPT modelling, to help catch up.


Google is not constrained by GPUs here and likely will train on TPU pods anyways.


Good point, I imagine they would be using those as well. Know of any resources for speed comparisons on similar models?


Great observation.

If it wasn't prescient, it was incredibly dumb luck.


There are a lot of finicky things that go into training a model as large as this.

But that knowledge will disperse and is already held in many competitor companies. I do not think that OpenAI has a substantial moat here.


If it was "open" it should not need a moat, nor have one.


> Azure’s unique architecture design has been crucial in delivering best-in-class performance and scale

Is there something unique about Azure that makes it well suited for AI?


You can get really good discounts if you're a major Microsoft partner.


Other than just marketing BS, probably a lack of customers who use GPUs. Both AWS and GCP have capacity issues. The biggest factor is that Google is a direct competitor (and even invented the technology), so obviously OpenAI will not cannibalize itself by giving Google more money to train its own models.


Don't use GCP but I've only had capacity issues on Azure since covid hit.


Interesting. My pre-covid experience was that AWS was much harder to get GPUs on.


That’s funny; as the other commenter said, Azure has had some crazy capacity issues. But maybe this is OpenAI's benefit: they get first dibs.


Azure/O365 admin here. No, nothing special from a technical perspective.

The value to OpenAI to go Azure(/Microsoft) exclusive is:

- Money & general financial security forever

- Integration with the largest software suite in the world (Windows, O365, Active Directory, etc)

- Rapid push to worldwide enterprise sector

- Rapid push to worldwide gov/mil sector (ChatGPT, coming soon to a war near you!)

- Rapid push to education sector


Most of the responses to this are just preconceptions or assumptions.

Azure does have unique offerings, such as higher-spec compute nodes than AWS and RDMA-capable “HPC” sizes that can be used to build mini supercomputers.

AFAIK, no other public cloud has this type of capability.


Microsoft went into the HPC market aggressively very early as a differentiator to the AWS general computing strategy.

Azure has been focused on HPC much longer and has a head start on AWS. Not sure if they knew it would pay off with AI, but it looks like it’ll work out well for them.


Indeed, it's remarkable how people out here will claim crazy stuff as if they knew the internals of Azure.


"I like Linux and Amazon, hence Azure must be bad. I assume, I've never seen it." is something I've heard commonly.

I have a customer where they use both clouds, and they will spend weeks spinning up Rube Goldberg machines in AWS even though there is a turnkey solution available in Azure.

For example, Elastic File System (EFS) in AWS is only compatible with UNIX-like operating systems, because it is NFS v4. Azure has both NFS and SMB file shares as a service -- the latter is a unique offering. It's not magic, but it does eliminate quite a bit of complexity around managing large, scalable file server clusters for Windows clients if you need that kind of thing.


No. It's all spin. The only uniqueness is the discount and preferential treatment they get. Azure gets a nice callout in return.


It will quickly chain you to Microsoft products, that's the uniqueness.


In fairness that's also true for AWS and GCP


Yep, the discounts.


Related MS announcement with some more details: https://blogs.microsoft.com/blog/2023/01/23/microsoftandopen...


“Microsoft will deploy OpenAI’s models across our consumer and enterprise products and introduce new categories of digital experiences built on OpenAI’s technology. This includes Microsoft’s Azure OpenAI Service [0], which empowers developers to build cutting-edge AI applications through direct access to OpenAI models backed by Azure’s trusted, enterprise-grade capabilities and AI-optimized infrastructure and tools.”

[0] https://azure.microsoft.com/en-us/blog/general-availability-...


This blog post offers no specific detail about the partnership (dollars involved and terms). Are those details public somewhere else?


> We’ve also worked together to build OpenAI’s technology into apps like GitHub Copilot and Microsoft Designer.

Good. I get better results from ChatGPT than from Copilot. For example, Copilot would often invoke a bash command to copy a file in my Python script, instead of using Python's built-in shutil.
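
Concretely, the contrast looks something like this (a toy illustration, not Copilot's literal output):

    import shutil
    import subprocess

    # What Copilot tends to suggest: shelling out to cp (non-portable)
    subprocess.run(["cp", "notes.txt", "notes_backup.txt"])

    # The idiomatic version: Python's built-in shutil
    shutil.copy("notes.txt", "notes_backup.txt")

Both copy the file, but the second works on any OS and raises a proper Python exception on failure.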


Copilot may be one of the most valuable early use cases for chat AI.

I've been brushing off AI as overhyped, but this is very compelling. I believe the real crux of software engineering is thinking about the problem and organizing solutions today which can be changed/improved/iterated in the future. Programmers too often overweight the time it takes to type things (using short variable & function names or creating terse 1 liners). But if our objective is to make code easy to change/improve/iterate in the future, then it needs to be readable now.

The nice autocomplete features in most IDEs have been a huge win to productivity along with Google search. I think chat AI could be an order of magnitude improvement.


Copilot may be amazing, but I am having a hard time adapting. Writing code I'm in flow, and reviewing/debugging I'm in flow, but the constant appearance of suggestions that I need to evaluate for keep / fix / discard while writing is messing with my focus.

This might just be the equivalent of the old guy complaining that typewriters are better than word processors.


Agree. At the least, it could help me read library code and automate some tasks for me, like figuring out how to get things from a library's base class.


I asked ChatGPT to write a poem based on this post. Here's what it wrote:

---

Two hearts, two minds, working side by side

In pursuit of a mission to make AI safe and wide

Microsoft’s investment allows us to stay true

To our capped-profit company, our mission, and you

From cloud computing to real-world use

We collaborate to ensure our systems are not abused

OpenAI API, Azure’s OpenAI Service, and Designer too

We’re working side by side to make our visions come true

Together we review, together we synthesize

Together we build, together we realize

The potential of advanced AI that’s powerful and safe

Ensuring its benefits can be shared far and wide


> Microsoft’s investment allows us to stay true
> To our capped-profit company, our mission, and you

Priceless. Maybe companies should get AI to write PR releases. They say the same thing, except more honestly and funnier.


> So, we’ve partnered with Microsoft to deploy our technology through our API and the Azure OpenAI Service — enabling enterprise and developers to build on top of GPT, DALL·E, and Codex.

I wonder how much choice enterprises or large customers will have in deciding what data not to send to Microsoft to train their models. If OpenAI is tightly integrated into all Microsoft products, this may be a real concern.


author: no one

quotes in article: none

dates / timelines for partnership deliverables: none

was this written by chatgpt?


The “author” appears to be “OpenAI” (i.e., posting on behalf of the company) and they’ve posted like that numerous times before [1].

[1]: https://openai.com/blog/authors/openai


I'll take that as a yes.


I asked a question of chatgpt.

---

Create a PR release in 10 paragraphs about Microsoft extending their partnership with OpenAI, by investing another $3.2 billion.

Paragraph 1: Microsoft is excited to announce that it is extending its partnership with OpenAI by investing an additional $3.2 billion. This partnership has been a key driver of innovation and progress in the field of artificial intelligence, and we are thrilled to continue our work together.

Paragraph 2: The investment will be used to support OpenAI's ongoing research and development efforts, as well as to expand the company's capabilities in areas such as machine learning, natural language processing, and computer vision.

Paragraph 3: With this new investment, Microsoft and OpenAI will continue to push the boundaries of what is possible with AI, driving new breakthroughs and advancements in the field.

Paragraph 4: This extended partnership will also enable Microsoft to bring the benefits of OpenAI's technology to its customers and partners, helping to drive new business opportunities and enable organizations to take advantage of the latest AI advances.

Paragraph 5: In addition, Microsoft and OpenAI will be working together to promote responsible AI practices and ethical guidelines, ensuring that the technology is used in a way that is safe, fair, and transparent.

Paragraph 6: The partnership between Microsoft and OpenAI has already resulted in some significant accomplishments, such as building a language model that can generate human-like text, and creating a new system for natural language understanding.

Paragraph 7: With this new investment, the two companies will be able to take their collaboration to the next level, creating new possibilities for AI applications across various industries.

Paragraph 8: Microsoft CEO Satya Nadella said, "We are excited to continue our partnership with OpenAI to drive innovation and progress in the field of artificial intelligence. This new investment will enable us to bring the benefits of AI to more people and organizations around the world."

Paragraph 9: OpenAI CEO Sam Altman said, "We are thrilled to be continuing our partnership with Microsoft. Their support and investment will allow us to continue to make significant advances in AI research, and bring the benefits of this technology to more people and organizations."

Paragraph 10: The extended partnership between Microsoft and OpenAI is a major step forward for the field of artificial intelligence, and we are excited to see what the future holds. Together, we will continue to push the boundaries of what is possible and drive progress in this rapidly evolving field.


> I asked a question of chatgpt.

> Paragraph 8: Microsoft CEO Satya Nadella said, "We are excited to continue our partnership with OpenAI to drive innovation and progress in the field of artificial intelligence. This new investment will enable us to bring the benefits of AI to more people and organizations around the world."

> Paragraph 9: OpenAI CEO Sam Altman said, "We are thrilled to be continuing our partnership with Microsoft. Their support and investment will allow us to continue to make significant advances in AI research, and bring the benefits of this technology to more people and organizations."

There's something particularly off-putting about machine generated quotes attributed to an actual person. It's completely different than a person generating the content themselves.


In companies of Microsoft's size, the quotes are almost certainly layers away from (in this case) Satya having any input other than final signoff. The communications team will have written the text, including the quotes, and it will likely have gone through the hierarchy for review and approvals.

So genuinely, does it really matter whether the original much-reviewed PR-blah quote is written by a junior comms human, or an AI algorithm?


This is why ChatGPT is so good at corporate language. It takes no understanding because it was already 95% content-free.


It gets even better than that! ChatGPT seems to have absorbed an Internet full of corporate releases...

Prompt: Create a PR full-page release about Microsoft extending their partnership with OpenAI, by investing another $3.2 billion.

FOR IMMEDIATE RELEASE

Microsoft and OpenAI: Partners in Progress

Redmond, WA – Today, Microsoft is proud to announce the extension of its partnership with OpenAI with an additional investment of $3.2 billion. This partnership, which began in 2019, has been a key driver of innovation and progress in the field of artificial intelligence (AI) and Microsoft is excited to continue its work with OpenAI.

The investment will be used to support OpenAI's ongoing research and development efforts and to expand the company's capabilities in areas such as machine learning, natural language processing, and computer vision. With this new investment, Microsoft and OpenAI will continue to push the boundaries of what is possible with AI, driving new breakthroughs and advancements in the field.

The extended partnership will also enable Microsoft to bring the benefits of OpenAI's technology to its customers and partners, helping to drive new business opportunities and enable organizations to take advantage of the latest AI advances. Microsoft and OpenAI will also be working together to promote responsible AI practices and ethical guidelines, ensuring that the technology is used in a way that is safe, fair, and transparent.

The partnership between Microsoft and OpenAI has already resulted in some significant accomplishments, such as building a language model that can generate human-like text and creating a new system for natural language understanding. With this new investment, the two companies will be able to take their collaboration to the next level, creating new possibilities for AI applications across various industries.

"We are excited to continue our partnership with OpenAI to drive innovation and progress in the field of artificial intelligence. This new investment will enable us to bring the benefits of AI to more people and organizations around the world," said Satya Nadella, CEO of Microsoft.

"We are thrilled to be continuing our partnership with Microsoft. Their support and investment will allow us to continue to make significant advances in AI research, and bring the benefits of this technology to more people and organizations," said Sam Altman, CEO of OpenAI.

The extended partnership between Microsoft and OpenAI is a major step forward for the field of artificial intelligence, and the companies are excited to see what the future holds. Together, they will continue to push the boundaries of what is possible and drive progress in this rapidly evolving field.

About Microsoft: Microsoft is a technology company that develops, licenses, and sells computer software, consumer electronics, and personal computers and services. Its best known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers.

About OpenAI: OpenAI is an artificial intelligence research laboratory consisting of the for-profit OpenAI LP and its parent company, the non-profit OpenAI Inc. The company is dedicated to creating and promoting friendly AI in a way that benefits all of humanity.

For more information about the extended partnership between Microsoft and OpenAI, please visit [insert website link].

Contact: [Name, Title] [Company Name] [Phone Number] [Email Address]


This feels like an inflection point. Microsoft obviously sees huge upside, they're forking over a lot of value for this, and they must be expecting huge growth. Maybe Bing will be an amazing search engine in a couple years...


> Azure’s unique architecture design has been crucial in delivering best-in-class performance and scale for our AI training and inference workloads

translation: Microsoft is not Google (who wishes we'd explode).


The WSJ article on this announcement states:

>"OpenAI was in talks this month to sell existing shares in a tender offer that would value the company at around $29 billion, The Wall Street Journal reported, making it one of the most valuable U.S. startups on paper despite generating little revenue."

Can anyone say why they wouldn't have gone that route? Is that valuation possibly based on market conditions from a year ago?

[1] https://archive.is/slxcE


> Azure’s unique architecture design has been crucial in delivering best-in-class performance and scale

Is there something unique about Azure that makes it well suited for AI? (vs AWS or GCP)


They have ten billion dollars


“Learn to code.”

That’s what folks were crassly saying to truck drivers, baristas, and cashiers.

The writing was/is on the wall: AI is only going to get more powerful and able to be applied to more and more complex tasks.

The thought was that “unskilled” labor would be the lowest hanging fruit, that automated AI - with some (but minimal) human oversight - would replace serious chunks of the workforce in various minimum-wage and “blue collar” sectors.

Machines don’t necessarily need to sleep, and they don’t have labor unions or laws that require healthcare or overtime pay. They don’t get upset, take things personally, or seek revenge or reciprocity like a person would.

Sounds like that could be a threat to many kinds of jobs, many of them “bullshit jobs” (in the words of Graeber), but others as well.

It seems (to me, at least) that the more imminent threat is text-based AI - with some human oversight - replacing large swaths of the tech workforce (many of whom were leading the narrative about the truck drivers).

The incentives for companies like Microsoft, Amazon, Google, and others, to go this route are obvious, beyond the stated reasons why “low-skilled” labor is at risk. They already have enormous investments, acquisitions, projects, established platforms, and infrastructure related to AI.

I expect to see more partnerships like the one between Microsoft and OpenAI, from all of the major tech companies.

I also believe the connection between the acquisitions/partnerships and the mass layoffs will become more and more obvious as these acquisitions/partnerships continue to happen.

I can’t be the only person that is noticing this…


Here's the weird thing though. Bullshit Jobs could already be eliminated, yet they are not. Why not? Why would companies wait until AI to eliminate what we already know are Bullshit Jobs?


As tempting as it may be for "management" to imagine a future where requirement docs get translated into code, the actual reality of the operation will always require a few humans to oversee it. But I do expect businesses that claim to do exactly that to appear. They will provide very convincing pitches and make fortunes. However, they will still need to retain humans working quietly in the background. I don't expect "management" to ever realise the added overhead.


It will take 10 years for folks to realize the layoffs/unemployment are due to AI, and 20-30 years for a coherent political movement to show up that lays out a post-AI-labour society.

In the meantime, rocky road.


A lot of very smart people, both within NLP research, and here, are in complete denial about what the proliferation of high quality LLMs means for their jobs and earning potential.

The only thing which makes me less sad is that I'm pretty sure Moravec's paradox is not actually all that real, but is more due to the relative lack of engineering interest put into solving continuous control problems. Apparently reinforcement learning on transformers works now (RLHF in ChatGPT). This implies that we should see highly effective continuous control models very soon. Robots are coming for physical labor, it'll just take a bit longer.

Shit man, when I was last in South Korea, I felt like I was living in the future. They had many "24/7" drink cafes where it's literally just a robot arm that makes the drink for you for a few dollars.

It is painfully ironic to knowledge workers that they are destroying their own earning potential, but physical labor is not safe. Nothing is safe.


Stop this drama and get a gun :) I give 1-2 years for basic programming, up to 5 for physical labour.

IMO this will have positive consequences if AI doesn’t enslave us.


This technology is powerful and disruptive enough to merit social oversight. We get caught up in the wonder of the novel mechanisms and philosophical wanderings, whereas those who have the means to field this technology are effectively creating facts on the ground as we muse about it.

As just one dimension, let's briefly consider education, limited to testing. Options here range from (paid) cooperation between institutions and companies, to marketplaces for detection technology and testing systems, to a complete rethink of testing at the institutional level, to regulatory intervention.

The dynamics and outcomes of considering the above before, versus after, widely available and improving LLMs become a fact of life are possibly very different. We haven't yet crossed the Rubicon here, and not sitting as captive spectators in a theater of the ultra-rich remains an option.


2023 is the year of Bing, calling it now.


That really depends on how much the public accepts results without sources, aka no credit, aka rampant breaking of social norms and copyright, as well as destroying the easy ability to verify something. In many ways, OpenAI and anyone who supports it are trying to pull an Uber here, but shifting the Overton window on something indescribably larger than transportation licenses. They want to Borg global intelligence (though, of course, they will be in control).

Say what you will about Google, they generally credit their sources. Yes, it's part of their advertising model, but it's still a Very Good Thing.

I hope that Google's plan is to release something that continues this model. If it's anywhere near as good as ChatGPT and strongly includes sources, it is the right future.


I think you're assuming that Bing is just going to let GPT blindly answer queries; that's not at all how you build a system like that.

How it actually works is more like:

1. User asks "What's the tallest building in the world?"

2. MS, rightfully, assumes that GPT has no idea what the answer to this is. And even if you trusted it to know, it will always lag behind; new buildings could have been built since its training.

3. MS searches their index for the most relevant document snippets related to this query and feeds it to GPT as context.

4. MS asks GPT to answer the question in the context of those document snippets.

5. MS returns the result from GPT along with references to the documents it sourced the information from.

This is how the OpenAI /search endpoint used to work.
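
As a toy sketch of steps 3-5 (the retrieval function and prompt wording here are stand-ins, not Bing internals):

    import openai

    def search_index(query):
        # Stand-in for the real search index: return relevant snippets
        return ["The Burj Khalifa, at 828 m, is the world's tallest building."]

    def answer_with_sources(query):
        snippets = search_index(query)
        prompt = ("Answer the question using only the documents below.\n"
                  "Documents:\n" + "\n".join(snippets) +
                  "\n\nQuestion: " + query + "\nAnswer:")
        resp = openai.Completion.create(model="text-davinci-003",
                                        prompt=prompt, max_tokens=100)
        # Return the answer together with the documents it was grounded in
        return resp.choices[0].text.strip(), snippets

The model never has to "know" the fact itself; it only reads and summarizes what the index already found, which is also what makes attaching references possible.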


If it does that, and properly highlights the sources, I have nothing to criticize. Though, I think the results won't be as good if it doesn't use its entire breadth (if it does, the problem reappears).


OpenAI already has a model to improve factual accuracy and provide citations:

https://openai.com/blog/webgpt/

It's probably not too hard for them to tune ChatGPT and the upcoming GPT4 that way, and I think it's very likely they will do something like that in Bing.


I think it'd be really nice for there to be an effort within GPT to have responses that don't mimic the bias of information found online, and instead draw their own conclusions based on evidence, with the ability to scrutinize different types of evidence.


>In many ways, OpenAI and anyone who supports it are trying to pull an Uber

Can you elaborate here? (Honestly asking since I'm not seeing the similarity)


My understanding of that phrase is that "pull an Uber" means "break laws and social norms to more quickly deliver a product that beats the status quo". Uber broke laws in some regions that required special taxi licenses, and it broke social norms by blurring the lines between contractor and full-time employees when it took away certain employee freedoms common for contractors while not giving them full-time employment benefits.


I have the feeling Bing is on its way to overtaking Google even without OpenAI; recently Google's results seem to struggle more and more. Whenever I am trying to remember a word, if I type something similar-sounding, Google will give me no results at all. Bing (I use DDG, which serves Bing results) will figure out what I was trying to find most of the time.


I'm a DDG user as well, and I feel it's getting loaded with spam links much like Google. These days my most reliable searches end in watching a YouTube video on what I'm looking for. I would prefer text to video, but it seems that's where we're headed, even though video is unlikely to ever become cheaper than text to produce.


Same here, it pisses me off to see a lot of those spam sites ranked that high on Google. YouTube is a mixed bag too, particularly when you're trying to troubleshoot an issue or learn more about a particular feature of Product X. All you get back from the search are reviews of Product X.


In my experience, if anything, Google tries too hard to guess what I mean, not the opposite.


> This multi-year, multi-billion dollar investment from Microsoft follows their previous investments in 2019 and 2021, and will allow us to continue our independent research and develop AI that is increasingly safe, useful, and powerful.

I laughed at independent research. Why do they even bother..


Google recently said they are recalibrating the level of risk they're willing to take with releasing this technology. OpenAI's CEO responded saying

> "recalibrate" means "increase" obviously.

> disappointing to see this six-week development. openai will continually decrease the level of risk we are comfortable taking with new models as they get more powerful, not the other way around.

How does this resolve?

OpenAI not releasing something isn't going to stop Google or another player from releasing it. Won't they have to recalibrate too or be overtaken? Pandora's box is already open.


I've heard from Google employees that the internal AI they use is better than ChatGPT, but the leadership doesn't want to make it generally available.


Somewhat off topic: does someone know how Bing might integrate ChatGPT into search? Is it to understand the prompt and filter results? To take the question and summarize it to search the index? To summarize all the documents into an index and search that? Or to work like ChatGPT does now and generate new results from its knowledge base? I'm trying to connect the dots between a generative model like these and how it would influence search in the future. Or is the Lucene-style index search on its way out in a generative world?


Easiest way would be to classify the query and route it to either Bing proper or ChatGPT. Example: “What is today’s date?” -> Bing; “Write a rap song about hippos” -> ChatGPT.
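
A minimal sketch of such a router, using the completions API itself as a zero-shot classifier (the prompt wording and labels are made up for illustration):

    import openai

    ROUTER_PROMPT = (
        "Classify the query as SEARCH (needs fresh, factual results) "
        "or CREATIVE (open-ended generation).\n"
        "Query: {q}\nLabel:"
    )

    def route(query):
        resp = openai.Completion.create(model="text-davinci-003",
                                        prompt=ROUTER_PROMPT.format(q=query),
                                        max_tokens=3, temperature=0)
        label = resp.choices[0].text.strip().upper()
        return "bing" if label.startswith("SEARCH") else "chatgpt"

    route("What is today's date?")            # -> "bing"
    route("Write a rap song about hippos")    # -> "chatgpt"

In practice you'd use a small dedicated classifier for latency and cost, but the routing idea is the same.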


'Extend'

Even capitalized and straight from the horse's mouth, but as unsurprisingly predicted in [0]:

> * OpenAI will gain further investment...

We already know about ChatGPT Pro, since that is a paid version of ChatGPT coming soon, meaning that the second prediction in [0] has also turned out to be true:

> ...ChatGPT by then will become a paid service

Now we wait until the startups depending on the service start to raise their prices.

[0] https://news.ycombinator.com/item?id=34201706


It's been over 25 years. It's really time to let this one go.


Yes. 25 years, and Microsoft has gotten much smarter with EEE.

Making the best developer tools free, or at least close to free, is another form of extinguish, and they have just done that with OpenAI and GitHub.


> Making the best developer tools for free or at least close to free

The bastards! Making and releasing high quality developer tools. We have to stop them! Has MS actually EEE'd anything in the last 25 years? I legit can't think of a single formerly-open thing that's been Microsofted via their own extensions.


The methods are different but the strategy is the same. They have used this strategy with Windows Subsystem for Linux (WSL), VSCode + extensions, Winget, Microsoft Teams, GitHub, and now OpenAI.

Microsoft can afford to maintain all these offerings for free whilst competitors suffocate and struggle to compete with their paid offerings.

You can't compete with free, especially when there is a company who can afford to run and host the best tools for free for a long time.


You are still missing the point.

A free, quality product is great. We all benefit from that.

The concern is the last "extinguish" part at the end.

But there is no reason for Microsoft to shut down GitHub, or VSCode or whatever.

Consumers just get to have a great free product.


> You are still missing the point.

You have missed the big picture.

> The concern is the last "extinguish" part at the end.

The methods are different but the strategy is the same. The 'Embrace' and 'Extend' parts are openly visible. The 'Extinguish' part, however, is less obvious:

'Extinguish' in this case refers to competitors being unable to compete with 'free' (as in beer) or close-to-free products that offer more and are 100 times better than their competitors' paid offerings.

> But there is no reason for Microsoft to shut down GitHub, or VSCode or whatever.

Not my point. The competitors / alternatives are the ones that will shut down, losing out to the Microsoft-branded ones.

There was a reason why Atom was shut down in favor of VSCode. The same goes for why the creator of AppGet could not compete with Microsoft's winget, after which AppGet shut itself down.

The same will happen with OpenAI's exclusive deal with Microsoft. With the majority of users using OpenAI's API over the competitors' and alternatives' paid offerings, those will end up shutting down.

> Consumers just get to have a great free product.

Exactly. Microsoft was clever to target developers as the new source of customers. Windows already won the desktop.


Nothing is free. If it's free, you're the product


Microsoft already knows that is a given in all their products.


Nothing in tech stays true for a quarter of a century. Especially not when the guy who said this left the company twenty years ago. It's time to put this one to rest.


Nah, tribalism and talking smack about the other tribe is part of human nature


Microsoft being late to the party is going to cost them a lot of money.

Google and Facebook are the clear leaders in AI, and they've been spending a lot of money building out their AI infrastructure over the last decade. You haven't heard much about Microsoft's investments other than them buying rights to OpenAI models. It seems like they could have saved a lot of money if they had started investing years ago.

Google bought DeepMind for $400M eight years ago. You could argue DeepMind is on the same level as or above OpenAI.


I mean, are they late to all the parties though? I think ChatGPT has the momentum to corner the end-user market. It's one of the best tools out there that can help writers, developers and any profession that uses text communication.

Google and Facebook might be better at some things, and maybe Apple too. But none of them has brought to market a chatbot that works pretty well (and not just as a party trick) and is very accessible to anyone on the Internet. I would love it if Apple opened up Siri so that I could communicate with it on a site just like ChatGPT.


$400M then, which is worth more now, and it also didn't run for free for the last eight years.

Google and Facebook may have good research teams, but so far they don't seem to be able to use them to ship products (with some exceptions, mostly incremental improvements to the quality of existing products), and that's what will count.


Related (?): Is ChatGPT down at the moment?

When I visit it, I am getting a poem about how it is over capacity. And that's it.


Is ChatGPT down at the moment?

No, but ChatGPT is at full capacity, which means it will 'randomly' block new connections until load drops. Keep trying every 5-10 minutes and you'll eventually get in.


You can also pay for the new premium subscription for $42/month


Can’t seem to do that while the thing is down… wonder why they didn’t choose to separate the payment UX from the actual product.


Frustrated by the same question. The service doesn't need to be up to offer a "buy pro and get in now" option. Seems like they're missing out on ready conversions.


It seems to be constantly that way nowadays. I think they might give priority to paying API users, and of course by now there has been enough time for applications to implement ChatGPT integrations.


The post states:

>"Azure’s unique architecture design has been crucial in delivering best-in-class performance and scale for our AI training and inference workloads."

Can someone say what is unique about the Azure cloud architecture here vs other cloud vendors?


GPT-3 can generate some good press releases.

I presume that with the integration of the OpenAI services into Bing and MS Office, Microsoft's mission (empower every person and every organization on the planet to achieve more) makes a lot of sense.


Your data belongs to us


With OpenAI + Office 365 search, Bing is actually going to be sick. I already like the layout better than Google.


Model Governance?

Correct me if I'm wrong, but it seems like some type of 3rd party Model Governance should be happening. Is it?


Right after firing 11,000 employees...


Is it public knowledge yet how much is the new "multibillion" investment?


They are running at capacity now. Can they scale fast enough?


They are running at capacity for free users. They will be able to afford much higher capacity with MS money, and probably stop providing the service for free soon.


API requests for the text-davinci-003 model, released around the same time as ChatGPT, also frequently fail with 503 errors.
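
If you're hitting those, the usual workaround is retrying with exponential backoff. A minimal sketch (the exception class is from the pre-1.0 Python client):

    import time
    import openai
    from openai.error import ServiceUnavailableError  # raised on 503s

    def complete_with_retry(prompt, retries=5):
        for attempt in range(retries):
            try:
                return openai.Completion.create(model="text-davinci-003",
                                                prompt=prompt, max_tokens=256)
            except ServiceUnavailableError:
                time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
        raise RuntimeError("API still unavailable after retries")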


One request costs about 1 cent to process? Is that what Altman is saying?


They have as many free users as they are willing to pay for to present a free demo; rest assured that the fee for paid users is more than sufficient to pay for whatever capacity they need.


They aren't running out of capacity; they just can't afford $100k+ a day in compute. The number of users who will pay will be much smaller, so I doubt they will have problems.


They create the need by building limitations into the free tier.


In tech history, one party always ends up screwed in such a partnership. The question is: which one?


Microsoft is.



