OpenAI's plans according to sama (humanloop.com)
313 points by razcle on May 31, 2023 | 258 comments



> He reiterated his belief in the importance of open source and said that OpenAI was considering open-sourcing GPT-3. Part of the reason they hadn’t open-sourced yet was that he was skeptical of how many individuals and companies would have the capability to host and serve large LLMs.

Am I reading this right? "We're not open sourcing GPT-3 because we don't think it would be useful to anyone else"


When you stop listening to what Sam Altman says and just focus on what he does, you can see the guy is a bit of a snake. Greedy, power-hungry man imho.


I’ve tried very hard to like him because, like it or not, ChatGPT has revolutionized the AI industry, but he’s so hypocritical I just can’t stand him.


Pretty common for dislikeable people to be the most successful at an endeavor. It's not a coincidence.


I know it doesn't seem related on the surface, but I've found all startup CEOs who ban remote work to be snake-like and dishonest. Interesting coincidence at least.


Might it be that because their own morals and ethics are so fluid, they assume everyone else is the same and thus has to be surveilled to be sure to have "performance"? Just a thought....


So let me make a guess: you like remote work?


I can't judge the individual, but his words do not align with the company's actions in the slightest.

>> 4. OpenAI will avoid competing with their customers — other than with ChatGPT

On this I would not bet a dime.


He didn't even pinky promise.


In your world, how would you consider Sam Altman having no equity in OpenAI? And everyone finding out only after it had a viral hit?



There is no way this guy hasn't figured out some way to get paid out of this.

We just haven't figured out how yet.


Or maybe he just doesn't need more money and wants to do something cool? If my basic needs were met, I'd go volunteer at any number of non-profits full time.


I mean, Occam's Razor says if he wanted a way to get paid, he would just have equity or a very large salary, the normal things CEOs do when they want to make lots of money from their very profitable company.


He gained a huge amount of reputation by saying "I don't have equity in the company. I'm doing it because I love it" in front of Congress. While he was demanding licensing for AI because it's too dangerous.

I love it. It's dangerous. Hypocrisy? No. It's the tendency of man to pursue what is harmful to himself. Even though optimistic technologists of HN won't agree, much of technology has not benefitted humanity.


Wonder if it's indirectly through some stake in Microsoft or whoever deals with them at that level.


That just sounds ridiculous. Teams would have been all over the legals for a $10bn investment. Not everyone is financially incentivised.

Also he's already halfway to being a billionaire anyway without OpenAI.


[flagged]


Dude, he absolutely loves to talk about incentives.

Now that’s neither here nor there as a response to yours… but since he is so in tune with the concept of incentives, and likes to discuss them, it does make me curious what incentives he is acting on.


Sam Altman is responsible for leading the team that has revolutionised AI's position within society.

There is plenty to criticise OpenAI for but what he and they have achieved is extraordinary, and there is no need for that sort of toxic personal attack.


I can’t comment on his personality because I don’t know him. But it’s delusional to think the leadership’s personal qualities are irrelevant to developing AI. There’s going to be a lot of subjectivity in fine-tuning the answers.


It’s fine to feel the ends justify the means but not everyone believes that.


Success invites haters.


Sam is responsible for marketing the team which popularized a certain kind of AI product. No need for personal attack.


Most of the companies I work with are actively putting in place policies to prevent employees from using OpenAI's service because nobody wants to send their proprietary IP to them.

Almost all of these companies have the technical ability, desire, and means to self-host for their employee community. Imagine the internal coup for CTO/CIOs everywhere to buy whatever is the latest Nvidia GPU cluster box, stick it in the on-prem datacenter, load a licensed GPT model and provide "AI as a service to our employees".

Except what's happening is everybody is looking at buying the box from Nvidia, and sticking a large actually open model on it and simply ignoring OpenAI.


> Almost all of these companies have the technical ability, desire, and means to self-host for their employee community.

Well, one of the companies I worked for could have hosted a canary service for cron jobs. But we bought it instead of building it because we were focused on building features. And here you’re talking about hosting an entire LLM.


The cron job canary was probably not a service that employees were uploading tonnes of company confidential material into, was it? So I fail to see how the comparison makes sense.

The reason companies shun OpenAI and want a self-hosted alternative isn't related to costs; it's because they don't want their code, internal emails, documentation etc. to be uploaded to Microsoft and thus also directly to the NSA.


> The cron job canary was probably not a service that employees were uploading tonnes of company confidential material into, was it?

But they do to Slack and MS Teams. Also to mail services and other places.


OpenAI: Regulations must be passed to protect our moat

Also OpenAI: Meta is pissing in our moat, let's drop a hint about open sourcing our shit too!


I think I worded this poorly. What he said was that a lot of people say they want open-source models but they underestimate how hard it is to serve them well. So he wondered how much real benefit would come from open-sourcing them.

I think this is reasonable. Giving researchers access is great, but most small companies are likely better off having a service provider manage inference for them rather than navigating the infra challenge.


The beauty of open source is that the community will either figure out how to make it easier, or collectively decide it’s not worth the effort. We saw this with stable diffusion, and we are seeing it with all the existing OSS LLMs.

“It’s too hard, trust us” doesn’t really make sense in that context. If it is indeed too hard for small orgs to self host then they won’t. Hiding behind the guise of protecting these people by not open sourcing it seems a bit disingenuous.


Here is how hard it is to serve and use LLMs: https://github.com/ggerganov/llama.cpp


“The original implementation of llama.cpp was hacked in an evening.”


You're saying the same thing.

"I'm not sharing my chocolate with you because you probably wouldn't like it"


If it goes the same way as other open-sourced models, it will take about 5 days for someone to get it running on an M1.


If he says he's inclined to open-source GPT-3, I don't see any good arguments against giving startups the choice of how they run inference.


More like – it won't be useful to small-time developers (since they won't have the capability to host and run it themselves) and so all the benefits will be reaped by AWS and other large players.


This is what I understood as well. They want to either democratize adoption or not release it. The last thing they/anyone wants is for another BigCo or Govt to take undue advantage of the model (through fine-tuning?) when others cannot.

That said, I can imagine a GPTQ/4-bit quantized model being smaller and easier to run on somewhat commodity clusters?

Or it could run with GGML/llama.cpp on a cloud instance with a TB of RAM?

After seeing what people were able to do with LLaMA, I am positive that the community will find a way to run it - albeit with some loss in performance.

It would be truly amazing if they used their computing to develop quantized models as well.


A big chunk of developments based on Facebook’s LLaMA model are by small-time developers and individuals, not large players. Facebook has already shown a viable way to release models in the way you described.


If you really need to, a 170B-parameter model can infer a few tokens per minute on commodity hardware.


It is weird, but GPT-3 is worse than much smaller LLaMA models so I doubt it would see much use anyway.


How do you measure this? Pointers to papers would be very helpful


The LLaMA paper had a bunch of comparisons


Aren't the LLaMA weights leaked though? Did Facebook ever open up its license?


Doesn’t matter if you only use it yourself. No one will know.


Are you referring to DaVinci or ChatGPT-3.5?


DaVinci


It is a shame that Sama does not believe in Open Source. The community can solve their GPU bottleneck issue by making it run on CPUs and edge devices in a matter of days.


If small organizations and teams can’t use it, then open sourcing it mostly just benefits big tech

That’s not ideal

How does open source licensing work with respect to trained AI models anyway? Is something like the MIT license even that valuable here?


If the only barrier to a small team/org using it is cost/effort of hosting (as opposed to some licensing shenanigans), I fail to see how not releasing it is better for the world than releasing it would be. Even if it benefits big tech more than a small team.

Am I somehow being protected by a benevolent sama not open-sourcing the model?


I agree, this is so bizarre


yes i also can't wrap my head around how a ceo of a billion dollar company isn't sincere in his public statements


Really? Even after saying this? "While Sam is calling for regulation of future models, he didn’t think existing models were dangerous and thought it would be a big mistake to regulate or ban them."


Why couldn’t that be true? E.g. even scientists who worked on the Manhattan Project (justifiably) had antipathy toward the much more powerful hydrogen bomb.

It’s possible to think squirt guns shouldn’t be regulated but AR-15s should, or AR-15s shouldn’t but cruise missiles should. Or driving at 25mph should be allowed but driving 125mph shouldn’t.


It was a tongue in cheek reaction.


It's just a way to lie that doesn't sound as much like a lie.


lmao i had the same reaction. sounds like some bullshit.


Reads to me like "we don't know how many people will have hardware powerful enough to run this".


Exactly. If you make it open source, great, cool, but only well-funded entities - like massive corporations - can even afford the hardware costs.


Eh, better than nobody.


He wants the release of the model to primarily benefit individuals and smaller teams as opposed to large deep-pocketed firms.


And he'll do that by... keeping ChatGPT models away from individuals and small teams and in the hands of a few large deep-pocketed firms?

The great thing about open source is that people can try different approaches and gravitate towards what works best for them. Sam knows that of course, he's just being disingenuous because the truth makes him look bad.


How can you sign a statement that AI presents an extinction risk on par with nuclear weapons and then even consider open sourcing your research?

We don't provide nuclear weapons for everyone to keep in their basement, why would someone who believes AI is an existential risk provide their code?


> Cheaper and faster GPT-4 — This is their top priority. In general, OpenAI’s aim is to drive “the cost of intelligence” down as far as possible and so they will work hard to continue to reduce the cost of the APIs over time.

this certainly aligns with the massive (albeit subjective and anecdotal) degradation in quality i've experienced with ChatGPT GPT-4 over the past few weeks.

hopefully a superior (higher quality) alternative surfaces before it's unusable. i'm not considering continuing my subscription at this rate.


Anthropic's Claude is said to be very good.

Instruction tuned LLaMA 65B/Falcon 40B are good, especially with an embeddings database.

...But OpenAI has all the name recognition and ease of use now, so it might not even matter if others ambiguously surpass OpenAI models.


The problem with Claude is that it is quite literally impossible to get off the waiting list to use it. To OpenAI’s credit they actually ship the product in an accessible way to developers.


Apart from poe.com there is also nat.dev. It even supports Claude-100K. Just pay $5 and it will bill by API pricing, proportional to number of tokens.


Poe.com. Takes 1 minute to sign up and then you can use it for 7 days for free. Pretty sweet deal. Not affiliated.


Imo the least interesting use of LLMs is stuff like Chatbots. API access is a prerequisite to do 99% of the interesting things that they can do.


I agree - it's not that I 'can't' access Claude, it's that they're not really shipping the API at the same scale that OAI is.


I just checked out poe.com. Seems you can only buy a subscription if you own Apple hardware (first time I've ever heard that).

It's $20 a month and comes with 300 GPT-4 messages and 1000 Claude 1.2 messages.

By comparison, ChatGPT Plus gives you up to 6000 GPT-4 messages a month for the same price (admittedly it would be hard to use that many as they are given in 3 hour blocks).


Can you ELI5 why an embeddings database helps here? Can pinecone/milvus be used to 'extend memory' of OSS and vendor LLMs without retraining?


First some context: LLM "prompts" are actually the whole conversation + initial context. They learn nothing, hence the whole conversation gets fed into them every time, but the instruction-following ones are trained to answer your most recent chat message.

In a nutshell, part of your LLM prompt (usually your most recent question?) gets fed as a query to the embedding/vector database. It retrieves the most "similar" entries to your question (which is what an embedding database does), and that information is pasted into the context of the LLM. It's kinda like pasting the first entry from a local Google search into the beginning of your question as "background."

Some implementations insert your old conversations (that are too big to fit into the LLM's context window) into the database as they are pushed out.

This is what I have seen, anyway. Maybe some other implementations do things better.
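To make the flow concrete, here's a rough, self-contained sketch of it. The embed() function below is a toy stand-in (a character-frequency vector); a real setup would use an actual embedding model.

    import numpy as np

    def embed(text):
        # Toy stand-in embedding: character-frequency vector, unit-normalized.
        v = np.zeros(256)
        for ch in text.lower():
            v[ord(ch) % 256] += 1
        norm = np.linalg.norm(v)
        return v / norm if norm else v

    documents = ["deploy runbook: use blue/green releases",
                 "Q3 roadmap notes",
                 "old chat turns that no longer fit in the context window"]
    doc_vectors = [embed(d) for d in documents]

    def retrieve(question, k=1):
        # Cosine similarity reduces to a dot product since vectors are unit-norm.
        q = embed(question)
        sims = [float(np.dot(q, v)) for v in doc_vectors]
        top = np.argsort(sims)[-k:]
        return [documents[i] for i in top]

    question = "How do we deploy?"
    background = "\n".join(retrieve(question))
    # The retrieved text gets pasted into the LLM prompt as "background":
    prompt = f"Background:\n{background}\n\nUser: {question}"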


> part of your LLM prompt (usually your most recent question?) gets fed as a query to the embedding/vector database

How is it embedded? Using a separate embedding model, like BERT or something? Or do you use the LLM itself somehow? Also, how do you create content for the vector database keys themselves? Also just some arbitrary off-the-shelf embedding? Or do you train it as part of training the LLM?


Yeah, it's completely separate. The LLM just gets some extra text in the prompt, that is all. The text you want to insert is "encoded" into the database, which is not particularly compute-intensive. You can read about one such implementation here: https://github.com/chroma-core/chroma
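For what it's worth, a minimal sketch of how such a library is typically used (going by my understanding of chroma's Python API as of mid-2023; exact details may differ):

    import chromadb

    client = chromadb.Client()
    collection = client.create_collection(name="docs")

    # "Encoding" into the database: chroma embeds the documents with a default
    # local embedding model under the hood.
    collection.add(
        documents=["The cron canary pings a URL after each successful run.",
                   "Deploys go out via blue/green releases."],
        ids=["doc1", "doc2"],
    )

    # At question time, retrieve the most similar snippet and paste it into the
    # LLM prompt as background context.
    results = collection.query(query_texts=["How do we deploy?"], n_results=1)
    context = results["documents"][0][0]
    prompt = f"Background:\n{context}\n\nQuestion: How do we deploy?"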


One thing I don't understand is how feeding the entire conversation back as a prefix for every prompt doesn't waste the entire 4K-token context almost immediately. I'd swear that a given ChatGPT window is stateful, somehow, just for that reason alone... but everything I've read suggests that it's not.


Have you tried something like Memory Transformers https://arxiv.org/abs/2006.11527 where you move the k/v pairs that don't fit in the context window to a vector db? Seems like a more general approach, but I haven't tested them against each other.


Any database can be used to extend the memory of LLMs. What a database does is store stuff and let you search/retrieve stuff. Embeddings are a different form of data that is in many (but not all) cases superior to searching through text.

You do not need a fancy cloud-hosted service to use an embeddings database, just as you do not need one to use a regular database (although you could).

Check https://github.com/kagisearch/vectordb for a simple implementation of a vector search database that uses local, on-premise open source tools and lets you use an embeddings database in 3 lines of code.


I don't have access to GPT-4, but Claude is competing with GPT-3.5 (ChatGPT) and Bing AI (whatever they use).


Has anyone compared the ChatGPT GPT-4 performance in the Plus subscription to that of the API? Has the API performance deteriorated just as much? It would be strange if it did, as I'd assume the model's costs are priced in there.


People state this as if it is fact when there is no good way to measure this.

I have had random runs of good days and bad days since starting to use ChatGPT.


Exactly. It’s become ‘cheap’. This is why we need more good competition.


I wonder if it actually is because they're tuning it to make it less offensive (by their standards). That's the only explanation I keep seeing repeated.


I would be very surprised. Things that are very, very far from that are also much worse. I'm having difficulty finding the difference between GPT-3.5 and GPT-4 for a lot of my programming tasks lately. It's noticeably degraded.


That's a convenient explanation that's been repeated over and over by certain people, but cost is a much more likely explanation: inference for large models is extraordinarily expensive when you have millions of users and their pricing model always seemed way too low to pay for that.

They have likely been subsidizing their users since the launch of their commercial offering (and this is pretty common strategy for SV startups) but they've been so successful that they now need to scale the cost down in order not to burn all their cash too fast.


Should "intelligence" have ever costed anything?

It's like saying "air should cost money".


This just in: smart people should work for free.


I was not talking about human intelligence.


So what were you talking about? Non-human intelligence?


> is limited by GPU availability.

Which is all the more curious, considering OpenAI said this only in January:

> Azure will remain the exclusive cloud provider for all OpenAI workloads across our research, API and products [1]

So... OpenAI is severely GPU constrained, it is hampering their ability to execute, onboard customers to existing products, and launch products. Yet they signed an agreement not to just go rent a bunch of GPUs from AWS???

Did someone screw up by not putting a clause in that contract saying "exclusive cloud provider, unless you cannot fulfil our requests"?

[1]: https://openai.com/blog/openai-and-microsoft-extend-partners...


There's an interesting recent video here from Microsoft discussing Azure. The format is a bit cheesy, but lots of interesting information nonetheless.

https://www.youtube.com/watch?v=Rk3nTUfRZmo&t=5s "What runs ChatGPT? Inside Microsoft's AI supercomputer"

The relevance here is that Azure appears to be very well designed to handle the hardware failures that will inevitably happen during a training run taking weeks or months and using many thousands of GPUs... There's a lot more involved than just renting a bunch of Amazon GPUs, and anyways the partnership between OpenAI and Microsoft appears quite strategic, and can handle some build-out delays, especially if they are not Microsoft's fault.


That is only relevant for training and not for inference, unless the model is too big to fit on a single host (typically 8 GPUs).


One of Azure's unique offerings is very large HPC clusters with GPUs. You can deploy ~1,000 node scale sets with very high speed networking. AWS has many single-server GPU offerings, but nothing quite like what Azure has.

Don't assume Microsoft is bad at everything and that AWS is automatically superior at all product categories...


Whether MS is good or not isn't really the point. If they're constrained by GPU availability, being locked in to any specific provider is going to be a problem.


Large scale sets are only needed for training. For inference, 8x NVIDIA A100 80GB will allow inference for 300B models (GPT-3 is 175B) or 1,200B models with 4-bit quantization (quantization impact is negligible for large models), so a single machine is sufficient.
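Back-of-the-envelope, weights-only arithmetic behind that claim (KV cache and activations add more, so treat these as lower bounds):

    params_300b, params_1200b = 300e9, 1200e9
    bytes_fp16, bytes_4bit = 2, 0.5
    gpu_mem = 8 * 80e9                       # 8x A100 80GB on one host

    print(params_300b * bytes_fp16 / 1e9)    # 600 GB for a 300B model in fp16
    print(params_1200b * bytes_4bit / 1e9)   # 600 GB for a 1.2T model at 4-bit
    print(gpu_mem / 1e9)                     # 640 GB available on the machine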


>So... OpenAI is severely GPU constrained, it is hampering their ability to execute, onboard customers to existing products and launch products. Yet they signed an agreement not to just go rent a bunch of GPU's from AWS???

> Did someone screw up by not putting a clause in that contract saying "exclusive cloud provider, unless you cannot fulfil our requests"?

Maybe MSFT refused to sign such an agreement?


Perhaps they are cash flow constrained, which in turn means they are GPU constrained, since GPUs are their biggest expense?


I don't think Amazon offers what Azure does (yet) in terms of HPC or multi-GPU capacity. The blog post doesn't say how long the agreement is for, but the relationship probably makes sense at the moment.

All the cloud providers are building out this type of capacity right now. It's already having a big impact in terms of quarterly spend, which we just saw in the NVDA Q1 results. AWS, Azure, and GCP for sure, but also smaller players like Dell and HPE and even NVidia themselves are trying to get into this market. (Disclaimer: I work at one of these places but don't feel like saying which). I suspect the GPU constraints won't be around too long, at which point we'll find out if OpenAI made a contractual mistake.


AWS might not really have much extra GPU capacity for them anyway... also, they would cost more.

I think that there aren't a lot of GPUs available and it takes time to add more to the datacenter even when you do get them.


I heard earlier this year that people were having trouble getting allocations on GCP as well. Probably why Nvidia is at $1T now.


Let’s not forget that Microsoft is a big investor in OpenAI. It is important to know on which side your bread is buttered.


Even if they weren’t exclusive with Azure, aren’t GPU prices reasonable again?


They have to be available to buy, regardless of the price. My understanding is there is a distinct lack of supply.


Barring a revolution in chip manufacture, there likely will always be a lack of supply relative to consumer GPUs. The size of the die results in terrible yields.


this has nothing to do with sama clamoring for regulation.

that absolutely isn’t an attempt to slow down all competition.

which isn’t necessary because nobody made such a mistake.

this won’t lead to any hasty or reckless internal decisions in a feckless effort to stay in front.

not that any have already been made.

not that that could lead to disaster.


>The fact that scaling continues to work has significant implications for the timelines of AGI development. The scaling hypothesis is the idea that we may have most of the pieces in place needed to build AGI and that most of the remaining work will be taking existing methods and scaling them up to larger models and bigger datasets. If the era of scaling was over then we should probably expect AGI to be much further away. The fact the scaling laws continue to hold is strongly suggestive of shorter timelines.

If you understand the shape of the power law scaling curves, shouldn't this scaling hypothesis tell you that AGI is not close, at least via a path of simply scaling up GPT-4? For example, the GPT-4 paper reports a 67% pass-rate on the HumanEval benchmark. In Figure 2, they show a power-law improvement on a medium-difficulty subset as a function of total compute. How many powers of ten are we going to increase GPT-4 compute by just to be able to solve some relatively simple programming problems?
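As a rough illustration of how punishing these curves are (made-up exponent, not the paper's actual fit): if benchmark error falls as a power law in compute, error(C) = a * C^(-b), then halving the error takes a 2^(1/b) increase in compute.

    a, b = 1.0, 0.1            # hypothetical constants, for illustration only
    c_now = 1.0                # normalize current compute to 1
    err_now = a * c_now ** -b

    # Compute multiplier needed to halve the error: err_now / 2 = a * C**-b
    needed = 2 ** (1 / b)
    print(f"Halving the error takes ~{needed:.0f}x compute")  # ~1024x when b = 0.1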


I always enjoy reading some of your comments; they temper the hype about LLMs and give a critical review. Anyway, I think a stronger model than GPT-4 could improve the way it uses tools, so that the model is able to self-improve using tools. For example, using all kinds of solvers and heuristics to guide the model. I don't know how to estimate that risk just now.

Edited: I don't know if it is a good thing to study the weak points of closed LLMs. Even asking LLMs can give hints about possible ways to improve them. In my case I am happy; I am certainly old and my mind is a lot weaker than before, but even so I prefer not to use LLMs for gaining insight, because they will someday have better insight than I do. But the lust for knowledge is a mortal sin.


Someone did that calculation and the result is here: https://www.reddit.com/r/slatestarcodex/comments/13u40yf/

100x GPT-4's compute to get to 85%.


And, if I'm reading their calculation right, that's 85% on the medium-difficulty bucket, not even the entire HumanEval benchmark?

(quoting from the GPT-4 paper):

>All but the 15 hardest HumanEval problems were split into 6 difficulty buckets based on the performance of smaller models. The results on the 3rd easiest bucket are shown in Figure 2


That does seem to support the idea that we're two or three major breakthroughs away from superintelligent AGI, assuming these scaling curves keep holding as they have.


I never know if I have an inside scoop or an outside scoop. Has Hyena not addressed the scaling of context length [1]? I know this version is barely a month old, but it was shared with me by a non-engineer the week it came out. Still, giving interviews where the person takes away that the main limitation is context length and requires a big breakthrough that already happened makes me seriously question whether or not he is qualified to speak on behalf of OpenAI. Maybe he and OpenAI are far beyond this paper and know it does not work, but surely it should be addressed?

[1] - https://arxiv.org/pdf/2302.10866.pdf


As someone who is in the field: papers proposing to solve the context length problem come out every month. Almost none of the solutions stick or work as well as a dense or mostly dense model.

You'll know the problem is solved when model after model consistently uses a method. Until then (and especially if you're not in the field as a researcher), assume that every paper claiming to tackle context length is simply a nice proposal.


What about Meta’s MegaByte? Also a nice proposal?


Yes. Solving context length has been tried with hundreds of different approaches, and yet most LLMs are almost identical to the original one from 2017.

Just to name a few families of approaches: sparse attention, hierarchical attention, global-local attention, sliding-window attention, locality-sensitive-hashing attention, state space models, EMA-gated attention.


I assume there is a common point of failure?

Notably, human working memory isn't great either. Which raises the question (if the comparison is valid) of whether that limitation might be fundamental.


The failure mode is that only long-context tasks benefit; short ones work fast enough with full attention, and with better results. It's amazing that OpenAI never used them in any serious LLM even though training costs are huge.


> OpenAI will avoid competing with their customers — other than with ChatGPT. Quite a few developers said they were nervous about building with the OpenAI APIs when OpenAI might end up releasing products that are competitive to them. Sam said that OpenAI would not release more products beyond ChatGPT. He said there was a history of great platform companies having a killer app and that ChatGPT would allow them to make the APIs better by being customers of their own product. The vision for ChatGPT is to be a super smart assistant for work but there will be a lot of other GPT use-cases that OpenAI won’t touch.

Can anyone elaborate on this? This is a big issue for me.


Is this guy Aes Sedai?

Technically he can claim that OpenAI will not release competing products while Microsoft plugs AI into everything.

Microsoft just announced at Build 2023 that they'll have OpenAI tech integrated with: Windows, Bing, Outlook, Word, Teams, Visual Studio, Visual Studio Code, Microsoft Fabric, Dynamics, GitHub, Azure DevOps, and Logic Apps. I probably missed a bunch.

Very soon now, everything Microsoft sells will have OpenAI integration.

Unless you're selling a niche product too small for Microsoft to bother with, you're competing directly against OpenAI.

Oh, and to top it off: Microsoft can use GPT-4 all they want, via API access. Third parties have to beg and plead to get rate-limited access. That access can be withdrawn at any time if you're doing something unsafe to OpenAI's profit margins.

"Please Sir Sam, may I have some GPT please?"

"No."


> Is this guy Aes Sedai?

Haha having just finished the Wheel of Time, I'm super tickled by this reference.

It doesn't seem to be too common, only two uses of it on HN in the past year (at least, found by searching for the phrase "Aes Sedai")


It's the only such reference I could think of that others might recognise.

I'm reading the web serial Pact right now, where the main character can lie but it costs him dearly each time: https://pactwebserial.wordpress.com/

I also really enjoyed the ending of the Confederation Trilogy by Peter F Hamilton, which revolved around the inability of the Tyrathca race to tell lies.

PS: Just for laughs, I used to practice talking like an Aes Sedai, never telling an outright lie while actively deceiving people. It's an interesting skill to acquire and surprisingly easy. Once you learn how to do it, you'll never see a press conference or a political speech the same way ever again.


I think we have very similar tastes. I haven't read Pact yet, but I'm a huge fan of Worm (I consider it one of my top 5 favorite works of fiction).

I also haven't read Confederation, but I have read a different work by Hamilton (The Commonwealth Saga). I actually thought it was a bit too long, but it's one of the series I think about most often (so many great ideas and characters in it).


I think the tricky part for me is that "work" is extremely broad and now that ChatGPT has plugins, it can kind of do anything. Heh.


Title needs an update as Sama is also the name of the company which helped classify training data for ChatGPT: https://time.com/6247678/openai-chatgpt-kenya-workers/


I agree. Not using the actual name is also gatekeeping for anyone not familiar enough. The fact that “sama” isn’t even capitalised adds to this.


Never mind the fact that it’s against HN guidelines to modify original titles for no reason. Changing Sam Altman to sama is just ridiculous.


His statements on open sourcing in this interview/write-up are somewhat in conflict with his recent statement made last week in Munich, where he explicitly said the frontier of GPT won't be open sourced due to what they perceive as safety reasons: https://youtu.be/uaQZIK9gvNo?t=1170 (19:30 - 22:00).


He was talking about open sourcing GPT-3. That is not the frontier.

The frontier is the multimodal versions of GPT-4, which he just said aren't even going to be publicly released until next year. Or whatever they are on now, which they are carefully not calling GPT-5.


I don't see the conflict. They see current models as mostly harmless, but what comes next is dangerous.

It sounds a little too sci-fi for me, but I guess he knows better.


plus this conveniently pairs with "we don't need to regulate current models, but future models... oh boy do those need to be regulated!"


It's legal to make contradictory statements; that's one of the jobs of a CEO, and it's why they aren't usually overly literal types. You know the kind I'm talking about.


I’m hoping GPT will remove the information cutoff date. I write plenty of Terraform/AWS and it’s a bit of a pain that the latest API isn’t accessible by GPT yet.

There’s been quite a bit happening in the programming space since Sept 2021.

I use GPT to keep things high level and then do my normal research methodology for implementation details.


It's not an arbitrary imposition; that's the data it was trained on, and it's expensive to train. I hope they find a way to continually train in new information too, but it's not like they can just remove the cutoff date.


Not disagreeing, but a fascinating thing they did (as a one-off fine-tune?) was teach ChatGPT about the openai python client library, including the features that were added after the cutoff date.


I enjoy using GPT-4 as a co-programmer, and funnily enough it is very challenging to get advice on Microsoft's own .NET MAUI because that framework was in prerelease at the time the model was trained.

My understanding is that right now they essentially need to train a new model on an updated corpus to fix this, but maybe some other techniques could be devised... or they'll train something more up to date.


You might actually get pretty far if you just went through the Microsoft docs and created a bunch of really concise examples and fed that as the start of the prompt. Use like 6-7kb for that and then the question at the end.


I have had some luck doing exactly that, and not even as efficiently as you describe: if my question is limited enough that the discussion won't overwhelm the context window, I've found I can just paste in big chunks of the docs wholesale, like a 'zero shot.'



Injecting the context yourself can help a lot. I frequently copy in a bunch of example code at the beginning of the conversation to help prime ChatGPT on APIs it knows nothing about.

For smaller projects that will fit, I've taken to: `xclip *` and then pasting the entire collection of files into ChatGPT before describing what I want to do.
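A hypothetical little helper in the same spirit, with a size guard so the paste stays inside a typical context budget (the glob pattern and budget below are assumptions, not anything the tools prescribe):

    import pathlib, sys

    MAX_CHARS = 24_000   # roughly 6k tokens at ~4 chars/token (assumption)

    chunks, total = [], 0
    for path in sorted(pathlib.Path(".").glob("*.py")):
        text = path.read_text(errors="ignore")
        if total + len(text) > MAX_CHARS:
            print(f"skipping {path}: over budget", file=sys.stderr)
            continue
        chunks.append(f"# ===== {path} =====\n{text}")
        total += len(text)

    print("\n\n".join(chunks))   # pipe into the clipboard, e.g. `... | xclip`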


Keep in mind that GPT-4 has a max context size of ~8000 tokens, if I recall correctly. That means that in any given ChatGPT session the bot only remembers roughly the last ~6k words, as a trailing window. It'll forget the stuff at the beginning fast.


As stated your request is entirely impossible. They cannot simply "remove the cut-off date". It takes months and huge amounts of hardware to train. Then they do the reinforcement adjustments on top of it while researching how to train the next batch.


Left the best part until the end. Scaling models larger is still paying off for OpenAI. It's not AGI yet, but how much bigger will a model need to get to max out?

>The scaling hypothesis is the idea that we may have most of the pieces in place needed to build AGI and that most of the remaining work will be taking existing methods and scaling them up to larger models and bigger datasets. If the era of scaling was over then we should probably expect AGI to be much further away. The fact the scaling laws continue to hold is strongly suggestive of shorter timelines.


After training my physics simulator on thousands of hours of video footage of trees moving in the wind, arborists tell me the trees are much more realistic (they are getting worried that I might put them out of business). But the physicists are still not satisfied. How many more videos do I need to generate the laws of motion?


Throw in the videos from the rest of the internet, and you might actually do it…


> The simulation's incredible, but I have to ask, why do the trees all have breasts?


Why don't people ever explain what they mean by AGI? It means different things to different people.


'It's not AGI yet' - the implication is insufferable. It's a language model that is incapable of any kind of reasoning; the talk of 'AGI' is glib utopianism, a very heavy kind of koolaid. If we had referred to this tech as anything other than 'intelligence' - for example, if we chose 'adaptive algorithms' or 'weighted node storage' etc. - we'd likely have a completely different popular mental model for it.

There will be no 'AI model' that is 'AGI', rather, a large swath of different technologies and models, operating together, will give the appearance of 'AGI' via some kind of interface.

It will not appear as an 'automaton' (aka a single processing unit) and it certainly will not be an 'aha moment'.

In 10 years, you'll be able to ask various agents, of different kinds, which will use varying kinds of AI to interpret speech, to infer context, which will interface with various AI APIs, in many ways it'll resemble what we have today but with more nuance.

The net appearance will evolve over time to appear a bit like 'AGI' but there won't be an 'entity' to identify as 'it'.


> incapable of any kind of reasoning

If this were true the debate would be a hell of lot easier. Unfortunately, it is not.


In fact, comments like the one you are responding to are the most effective way to respond to ‘it hallucinates’.


There is no reasoning, which is why it will be impossible to move the LLMs past certain kinds of tasks.

They are 'next word prediction models' which elicit some kinds of reasoning embedded in our language, but it's a crude approximation at best.

The AGI metaphors are Ayahuasca Koolaid, like a magician duped by his own magic trick.

There will be no AGI, especially because there will be no 'automaton', aka a distinct entity that elicits those behaviours.

Imagine if someone proposed 'Siri' were 'conscious' - well nobody would say that, because we know it's just a voice-based interface onto other things.

Well, Siri is about to appear much smarter thanks to LLMs, and be able to 'pass the bar exam' - but ultimately nothing has fundamentally changed.

Whereas each automaton in the human world had its own distinct 'context' - the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be across various systems that we use daily.

It's just tech, that's it.


> There is no reasoning

> elicit some kinds of reasoning

I know it's hard, but you have to choose here. Are they reasoning or are they not reasoning?

> next word prediction models

238478903 + 348934803809 = ?

Predict the next word. What process do you propose we use here? "Approximately" reason? That's one hell of a concept you conjured up there. Very interesting one. How does one "approximately" reason, and what makes it so that the approximation will forever fail to arrive at its desired destination?

> Whereas each automaton in the human world had its own distinct 'context' - the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be across various systems that we use daily.

Human context is fleeting as well. Time, dementia and ultimately death can attest to that. Even in life, identity is complicated and multifaceted, without a singular "I". For all intents and purposes we too are composed of massive amounts of loosely linked subsystems vaguely resembling some sort of unity. I agree with you on that one. General intelligence IMO probably requires some form of cooperation between disparate systems.

But you see some sort of fundamental difference here between "biology" and "tech" that I just cannot. If RAM was implemented biologically, would it cease to be RAM? I fail to see what's so special about the biological substrate.

To be clear, I'm not saying LLMs are AGI, but I have a hard time dismissing the notion that some combination of systems - of which LLMs might be one - will result in something we just have to call generally intelligent. Biology just beat us to it, like it did with so many things.

> It's just tech, that's it.

The human version is: it's just biology, that's it. What's the purpose of stating that?


The bit about plugins not having PMF is interesting and possibly flawed. I, like many others, got access to plugins but not the browsing or code interpreter plugins which feel like the bedrock plugins that make the whole offering useful. I think there's also just education that has to happen to teach users how to effectively use other plugins, and the UX isn't really there to help new users figure out what to even do with plugins.


Have you found plugins to be useful?

For what it's worth I've found the model actually performs significantly worse at most tasks when given access to browsing, in part because it relies on that instead of its own in built knowledge.

I haven't found a good way to have it only access the web for specific parts of its response.


The only plugin I found useful was the diagramming one, forgot what it's called. But you can quickly make code (or other) flowcharts etc. And browsing in rare cases.


Can you link me? This would be useful.


Most of the plugins are garbage, and for those that aren't, most seem like they would be better as a chat-like experience in the original app than in the OpenAI app.


PMF meaning "product market fit"? I had to look it up, curious if I found the right thing or not.


Had the same reaction. I was just about to Google it when it hit. Funny how the brain can work out a random acronym given context.


Yes, PMF = "product market fit".


Grrrrr. I shouldn't have to play guessing games to read an article.


Yea, it seems weird to allow people to use plugins, but not all of them, and then have the gall to say that no one is using plugins. Yea, because half of them don't have any context outside of America.


I tried the plugins - they honestly didn't seem to work very well. GPT-4 wasn't sure when it could use a plugin, or when it should talk about how it would do something. I wasn't able to get the plugins to activate most of the time.


If you look at their API limits, no serious company can use this to scale up beyond, say, 10k users: 3,500 requests per minute for GPT-3.5 Turbo. They have a long way to go to make it usable for the rest of the 95%.


I've had to move to using Azure OpenAI service during business hours for the API-- much more stable unless the prompts stray into something a little odd and their API censorship blocks the calls.


I’ve been working directly with OpenAI's API; are there any other advantages to doing this through Azure?


You can opt out of the safety filtering, btw.


Great content and great answers, except for the open source question. Sam is saying that he doesn't think anyone would be able to run the code at scale, so they didn't bother? Seems like a nonsense answer; maybe I'm misunderstanding. The ability of individuals or businesses to effectively run and host the code shouldn't have an impact on the decision to open source it.


https://archive.ph/rcbem

The page now says "This content has been removed at the request of OpenAI." I wonder why they did it.


Related thread "OpenAI's plans according to Sam Altman removed at OpenAI's request": https://news.ycombinator.com/item?id=36177895


If they open-sourced it, everyone would know that they used a fuck ton of pirated content to train their models.


As far as I'm aware training does not currently constitute "piracy".

It's fine to advocate for a redefinition but be explicit about it.


I think the point here is about the procurement of the training data, in violation of copyright laws ("piracy"), rather than that the training itself is piracy.

The suspicion[0] is that OpenAI trained their models on a large text dump including libgen (in the so-called "books2").

If a person downloads a book from Library Genesis, they're a pirate; if OpenAI does it, so are they.

[0] https://twitter.com/theshawwn/status/1320282152689336320


Really great news about a cheaper and faster GPT-4. As a ChatGPT Plus subscriber, the most annoying thing is the 25-message limit every 3 hours; I really want that removed.

A bit sad to hear that the multimodal model will only come next year; I was hoping to get it this year.

100k to 1 million context length sounds phenomenal, especially if it comes to GPT-4. I've used Claude's 100k context length and I found it so useful that when I have large documents I just default to Claude now.


Do you have any tips on how you got access to Claude? I did the access request but never got any email or any contact.


I use Poe and got access to Claude 100k as soon as it was released. I think it's a better deal than paying OpenAI for sure, since you have access to GPT-4, Claude+, and others. They also have community bots, etc.


I'm a graduate student doing AI research at a US university, and I applied pretty early (last December, I think); those might be two factors that got me access to Claude.

I think getting access to Claude through Slack is much easier, and I recently got it by just downloading it as a Slack app.


Claude has a free Slack client that I briefly was able to access by creating a new Slack workspace and adding it there. But as of yesterday it wasn't working for me.


Poe, I’m in the same boat btw


https://archive.ph/uwaCp (original page is now a 404)


I love the tongue-in-cheek paradox myth that the Bitcoin whitepaper was written by a future god-AI to increase demand for GPUs (and thus boost supply) so we are able to assemble the future god-AI.


> I love the tongue-in-cheek paradox myth that the Bitcoin whitepaper was written by a future god-AI to increase demand for GPUs (and thus boost supply) so we are able to assemble the future god-AI.

I know it's a joke, but the hole in it is that the god-AI couldn't have been that smart, since cryptocurrency mining quickly switched to ASICs, which muted the demand increase for GPUs.


Well, humans switched from using their brains to store all their memories once they could dump data onto external media via writing. Much like how crypto switching to ASICs frees up GPU capacity for AGI, writing freed the brain to develop higher GI.


But not before ramping up development and production of GPUs.


> But not before ramping up development and production of GPUs.

Did the GPU manufacturers ever embrace cryptocurrency? IIRC, they actually tried to discourage it (e.g. by putting throttling into mass-market models to discourage their use for computation).

Also, the graphs here show a long-term downward trend, with only a short-term sales blip 5 years ago due to cryptocurrency: https://www.tomshardware.com/news/sales-of-desktop-graphics-....


"Desktops" is a very, very key word there; I believe that's why they repeat it so much. And it's all tongue in cheek, and we all largely understand mining drove up GPU demand.


Production, MAYBE. I'm not sure what makes you think Bitcoin would have ramped up the development of GPUs. What part of the last 10 years of GPU development looks Bitcoin-focused to you? They're still very, very focused on rendering and machine learning, not computing hashes.


I think there are some derivative coins that extended the viability of GPU mining, but I've been out of the game for a decade.


Nah, many of the technical people who knew Hal Finney personally were claiming that the Caltech alumnus wrote the original Bitcoin whitepaper, not some random guys on the Internet [1].

The talk/conversation appeared to me not as OpenAI's future plan but more as the CEO lamenting how severely the company is limited by GPUs, or the lack thereof. It's just a cheeky ploy by the CEO of an AI company currently at a 30B USD valuation to get more money in order to buy several [fill in the blank] of these most advanced GPU systems [2].

[1]Nakamoto's Neighbor: My Hunt For Bitcoin's Creator Led To A Paralyzed Crypto Genius:

https://www.forbes.com/sites/andygreenberg/2014/03/25/satosh...

[2]Nvidia DGX GH200: 100 Terabyte GPU Memory System:

https://news.ycombinator.com/item?id=36133226


Fun to imagine a time machine being built but the only thing it can transmit backward in time is PDFs


I wouldn't install Adobe Acrobat even if it gave me access to break the laws of spacetime.


I'd read that book!


Hyperion/The Fall of Hyperion by Dan Simmons has something similar.


Watch Tenet.


I'm from the future, traveling backwards in time to tell you to not watch Tenet.


Also lots of similar wisdom, from Percival Dunwoody, Idiot Time Traveller from 1909 [0].

[0] https://www.gocomics.com/tomthedancingbug/2022/06/17


Somehow I'm super sensitive to the audio (or it might be the video) and start feeling nauseous after a short time. Is there an explanation for this? I think it's that scratchy humming background sound.


I don't know, except that I agree that it's horribly mixed. It's almost impossible to make out what people are saying in certain parts, and that causes cognitive load that degrades the experience. Also, the story is completely nonsensical. The only good thing about that movie is that time-reversed fight scene and even that was kind of questionable.


Most Bitcoin miners have been using ASIC chips that compute nothing but Bitcoin-format SHA-256 since long ago, so it's not increasing demand for GPUs. Ethereum was, but it has already switched to PoS.


Then that god AI must have also pulled some strings for early video games


Conceptually this is paradoxical because of the notion that time is linear. Which it is to the best of our current understanding.


All AI companies (OpenAI included) are now working full tilt on making AIs improve themselves (writing their own code, inventing new pipelines, etc). I don't know why they would choose anything else to work on. This is the prime directive that will bring the greatest payoff.


I disagree, since GPUs are currently a major constraint and skilled specialists almost always outperform GPT-4 as long as they stay in their domain.

Will they use copilot(s) to improve the models? Yes, but they have been doing that since 2021 already (the release year of GitHub Copilot).


if this was currently possible wouldn't it lead to sentient/superhuman AI rapidly?

>tell AI to make itself more efficient by finding performance improvements in human written code

>that newly available processing power can now be used to find more ways to improve itself

>flywheel effect of AI improving itself as it gets smarter and smarter

eventually you'd turn it loose on improving the actual hardware it runs on. I think the question now is really how far transformers can be taken and if they are really the path to "real" AI.


Within a couple of years, improvement processes like you suggest will actually be really dangerous and stupid.

Also don't confuse all other types of human/animal characteristics like sentience with intelligence. They are different things. Things like sentience, subjective stream of experience, or other aspects of being alive don't just accidentally fall out of larger training datasets.

And we should be glad. The models are going to be orders of magnitude faster (and perhaps X times higher IQ) than humans within a few years. It is incredibly foolish to try to make something like that into a living creature (or emulation of living).


Intelligence is about action, and sentience is about qualia, which I equate to perceptions coloured by values. Action is visible and qualia are hidden, but they are closely interconnected: we choose our actions in accordance with our values and situation at hand.


> Things like sentience, subjective stream of experience, or other aspects of being alive don't just accidentally fall out of larger training datasets.

I disagree, language is all we need. Agency? Encode your “internal needs” as prompts, periodically generate prefixes from these, append them to incoming prompts. Self-awareness? Summarize this internal dialogue, reflect on it with a few iterations, add the results to the common prefix. Sentience? Attach some sensors, summarize their observations with the language model, prepend to prefix. Actions? Make the model output commands that some servos or other interfaces understand. Etc, etc.

And, of course, it would be extremely _cool_ to make something like that into a living creature, and lots of labs are already doing that. Fear and luddism should not stand in the way of curiosity.

If we humans cannot improve our own intelligence, making something smarter than us is an evolutionary imperative.


That's not what I meant. What you describe is deliberate engineering, not something accidentally falling out of just training on larger and larger datasets, which some people think will result in digital consciousness or something through "emergence".

It is almost certain that the next stage of intelligence will be digital. But it is very foolish and unnecessary to try to speed that along.

It is likely that we have a century or two max left in control of the planet, regardless of what we do. On some level I agree that totally suppressing it indefinitely would be a shame.

When I said "living" I meant digital life. Such as those things you describe and others including control-seeking, self-preservation, and reproduction which are all central to living beings.

The problem is that AI will soon think 100 or more times faster than humans. This is anticipated based on the history of increases in computing efficiency and the fact that we are now optimizing a very specific system (LLMs). Humans will not in any way be able to keep up.

This is not luddism. I have a service that connects GPT-4 to Linux VMs on the internet to install or write software. I think this technology is great and has a lot of positive potential.

But when you deliberately try to emulate animals (like humans) and combine that with hyperspeed and other superintelligent characteristics, you are essentially approaching suicide or at least, abdicating all responsibility for your environment. There is no way to prevent such a thing from making all of your decisions for you.

The speed difference will be incredible. Imagine a bullet time scene where everyone seems to be moving in extreme slow motion. Now multiply by 10 so they are so slow they seem completely frozen.

This level of performance is coming in five years or less.

While I don't want to suppress the evolution of intelligent life in our corner of the universe, I also am not ready to join a death cult. Especially not accidentally.


I think they are at least 1-2 big new research breakthroughs (on the level of attention) away from having this.


Well that is demonstrably untrue.


> A stateful API

This would be huge for many applications, as "chatting" with GPT-4 gets really, really expensive very quickly. I've played with the API with friends, and winced as I watched my usage hit several dollars for just a bit of fun.


> Plugins “don’t have PMF”

Probability mass functions? Anyone know what this means in this context?


Product market fit


The roadmap here is completely focused on ChatGPT and GPT-4. I wonder what portion of their resources is still going to other areas (DALL-E, audio/ video processing, etc.)


Maybe some of those things that are currently separate projects will eventually converge with a multimodal model.


Off-topic note: Humanloop might want to redesign their logo. It's been the Australian Broadcasting Corporation logo since 1963. Maybe pick a different Lissajous curve.


Nice to see they are working on reducing the pricing. GPT-4 is just too expensive right now imo. A long conversation would quickly end up costing tens of dollars if not more, so lower model costs + a stateful API are urgently needed. I think even OpenAI will actually gain a lot by reducing the pricing; right now I wouldn't be surprised if many uses of GPT-4 weren't viable just because of the costs.


This is off by probably x10 or more.

Dozens of people using it daily for coding, conversations, and review might run a couple hundred bucks a month. An all-day convo, constantly, as fast as it can respond, might add up to $5.

Not sure what kind of convo you're having that you could hit $10 unless you're parallelizing with something like the "guidance" tool or langchain.


The version of GPT 4 with 32K token context length is the enabler for a huge range of "killer apps", but is even more expensive than the 8K version.

And yes, parallelism and loops are also key enablers for advanced use-cases.

For example, I have a lot of legacy code that needs uplifting. I'd love to be able to run different prompts over reams of code in parallel, iterating the prompts, etc...

The point of these things is that they're like humans you can clone at will.

The ability to point thousands of these things at a code base could be mindblowing.


Absolutely not. Dinner just got here, but tl;dr GPT-4 is $0.03 per 750 words in and $0.06 per 750 words out. People expect the history to be included as well.
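Back-of-the-envelope sketch of how that compounds when the whole history (up to the context limit) is re-sent every turn. Prices are the mid-2023 8K-context GPT-4 rates; the message size and turn count are assumptions:

    price_in, price_out = 0.03 / 1000, 0.06 / 1000   # $ per token
    context_limit = 8000                              # tokens
    msg = 250                                         # tokens per message (assumption)
    turns = 100                                       # a long, all-day back-and-forth

    cost, history = 0.0, 0
    for _ in range(turns):
        history += msg                                    # user message joins history
        prompt_tokens = min(history, context_limit - msg)
        cost += prompt_tokens * price_in + msg * price_out
        history += msg                                    # reply joins history too

    print(f"~${cost:.2f}")   # roughly $23, not a couple of dollars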


Would someone please explain like I'm five which components of LLMs like ChatGPT are still closed source? What are the specific technologies that OpenAI is holding on to?

I know a lot of LLM stuff has either been released or leaked out, but don't have enough expertise in this area to understand the competitive advantages or breakthroughs OpenAI has obtained.


As far as I understand, it's mostly the weights. If you only have the model architecture, you're still gonna need to get a massive amount of training material, pay huge costs for training, and fine-tune the hyperparams until it works.


- they are working on a stateful API
- they are working on a cheaper version of GPT-4

Most probably this is driven by their use of it in ChatGPT, which is on fire from PMF. Clearly they're experimenting with the cheaper GPT-4 in ChatGPT right now as it's fairly turbo now, as discussed earlier today.


Great writeup; this helps us understand where to spend our time vs what OpenAI's progress will solve.


If they open up fine-tuning API for their latest models, I wonder how the enthusiasm around the open source model is impacted. One of the advantages of the open source models is the ability to be fine-tuned. Are other benefits enough to keep the momentum going?


You'd better have deep pockets. Have you checked the prices and the rates for using the tuned models? They are 10x to 100x more expensive than non-tuned models.


I am looking forward to faster GPT-4, larger context windows, and fine-tuning APIs. The combination of these can solve most of the problems that I currently face with my LLM apps. It looks like a good roadmap for 2023.


This content has been removed at the request of OpenAI.

Just when I went back to the post for some quote material...


"This content has been removed at the request of OpenAI."


Surprised no mention of them developing their own chips.


7. find a sustainable business model and make some money


Why should I believe what someone says their plans are?


>Dedicated capacity offering is limited by GPU availability. OpenAI also offers dedicated capacity, which provides customers with a private copy of the model. To access this service, customers must be willing to commit to a $100k spend upfront.

How many shell corporations are intelligence agencies seeding right now?


Last night I was musing how many different countries' intelligence agencies have moles working at OpenAI currently. Gotta be at least 6, maybe as high as two dozen?


US, France, Israel ... then who? Maybe another Five Eyes country like the UK? Possibly China? I'm pretty skeptical Russia would be able to get someone in there, but maybe.


Hi. French here. I may be wrong, but I really feel like you are overestimating us.


DGSE essentially puts all of its money/effort into industrial espionage, and they're the best in the world at it.


You said Isr*el twice.


I bet the NSA has a dossier on every employee there as well.


"Cooperate or we'll kill your family".

(Just to be clear, this is a hypothetical intelligence agent saying this, not me.)

I mean, it's not exactly rocket science, who wouldn't instantly fold to that?


Someone without family?


You know the next step, right?


"Cooperate or we will find you a spouse, make you fall in love, and have a family"


ChatGPT responds by threatening to torture a simulation of the agents’ consciousness in the cloud for eternity?

(I mean, since we’re just making up wild hypotheticals)


Agent Lee Chen Huwang, reporting for duty.


I had been putting theories in comments but they kept getting flagged or banned or downvoted to oblivion, but maybe its time has come. I'll keep it tame. If you are curious, you can google the connections between OpenAI's board of directors, Will Hurd, In-Q-Tel trustees, Allen and Company, etc. There is more, but whatever. The conspiracy theory is that 'the govt stepped in' during the six-month pause after GPT-4 was trained and before it was released.


It probably keeps getting flagged because it's (1) ahistorical (source: OpenAI engineers), and (2) somewhat obviously so. You heard of RLHF?


> You heard of RLHF?

The conspiracy theory isn't that every employee of OpenAI spent 8 hours every day for six months in meetings with govt agencies.


Not sure what you mean. Anyways, the reason they don't release GPT-4 when they're """done""" training in June is that they have to do RLHF.


Private instance means a dedicated endpoint fully managed by OpenAI. You do not get model access or anything a regular API user doesn't already get, except your API URL will be something like customer123.openai.com/api instead of api.openai.com/api.


They are not gonna give out the weights for sure, but it will still be inferencable. I'm not sure how, but it'd be self-destructive if they did.


Exactly, with a private model you could easily extract the weights.


Why bother with shell corps when they already back companies in the clear: look at In-Q-Tel.


Having been recently taken aboard by the mothership, I expect they'll start trying to tune out anything related to programming to push people towards Copilot X...

It's pretty hilarious and annoying to see Bing start to write code only to self-censor after a few lines (deleting what was there! no wonder these guys love websockets and dynamic histories).

Whoops!


Wait… what? Can you elaborate?


He’s speculating that Microsoft is nerfing OpenAI / chatGPT to funnel narrow capabilities to silos like CoPilot.


I understand that... I should have specified a bit more that I'm interested in knowing more about the removal of answers as it's writing them, if they're code.



yes... I know about this, but that's not what I'm asking about. I'm asking about it removing partial answers as it's writing them.

Please make more effort next time than to provide me with a Wiki article.


When I tried out Bing 1-2 weeks ago and asked it for code, it would start writing and, after a few lines, realize it was writing code and stop and delete what it had written.

I tried it again tonight and it seems like they fixed it to only produce small amounts of mediocre code instead.


It's absurd that people are still thinking that a language model in which a bunch of tokens are indexed is some kind of 'AGI'.



