Your code also has a bug, which would affect the result in some variations of the problem with different parameters, but fortunately does not affect this specific version. You also used 100.
As a layman outsider, it doesn't seem like it. Anthropic is doing great work (I personally prefer Claude) and now there are so many quality LLMs coming out that I don't know if OpenAI is particularly special anymore. They had a lead at first, but it feels like many others are catching up.
Agreed. Sonnet 3.5 is still by far the most useful model I've found. o1-mini is priced similarly and nowhere near as useful, even for programming, which it is supposed to excel at. I recently tried o1-mini using `aider` and it would randomly start responding in Russian midway through despite all input being in English. If anything, I think Anthropic still has a decent lead when it comes to price to performance. Their updates to Haiku and Opus will be very interesting.
Is it? There was some discussion on HN a while ago that it is better than gpt4o but nothing about the competition and that seems quite doubtful compared to e.g. alphaproof.
Also, if "significantly ahead" just means "a few months ahead" that does not justify the valuation.
The race is, can OpenAI innovate on product fast enough to get folks to switch their muscle memory workflows to something new?
It doesn't matter how good the model is, if folks aren't habituated to using it.
At the moment, my muscle memory is to go to Claude, since it seems to do better at answering engineering questions.
The competition really is between FAANG and OpenAI: can OpenAI accumulate users faster than Apple, Google, Meta, etc. can layer AI-based features onto their existing distribution surfaces?
Hard to say in my opinion. I can say that I still use OpenAI heavily compared to the competition. It really depends, though. I do believe they are still leaders in offering compelling APIs and solutions.
Well, the original claim is that superintelligence is going to be achieved by OpenAI. So I assume you have defined it and figured out a way to measure it in the first place, so that you know when it has been achieved.
So probably on you to explain it since you came up with that claim.
We heard for years that Uber was the company that's most likely to be the first to develop self-driving cars. Until they weren't. You can't just trust what the CEOs are hyping.
Uber's autonomous division had more hype around it, and the company's valuation was largely based on the idea of replacing human drivers "very very soon". Now the bulk of their revenue comes from food delivery.
In fact they do, it's called servers, GPUs, and scale. You need them to train new models and to serve them. They also have speed, and in AI speed is a non-traditional moat. They've got crazy connections too because of Sam. All of that together becomes a moat, so someone just can't do a "Facebook clone" of OpenAI.
Someone certainly can "Facebook clone" OpenAI. Google, Meta, and Apple are all better capitalized than OpenAI, operate at a larger scale, and are actively training and publishing their own models.
Money doesn't just give you hyperscaler datacenters or custom silicon competitive with Nvidia GPUs. Money and 5 years might, but as this shows, OpenAI only really has a 1.5-year runway at the moment, and you can't build a datacenter in that time, let alone perfect running one at scale; the same goes for chip design.
I’m building several commercial projects with LLMs at the moment. 4o mini has been sufficient, and is also super cheap. I don’t need better reasoning at this point, I just need commodification, and so I’ll be using it for each product right up to the point that it gets cheaper to move up the hosting chain a little with Llama, at which point I won’t be giving any money to them.
They've built a great product, the price is good, but it's entirely unclear to me that they'll continue to offer special sauce here compared to the competition.
Those moats are pretty weak. People use Apple Idioticnaming or MS Copilot or Google whatever, which transparently use some interchangeable model in the background. Compared to chatgpt these might not be as smart, but have much easier access to OS level context.
In other words: Good luck defending this moat against OS manufacturers with dominant market shares.
What you are overlooking is the fact that AI today and especially AI in the future is going to be about integrations. Assisted document writing, image generation for creative work, etc etc. Very few people will look at the tiny gray text saying "Powered by ChatGPT" or "Powered by Claude"; name recognition is not as relevant as eg iPhone.
Anecdotally, I used to pay for ChatGPT. Now I run a nice local UI with Llama 3. They lost revenue from me.
> Name any other ~~AI~~ company with better brand awareness and that argument could make a little bit of sense.
I just gave you three of them. Right now a large share of chatgpt customers come from the integration provided by those three.
> "Anyone could steal the market, anytime" and there's a trillion USD at play, yet no one has, why? Because that's a delusion.
Bullshit. It is not about "stealing" but about carving out a significant niche. And that has happened: Apple Intelligence happens in large part on device using not-chatgpt, Google's circle to search, summaries, etc. use not-chatgpt, Copilot uses not-chatgpt.
Nobody cares, though, really. My experience is that clients are only passingly interested in what LLM powers the projects they need and entirely interested in the deployed cost and how well the end product works.
Russia was a scientific power in the 19th century, before the Soviet Union, and continued to be during the Soviet era. The West had limited access to it due to the Cold War.
The Soviet Union wasn't called a superpower for nothing. The USSR had many world-class achievements in scientific and applied areas, and some organizational achievements in social and manufacturing areas. There are examples and counterexamples, but the result is what we have: while in some areas ex-Soviets were seen as backwards in the early 1990s, in others they brought real advances to the West - or First World - once the borders opened.
Case in point: The reason why the US heavily relied on Soviet rocket engines for their launches for ~15 years (before SpaceX dominance) was because they were simply more advanced and cost effective. Material science apparently was a step above - Soviet scientists were able to create an alloy for use in oxygen-rich engines which was unbelievable to Western counterparts till they visited and had it demonstrated.
This is one example, and there could be many - both where the USSR had an edge and where it was behind. I believe what matters here is the overall picture - and that picture is that there actually were some novelties that were interesting to the West, even though in overall quality of life and related measures the USSR was notably behind. Or, put another way, the USSR wasn't advanced enough to avoid dissolution after - though not necessarily caused by - the Cold War, even though it had some achievements unavailable in the West.
Yes, that was what I also wanted to point out here. The set of novelties the Soviets had over the West was at least non-zero. And that rocketry happened to be one of them may be surprising to those less informed about space technology.
This book and the culture it comes from are so influential that many people who did "enrichment" have already been exposed to many of the activities in the book. The most famous may be the Scratch JR / code.org style introduction to computer programming, but done with pencil and paper.
It wasn't. If you told someone you dropped out of Harvard back then they would think you were making an odd choice. That said, it was never very risky since Harvard will take you back if you drop out, but it was at least unusual.