GPT-5 is behind schedule (wsj.com)
582 points by owenthejumper 28 days ago | 1097 comments




I'm sure the debate over the definition of AGI is important and will continue for a while, but... I can't care about it anymore.

Between Perplexity searching and summarizing, Claude explaining, and qwen (and other tools) coding, I'm already as happy as can be with whatever you want to call this level of intelligence.

Just today I used a completely local AI research tool, based on Ollama. It worked great.

Maybe it won't get much better? Or maybe it'll take decades instead of years? Ok. I remember not having these tools. I never want to go back.


Same here.

The ability to “talk to an expert” about any topic I’m curious about and ask very specific questions has been invaluable to me.

It reminds me of being a kid and asking my grandpa a million questions, like how light bulbs worked, or what was inside his radio, or how we have day and night.

And before anyone brings up accuracy or hallucinations: these conversations are usually treated as starting points for then googling specific terms, people, laws, treaties, etc., to dig deeper and verify.

Last year during a visit to my first Indian reservation, I had a whole bunch of questions that nobody in person had answers to. And ChatGPT was invaluable in understanding concepts like where a reservation’s autonomy begins and ends. And why certain tribes are richer than others. What happens when someone calls 911 on a reservation. Or speeds. Or wants to start a factory without worrying about import/export rules. And what causes some tribes to lose their language faster than others. And 20 other questions like this.

And most of those resulted in google searches to verify the information. But I literally could never do this before.

Same this year when I’m visiting family in India. To learn about the politics, the major players, WHY they are considered major players (like the Chief Minister of Bengal or Uttar Pradesh or Maharashtra being major players because of their populations and economies). Criticisms, explanations of laws, etc etc.

For insanely curious people who often feel unsatisfied with the answers given by those around them, it’s the greatest thing ever.


> The ability to “talk to an expert” about any topic I’m curious about and ask very specific questions has been invaluable to me.

It is dangerous to assume that LLMs are experts on any topic, with or without quotes. You are getting a super-fast journalist intern with a huge memory but no ability to reason critically, no real understanding of anything, and huge unreliability when it comes to answering questions (you can get completely different answers to the same question depending on how you phrase it, and sometimes even the identical question can get you different answers). LLMs are very useful and a true game changer. But calling that expertise is a disservice to the true experts.


I actually find LLMs lacking true expertise to be a feature, not a bug. Most of the time I'm starting from a place of no knowledge on a topic that's novel to me, I ask some questions, it replies with summaries, keywords, names of things, basic concepts. I enter with the assumption that it's really no different than googling phrases and sifting through results (except I don't know what phrases I'm supposed to be googling in the first place), so the summaries help a lot. I then ask a lot of questions and ask for examples and explanations, some of which of course turn out to be wrong, but the more I push back, re-generate, re-question, etc (while using traditional search engines in another tab), the better responses I can get it to provide.

Come to think of it, it's really no different than walking into Home Depot and asking "the old guys" working in the aisles about stuff -- you can access some fantastic knowledge if you know the names of all the tools and techniques, and if not, can show them a picture or describe what you're trying to do and they'll at least point you in a starting direction with regards to names of tools needed, techniques to use, etc.

Just like I don't expect Home Depot hourly worker Grandpa Bob to be the end-all-be-all expert (for free, as well!), neither do I expect ChatGPT to be an all-knowing-all-encompassing oracle of knowledge.

It'll probably get you 95% of the way there though!


You forget that it makes stuff up and you won't know it until you google it. When googling, fake stuff stands out because truth is consistent.

Querying multiple llms at the same time and being able to compare results is a much better comparison to googling but no one does this.

As I said, you are talking to a super confident journalist intern who can give you answers but you won't know if it is true or partially true until you consult with a human source of knowledge.

It's not even similar to asking the old guys at Home Depot, because they can tell you if they are unsure they have a good answer for you. An LLM won't. Old guys won't hallucinate facts the way an LLM will.

It really is Searle's Chinese room, 21st-century epistemological nightmare edition. The grammar checks out, but whatever is spit out doesn't necessarily bear any resemblance to reality.


LLMs train from online info. Online info is full of misinformation. So I would not trust an answer to be true just because it is given by multiple LLMs. That is actually a really good way to fall into the misinformation trap.


Most of OpenAI's training data is written by hired experts now. They also buy datasets of professional writing such as Time's archives.


My point was that googling gets you a variety of results from independent sources. So I said that querying multiple LLMs is as close as you can get for a similar experience.


I agree with everything you said, except I think we're both right at the same time.

Ol' boy at the Depot is constrained by his own experiences and knowledge, absolutely can hallucinate, oftentimes will insert wild, irrelevant opinions and stories while getting to the point, and frankly if you line 6 of them up side by side to answer the same question, you're probably leaving with 8 different answers.

There's never One True Solution (tm) for any query; there are 100 ways to plumb your way out of a problem, and you're asking a literal stranger who you assume will at least point you in the right direction (which is kind of preposterous to begin with).

I encourage people to treat LLMs the same way -- use it as a jumping off point, a tool for discovery that's no more definitive than if you're asking for directions at some backwoods gas station. Take the info you get, look deeper with other tools, work the problem, and you'll find a solution.

Don't accept anything they provide at face value. I'm sure we all remember at least a couple teachers growing up who were the literal authority figures in our lives at the time, fully accredited and presented to us as masters of their curriculum, who were completely human, oftentimes wrong, and totally full of shit. So goes the LLM.


TBF they're only truly useful when hooked up to RAG imo. I'm honestly surprised that we haven't yet built a digital seal of authenticity for truth that can be used by AI agents + RAG to conceivably give the most accurate answer possible.

Scientists should be writing papers sealed digitally once they're peer reviewed and considered "truth", same thing with journalist/news articles - sealed once confirmed true or backed up by a solid source in the same way we trust root certificates.

But then again, especially when it comes to journalism, there's cropping photos, chopping quotes, etc., all to misrepresent. Turns out we're all the bad actors; it's in our DNA. And TBF, many people, when presented with hard evidence contrary to the opinion that they cling onto like a babe to a breast, just plug their ears and cover their eyes.

Okay so maybe there's no point seeking truth/factual correctness, our species doesn't want it 99% of the time, unless it affects them directly (eg people that shoot down public healthcare until they have an expensive illness themselves).


People who are experts (PhD and 20 years of experience) often have very dumb opinions in their field of expertise. Experts make amateur mistakes too. Look at the books written by expert economists, expert psychologists, expert historians, expert philosophers, expert software engineers. Most books are not worth the paper they're written on, despite the authors being experts with decades of experience in their respective fields.

I think you overestimate the ability of a typical 'expert'. You can earn a PhD without the ability to reason critically. You can testify as an expert in a courtroom without understanding conditional probability. Lawyers and accountants in real life also totally contradict themselves when they get asked the same question twice but phrased slightly differently.


My personal criterion for calling somebody an expert, or "educated", or a "scholar" is that they have any random area of expertise where they really know their shit.

And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.

LLMs don't do any of that. I don't know if they could, I do know it would be inconvenient for the sales pitch around them. But the people that I call experts distinguish themselves not by being right with their predictions a lot, but rather by qualifying their statements with the degree of uncertainty that they have.

And no "expert system" does that.


> And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.

How do you count examples like Musk, then?

He is very cautious about rockets, and all the space science people I follow and hold in high regard, say he's actually a domain expert there. He regularly expectation-manages experimental SpaceX launches downward.

He's also very bold and brash about basically everything else; the majority of people I've seeing saying he's skilled in any other area have turned out to not themselves have any skills in those areas, while the people who do have expertise say he's talking nonsense at best and is taking wild safety risks at worst.


Musk is probably really good at back of the envelope calculations. The kind that lets you excel in first year physics. That skill puts you above a lot of people in finance and engineering when it comes to quickly assessing an idea. It is also a gimmick, but I respect it. My wild guess is that he uses that one skill to find out who to believe among the people he hires.

The rest of the genius persona is growing up with enough ego that he could become a good salesman, and also badly managed autism and also a badly managed drug habit.

Seeing him dabble in politics and social media shows instantly how little he understands the limits of his knowledge. A scholar he is not.


Anecdotal, but I told ChatGPT to include its level of confidence in its answers and to let me know if it didn't know something. This priming resulted in it starting almost every answer with some variation of "I'm not sure, but..." when I asked it vague / speculative questions, and then when I asked it direct, matter-of-fact questions with easy answers, it would answer with confidence.

That's not to say I think it is rationalizing its own level of understanding, but that somewhere in the vector space it seems to have a gradient for speculative language. If primed to include language about it, that could help cut down on some of the hallucination. No idea if this will affect the rate of false positives on the statements it does still answer confidently, however.


You'd have to find out the veracity of those leading phrases. I'm guessing that it just prefaces the answer with a randomly chosen statement of doubtfulness. The error bar behind every bit of knowledge would have to exist in the dataset.

(And in neural network terms, that error bar could be represented by the number of connections, by congruency of separate paths of arguing, by vividness of memories, etc ... it's not above human reasoning either, no need for new data structures ...)


The level of confidence with which people express themselves is a (neutral to me) style choice. I'm indifferent because when I don't know somebody I don't know whether to take their opinions seriously regardless of the level of confidence they project. Some people who really know their shit are brash and loud and other experts hedge and qualify everything they say. Outward humility isn't a reliable signal. Even indisputably brilliant people frequently don't know where their expertise ends. How often have we seen tech luminaries put a sophomoric understanding of politics on display on twitter or during podcast interviews? People don't end up with correctly calibrated uncertainty unless they put a ton of effort into it. It's a skill that doesn't develop by itself.


I agree, and a lot of that is cultural as well. But there is still a variety of confidence within the statements of a single person, hopefully a lot, and I calibrate to that.


AIs are a "master of all trades", so it is very unlikely they'll ever be able to admit they don't know something. That makes them very unreliable on topics where there is little available knowledge.


The fact that humans make mistakes has little to no bearing on their capacity to create monumental intellectual works. I recently finished The Power Broker by Robert Caro, and found a mistake in the acknowledgements where he mixed up two towns in New York. Does that invalidate his 500+ interviews and years of research? No.

Also, expert historians, philosophers, psychologists, etc. aren't judged on their correctness, but on their breadth and depth of knowledge and their capacity to derive novel insights. Some of the best works of history I've read are both detailed and polemical, trying to argue for a new framework for understanding a historical epoch that shifts how we understand our modern world.

I don't know, I think I know very little about the world and there are people who know far more and I appreciate reading what they have to say, of course with a critical eye. It seems to me that disagreeing with that is just regurgitated anti-intellectualism, which is a coherent position, but it's good to be honest about it.


I don't disagree with what you say, but one difference is that we generally hold these people accountable and often shift liability to them when they are wrong (though not always, admittedly), which is not something I have ever seen done with any AI system.


This sounds like an argument in favor of AI personhood, not an argument against AI experts.


Right, but, then what? If you throw away all of the books from experts, what do you do, go out in your backyard and start running experiments to re-create all of science? Or start googling? What, some random person on the internet is going to be a better 'expert' than someone that wrote a book?

Books might not be great, but they are at least some minimum bar to reach. You had to do some study and analysis.

Seems like any criticism of books, if you scratch the surface, is just the whole anti-science/anti-education tropes again and again. What is the alternative? Don't like peer-reviewed science? Fine, it has flaws; propose an alternative.


Many terrific books have been published in the past 500 years. The median book is not worth your time, however, and neither is the top 10%. You cannot possibly read everything so you have to be very selective or you will read only dreck. This is the opposite of being anti-science or anti-education.


But compared to the content on the internet?

So:

- Top 10% of books: OK.

- Other 90% of books: marginal, a lot of bad.

- The internet: just millions of pages of junk.

Books still take some effort, so why not start there?

It isn't either/or, binary. A lot of books are bad, so I guess I'll get my medical degree from browsing the web because I don't trust those 'experts'.


The median book about medicine is over 100 years old, written in a language you don't speak, and filled to the brim with quackery. Worse than useless. Maybe you don't realize that bookstores and libraries only carry a minuscule fraction of all published works? You will get better information from reddit than from a book written before the discovery of penicillin.

I'll get you started with one of the great works from the 1600s:

https://www.gutenberg.org/cache/epub/49513/pg49513-images.ht...


You seem to have excluded the possibility of a "Top 10% of the Internet" tranche.


""Top 10% of the Internet""

What is the top 10% of the Internet that isn't part of some publishing arm of existing media? And how can you tell? Some dude's blog about vaccines versus Harvard? Which do you believe?

Where are the self funded scientific studies that are occurring outside of academia? And thus not 'biased' by the 'elites'.

For internet-only writing, there aren't a ton of Astral Codex Tens to draw upon as independent thinkers. And even then, he didn't spring out of the ether fully formed; he has a degree, he was trained in academia.


> What is the top 10% of the Internet that isn't part of some publishing arm of existing media?

Why does that even matter?


?? You said "You seem to have excluded the possibility of a "Top 10% of the Internet" tranche. "

So you brought up the top 10% of the Internet, possibly as argument against books? That maybe there is valuable information on the Internet.

I was just saying that that 10% is also created by the same people who create books. So if you are arguing against books, then the top 10% of the Internet isn't some golden age of knowledge coming from some different, more reliable source.


An appeal to expertise is actually a fallacy, because experts can be wrong.

The scientific method relies on evidence and reproducible results, not authority alone.

Edited to add a reference: see under Appeal to authority. https://writingcenter.unc.edu/tips-and-tools/fallacies/


The fact is that in science, facts are only definitions and everything else is a theory which by definition is never 100% true.


> everything else is a theory which by definition is never 100% true.

Which definition of theory includes that it can never be 100% true? It can't be proven to be true, but surely it could be true without anyone knowing about it.


Frankly, I'm not sure what the point of the parent's comment is. Experts can be dumb and ChatGPT is dumb so it's an expert?

> People who are experts (PhD and 20 years of experience) often have very dumb opinions in their field of expertise.

The conventional wisdom is that experts are dumb OUTSIDE of their fields of expertise.

I don't know about you, but I would be very insulted by someone passing judgement like this on my own work in my field. I am sure that I would doubt their qualifications to even make the judgement.

Are there experienced fools? Sure. We both probably work with some. To me they are not experts, though.


> People who are experts (PhD and 20 years of experience) often have very dumb opinions in their field of expertise.

And the training data contains all those dumb opinions.


and rehashes it unthinkingly, without an idea of what it means to consider and disagree with it.


It's scary to think that we are moving in this direction: I can see how, in the next few years, politicians and judges will use LLMs as neutral experts.

And all in the hand of a few big tech corporations...


They aren't just in the hands of big corporations though.

The open source, local LLM community is absolutely buzzing right now.

Yes, the big companies are making the models, but enough of them are open weights that they can be fine tuned and run however you like.

I think LLMs genuinely do present an opportunity to be neutral experts, or at the least neutral third parties. If they're run in completely transparent ways, they may be preferable to humans in some circumstances.


The whole problem is that they are not neutral. They token-complete based on the corpus that was fed into them, the dimensions that were extracted out of those corpora, and the curve-fitting done to those dimensions. Being "completely transparent" means exposing _all_ of that, but that's too large for anyone to reasonably understand without becoming an expert in that particular model.

And then we're right back to "trusting expert human beings" again.


Nothing is truly neutral. Humans all have a different corpus too. We roughly know what data has gone in, and what the RL process looks like, and how the models handle a given ethical situation.

With good prompting, the SOTA models already act in ways I think most reasonable people would agree with, and that's without trying to build this specifically for that use case.


> Yes, the big companies are making the models, but enough of them are open weights that they can be fine tuned and run however you like.

And how long is that going to last? This is a well known playbook at this point, we'd be better off if we didn't fall for it yet again - it's comical at this point. Sooner or later they'll lock the ecosystem down, take all the free stuff away and demand to extract the market value out of the work they used to "graciously" provide for free to build an audience and market share.


How will they do this?

You can't take the free stuff away. It's on my hard drive.

They can stop releasing them, but local models aren't going anywhere.


They can't take the current open models away, but those will eventually (and I imagine, rather quickly) become obsolete for many areas of knowledge work that require relatively up to date information.


What are the hardware and software requirements for a self-hosted LLM that is akin to Claude?


Llama 3.3 70B after quantization runs reasonably well on a 24 GB GPU (7900 XTX or 4090) and 64 GB of regular RAM. Software: https://github.com/ggerganov/llama.cpp .
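As a rough sanity check on why a 70B model needs both the 24 GB GPU and system RAM (back-of-the-envelope arithmetic only; the ~4.8 bits per weight is an assumed Q4_K_M-style average, and real figures vary by quantization type and context size):

```python
# Back-of-the-envelope memory estimate for a quantized 70B model.
params = 70e9                 # parameter count
bits_per_weight = 4.8         # assumed Q4_K_M-style average; varies by quant
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # prints: ~42 GB of weights

vram_gb = 24                  # 7900 XTX / 4090
gpu_budget_gb = vram_gb - 4   # leave a few GB for KV cache and runtime overhead
offload_fraction = gpu_budget_gb / weights_gb
print(f"~{offload_fraction:.0%} of the weights fit on the GPU; the rest spills to system RAM")
```

Since the weights alone exceed the VRAM, llama.cpp keeps roughly half the layers on the GPU and streams the rest from the 64 GB of regular RAM, which is why that much system memory is listed.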


The world was such a boring and dark place before everybody was constantly swiping on their smartphones in every situation, and before basically everything everybody said got piped through a big-tech data center, where their algorithms control where it goes.

Now we finally have a tool where all of you can prove every day how strong/smart/funny/foo you are (not actually). How was life even possible without it?

So, don't be so pessimistic. ;)


> I can see how in the next few years politicians and judges will use LLMs as neutral experts.

While also noting that "neutral" is not well-defined, I agree. They will be used as if they were.


Will they though?

We humans are very good at rejecting any information that doesn’t confirm our priors or support our political goals.

Like, if ChatGPT says (say) vaccines are good/bad, I expect the other side will simply attack and reject it as misinformation, conspiracy, and similar.


From what I can see, LLMs default to being sycophants; acting as if a sycophant were neutral is entirely compatible with the cognitive bias you describe.


Shrug

I treat LLM answers about the same way I treat wikipedia articles. If it's critical I get it right, I go to the wiki sources referenced. Recent models have gotten good at 'showing their sources', which is helpful.


> If it's critical I get it right, I go to the wiki sources referenced

the problem with this is that humans will likely use it for low key stuff, see that it works (or that the errors don't affect them too badly) and start using it for more serious stuff. It will all be good until someone uses it in something more serious and some time later it ends badly.

Basic human thinking is fairly primitive. If yesterday was sunny, the assumption is that today should be too. The more this happens, the higher your confidence. The problem is that this confidence emboldens people to gamble on it, and when it is not sunny anymore, terrible things happen. A lot of hype-driven behaviour is like that. Crypto was like that. The economic crisis of the late 00s was like that. And LLMs look set to be like that too.

It is going to take a big event involving big critical damage or a high profile series of deaths via misuse of an LLM to give policymakers and business leaders around the world a reality check and get them looking at LLMs in a more critical way. An AI autumn if you wish. It is going to happen at some point. Maybe not in 2025 or 2026 but it will definitely happen.

You may argue that it is the fault of the human using the LLM/crypto/giving out loans but it really doesn't matter when those decisions affect others.


But hasn’t it become quite easy to deal with this issue simply by asking for the sources of the information and then validating? I quite like using the consensus app and then asking for specific academic paper references which I can then quickly check. However this has taught me also that academic claims must also be validated…


If you need to validate the sources, you might as well go to the sources directly and bypass the LLM. The whole point of LLMs is not needing to go to the sources. The LLM consumes them for you. If you need to read and understand the sources yourself well enough to tell if the LLM is lying, the LLM is a wasteful middleman.

It's like buying supermarket food and also buying the same food from the farmers themselves.


It's dangerous to assume that the person you have access to is an expert either.


IMO it’s dangerous to call experts experts as well. Possibly more dangerous.


No. Expertise isn't a synonym for 'infallible'; it denotes someone whose lived experience, learned knowledge and skill mean that you should listen to their opinion in their area of expertise, and defer to it, unless you have direct and evidence-based reasons for thinking they are wrong.


By that definition an expert would be *more* trustworthy. (Usually they want you to look at credentials instead.)

However that still ignores human nature to use that trust for personal gain.

Nothing about expertise makes someone a saint.


They have tried to address this with the o1 and o3 models, at least, to help it understand and reason better than before. But one of the quotes my manager likes with regard to these tools is: trust, but verify.


“Believe in God, but tie up your camels”.


LLMs suffer from the "Igon Value Problem" https://rationalwiki.org/wiki/Igon_Value_Problem

Similar to reading a pop sci book: you're getting entertainment from a thing with no actual understanding of the source material, rather than an education.


Earlier in this thread, people mention the counterpoint to this: they Google the information from the LLM and do more reading. It's an excellent starting point for researching a topic: you can't trust everything it says, but if you don't know where to start, it will very likely get you to a good place to start researching.

Similarly, while you can't fully trust everything a journalist says, it's obviously better to have journalism than to have nothing: the "Igon Value Problem" doesn't mean that journalism should be eradicated. Pre-LLMs, we really had nothing like LLMs in this way.


> they Google the information from the LLM and do more reading

The runway on this one seems to be running out fast - how long before all the google results are also non-expert opinions regurgitated by LLMs?


People are forgetting about content farms like Associated Content [1]. Since the early aughts, these content farms would happily produce expert-sounding content on anything people were searching for. They would buy top search terms from search engines like Yahoo, hire English majors for dirt cheap, and have them produce "expert" content targeting those search terms. At least the LLMs have been trained on some relevant data!

[1] https://en.wikipedia.org/wiki/Yahoo_Voices


So with AI Google has cut out the middleman and insourced the content farm.


The way I see it, they have been like that for at least a decade. Of course, before the transformers revolution these were generated in a more crude way, but still, the end result is that 99% of Google results for any topic have been trash for me since the early 2000s.

Google gave up on fighting the SEO crowd a long time ago. I worry they'll give up on the entire idea of search and will just serve answers from their LLM.


You can turn to actual experts, e.g. YouTube or books. But yes, I have recently had the misfortune of working with a personal trainer who was using ChatGPT to come up with training programs, and it felt confusing and like I was wasting time and money.


When I'm looking for actual experts, the first thing that comes to my mind is definitely YouTube!!

At least when it's about YouTube-specific topics, like where the like button and the subscribe button are.

They will tell me. Every. Single. F*cking. 5. Minute. Clip. Again. And. Again.

Not soooo much for anything actually important or interesting, though.... ;)

PS: Also which of the always same ~5 shady companies their sponsor is, of course.


Unironically, youtube is a great place to find actual experts on a given subject.


But he explicitly mentions books. That contrast makes it interesting. I assume that he is explicitly fine with text content.

And then he does not mention the web in general (or even Reddit - it wouldn't be worth more than an eyeroll to me), but YouTube.

On the one hand, yeah, well, the web was probably in better shape in the past. (And YT is even a major aspect of that, imho, but anyway...) On the other hand, you really must be a die-hard YT fanatic to mention only that single website (which, by the way, is mostly video clips and has all the issues of the entire web), instead of just the web.

It's really well outside of the sphere of my imagination. The root cause of my reply wasn't even disagreement at first, but surprise and confusion.


You've made an error here...

>They will tell me. Every. Single. F*cking. 5. Minute. Clip. Again. And. Again.

Do you know why you got that video? Because people liked and subscribed to them, and the 'experts' with the best information in the universe are hidden 5000 videos below with 10 views.

And this is 100% Google's fault, for the algorithms they created that force these behaviors on anyone who wants to use their platform and have visibility.

Lastly, if you can't find anything interesting or important on YT, this points at a failure of your own. While there is an ocean of crap, there is more than enough amazing content out there.


Yeah, well, I never said that there aren't any experts in any topic who at some point decided to publish something there. The fact that entire generations of human beings basically look there and at TikTok and Instagram for any topic probably also helps with this decision.

It's still wildly bizarre to me when people don't mention the web in general in such a context, but one particular commercial website, which is largely about a video-based attention economy (and rather classic economy via so-called influencers). Nothing of that sounds ideal to me when it comes to learning about actually useful topics from actual experts. Not even the media type: it's hard for them to hyperlink between content, and it's hard for me to search, to skip stuff, to reread a passage or two, to choose my own speed for each section, etc.

Sure, you can find it somewhere there. In the same spirit, McD is a salad bar, though... ;)

> And this is 100% Googles fault for the algorithms they created that force these behaviors on anyone that wants to use their platform and have visibility.

Wrong assumptions. It's not their "fault"; a lot of it is probably by intent. It's just that they and you are not in the same boat. You are the product at big tech sites. It's 100% (impersonally) your fault to be sooo resistant to understanding that. ;)


LLMs are pretty good at attacking the "you don't know what you don't know" problem on a given topic.


You just state this as if it were obviously true, but I don't see how. Why is using an LLM like reading a pop sci book and not like reading a history book? Or even less like either, because you have to continually ask questions to get anything?


A history book is written by someone who knows the topic, and then reviewed by more people who also know the topic, and then it's out there where people can read it and criticize it if it's wrong about the topic.

A question asked to an AI is not reviewed by anyone, and it's ephemeral. The AI can answer "yes" today, and "no" tomorrow, so it's not possible to build a consensus on whether it answers specific questions correctly.


A pop sci book can be written by someone who knows the topic and reviewed by people who know the topic; a history book can also not be.

LLM generated answers are more comparable to ad-hoc human expert's answers and not to written books. But it's much simpler to statistically evaluate and correct them. That is how we can know that, on average, LLMs are improving and are outperforming human experts on an increasing number of tasks and topics.


In my experience LLM generated answers are more comparable to an ad-hoc answer by a human with no special expertise, moderate google skills, but good bullshitting skills spending a few minutes searching the web, reading what they find and synthesizing it, waiting long enough for the details to get kind of hazy, and then writing up an answer off the top of their head based on that, filling in any missing material by just making something up. They can do this significantly faster than a human undergraduate student might be able to, so if you need someone to do this task very quickly / prolifically this can be beneficial (e.g. this could be effective for generating banter for video game non-player characters, for astroturfing social media, or for cheating on student essays read by an overworked grader). It's not a good way to get expert answers about anything though.

More specifically: I've never gotten an answer from an LLM to a tricky or obscure question about a subject I already know anything about that seemed remotely competent. The answers to basic and obvious questions are sometimes okay, but also sometimes completely wrong (but confidently stated). When asked follow-up questions the LLM will repeatedly directly contradict itself with additional answers each as wrong as the first, all just as confidently stated.


More like "have already skimmed half of the entire Internet in the past", but yeah. That's exactly the mental model IMO one should have with LLMs.

Of course don't forget that "writing up an answer off the top of their head based on that, filling in any missing material by just making something up" is what everyone does all the time, and in particular it's what experts do in their areas of expertise. How often those snap answers and hasty extrapolations turn out correct is, literally, how you measure understanding.

EDIT:

There's some deep irony here, because with LLMs being "all system 1, no system 2", we're trying to give them the same crutches we use on the road to understanding, but have them move the opposite direction. Take "chain of thought" - saying "let's think step by step" and then explicitly going through your reasoning is not understanding - it's the direct opposite of it. Think of a student that solves a math problem step by step - they're not demonstrating understanding or mastery of the subject. On the contrary, they're just demonstrating they can emulate understanding by more mechanistic, procedural means.


Okay, but if you read written work by an expert (e.g. a book published by a reputable academic press or a journal article in a peer-reviewed journal), you get a result whose details were all checked out, and can be relied on to some extent. By looking up in the citation graph you can track down their sources, cross-check claims against other scholars', look up survey sources putting the work in context, think critically about each author's biases, etc., and it's possible to come to some kind of careful analysis of the work's credibility and assess the truth value of claims made. By doing careful search and study it's possible to get to some sense of the scholarly consensus about a topic and some idea of the level of controversy about various details or interpretations.

If instead you are reading the expert's blog post or hastily composed email or chatting with them on an airplane you get a different level of polish and care, but again you can use context to evaluate the source and claims made. Often the result is still "oh yeah this seems pretty insightful" but sometimes "wow, this person shouldn't be speculating outside of their area of expertise because they have no clue about this".

With LLM output, the appropriate assessment (at least in any that I have tried, which is far from exhaustive) is basically always "this is vaguely topical bullshit; you shouldn't trust this at all".


I am just curious about this. You used the word "never", and I think your claim can be tested: perhaps you could post a list of five obscure questions, and someone could put them to a good LLM for you, and to an expert in the field, to assess the value of the answers.

Edit: I just submitted an Ask HN post about this.


> I've never gotten an answer from an LLM to a tricky or obscure question about a subject I already know anything about that seemed remotely competent.

Certainly not my experience with the current SOTA. Without being more specific, it's hard to discuss. Feel free to name something that can be looked at.


The same is true of Google, no?


> A question asked to an AI is not reviewed by anyone, and it's ephemeral. The AI can answer "yes" today, and "no" tomorrow, so it's not possible to build a consensus on whether it answers specific questions correctly.

It's even more so with humans! Most of our conversations are, and has always been, ephemeral and unverifiable (and there's plenty of people who want to undo the little of permanence and verifiability we still have on the Internet...). Along the dimension of permanence and verifiability, asking an LLM is actually much better than asking a human - there's always a log of the conversation you had with the AI produced and stored somewhere for at least a while (even if only until you clear your temp folder), and if you can get ahold of that log, you can not just verify the answers, you can actually debug the AI. You can rerun the conversation with different parameters, different prompting, perhaps even inspect the inference process itself. You can do that ten times, hundred times, a million times, and won't be asked to come to Hague and explain yourself. Now try that with a human :).
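To make that "debug the AI" point concrete, here's a minimal sketch (mine, not from the thread): replaying a stored conversation log under several sampling parameters and collecting the answers side by side. The client is assumed to expose an OpenAI-style chat.completions.create(...) call, and the model name is illustrative.

```python
# Sketch (mine) of "debugging" an assistant by replaying a logged conversation
# under different parameters. The client is assumed to expose an OpenAI-style
# chat.completions.create(...) call; the model name is illustrative.

def replay(client, logged_messages, temperatures=(0.0, 0.5, 1.0)):
    """Re-run the same conversation log at several temperatures and
    collect the answers for side-by-side comparison."""
    answers = {}
    for t in temperatures:
        resp = client.chat.completions.create(
            model="gpt-4o",            # illustrative model name
            messages=logged_messages,  # the stored conversation log
            temperature=t,
        )
        answers[t] = resp.choices[0].message.content
    return answers
```

Run it a hundred times, diff the answers, and you've got exactly the kind of reproducible interrogation you could never do with a human.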


The context of my comment was what is the difference between an AI and a history book. Or going back to the top comment, between an AI and an expert.

If you want to compare AI with ephemeral unverifiable conversations with uninformed people, go ahead. But that doesn't make them sound very valuable. I believe they are more valuable than that for sure, but how much, I'm not sure.


When I tried studying, I got really frustrated because I had to search for so many things, and not a lot of people would explain basic math things to me in a simple way.

LLMs already do a much better job at this. A lot faster, accurate enough, and easy to use.

I can now study something alone which I was not able to do before.


> accurate enough

Ask it something non-trivial about a subject you are an expert in and get back to me.


Accurate enough to explain to me the details of 101, 201, and 301 university courses in math or physics.

Besides, when I ask it about things like SRE, cloud, etc., it's a very good starting point.


Sadly I lack expertise. Do you have any concrete examples? How does, say, the Wikipedia entry on the topic compare to your expert opinion?


Oh so you mean I have at my fingertips a tool that can generate me a Scientific American issue on any topic I fancy? That's still some non-negative utility right there :).


A Scientific American issue where the authors have no idea what they don't know about a topic, so they just completely make up the content, including the sources. At least magazine authors read the sources before misunderstanding the content (or ask the authors what the research means).

I don’t even trust the summaries after watching LLMs think we have meetings about my boss’s cat just because I mentioned it once as she sniffed the camera…


It's good not to trust it, but that's not the same as it having no idea. There is a lot of value in being close for many tasks!


I think it’s a very dangerous place to be in an area you’re not familiar with. I can read Python code and figure out if it’s what I want or not. I couldn’t read an article about physics and tell you what’s accurate and what’s not.

Legal Eagle has a great video on how ChatGPT was used to present a legal argument, including made-up case references! Stuff like this is why I'm wary of relying on it in areas outside of my expertise.


There’s a world of difference between blindly trusting an LLM and using it to generate clues for further research.

You wouldn’t write a legal argument based on what some random stranger told you, would you?


> Oh so you mean I have at my fingertips a tool that can generate me a Scientific American issue on any topic I fancy?

I’m responding to this comment, where I think it’s clear that an LLM can’t even achieve the goal the poster would like.

> You wouldn’t write a legal argument based on what some random stranger told you, would you?

I wouldn’t, but a lawyer actually went to court with arguments literally written by a machine, without verification.


> I’m responding to this comment, where I think it’s clear that an LLM can’t even achieve the goal the poster would like.

I know it can't - the one thing it's missing is the ability to generate coherent and correct (and not ugly) domain-specific illustrations and diagrams to accompany the text. But that's not a big deal, it just means I need to add some txt2img and img2img models, and perhaps some old-school computer vision and image processing algos. They're all there at my fingertips too, the hardest thing about this is finding the right ComfyUI blocks to use and wiring them correctly.

Nothing in the universe says an LLM has to do the whole job zero-shot, end-to-end, in a single interaction.

> I wouldn’t but a lawyer actually went to court with arguments literally written by a machine without verification.

And surely a doctor somewhere tried to heal someone with whatever was on the first WebMD page returned by Google. There are always going to be lazy lawyers and doctors doing stupid things; laziness is natural for humans. It's not a valid argument against tools that aren't 100% reliable and idiot-proof; it's an argument for professional licensure.


Your entire argument seems to be “it’s fine if you’re knowledgeable about an area,” which may be true. However, this entire discussion is in response to a comment who is explicitly not knowledgeable in the area they want to read about.

All the examples you give require domain knowledge which is the opposite of what OP wants, so I’m not sure what your issue is with what I’m saying.


> It's good not to trust it, but that's not the same as it having no idea. There is a lot of value in being close for many tasks!

The task is to replace Hazelcast with Infinispan in a stand-alone IMDG setup. You're interested in Locks and EntryProcessors.

ChatGPT-4 and o1 tell you, in their enthusiastic style, that Infinispan has all those features.

You test it locally, and it does...

But the thing is, Infinispan doesn't have explicit locks in client-server mode, just in embedded mode, and that's something you find out from another human who has tried doing the same thing.

Are you better off using ChatGPT in this case?

I could go on and on about the times ChatGPT has bullshitted me and wasted days of my time. But hey, it helps with one-liners, Copilot occasionally has spectacular method auto-complete and learns some stuff on the fly, and it makes me cry when it remembers random tidbits about me that not even family members do.


Given I have never heard of any of {Hazelcast, Infinispan, IMDG, EntryProcessors}, even that kind of wrong would probably be an improvement, by virtue of reducing the time I spend working toward the wrong answer.

But only "probably": the very fact that I've not heard of those things means I don't know if there's a potential risk in trying to push this onto a test server.

You do have a test server, and aren't just testing locally, right? Whatever this is?


> You do have a test server, and aren't just testing locally, right? Whatever this is?

Of course I didn't test in a client-server setup; that's how ChatGPT managed to fool me. I know all those terms, and that was not the only alternative I looked up. Before trying Infinispan I tried Apache Ignite, and its API was the same for client-server and embedded mode; in Hazelcast the API was the same for client-server and embedded mode, so I just presumed it would be the same for Infinispan, AND I had ChatGPT reassuring me.

The takeaway about ChatGPT for me is: if there are plenty of examples and knowledge out there, it's OK to trust it, but if you're pushing the envelope and the knowledge is obscure, without many examples, DO NOT TRUST it.

DO NOT assume that just because the information is in the documentation, ChatGPT has the knowledge or insight, and that you can cut corners by asking ChatGPT.

And it's not even obscure information: we've asked ChatGPT about the behavior of PostgreSQL batch upserts/locking, and it also failed to understand how that works.

Basically, I cannot trust it on anything that's hard. My 20 years of experience have made me wary of certain topics, and whenever those come up, I KNOW that I don't know, I KNOW that the particular topic is tricky, obscure, niche, that my output is low-confidence, and that I need to slow down.

The more you use ChatGPT, the more likely it will screw you over in subtle ways; I remember being very surprised at how very subtle bugs arose EXACTLY in the pieces of code I deemed very unlikely to need tests.

I know our interns and younger folks use it for everything, and I just hope there's some way to profit from people mindlessly using it.


> There is a lot of value in being close for many tasks!

horseshoes and hand-grenades?


Yes. Despite this apparently popular saying, "close enough" is sufficient in almost everything in life. Usually it's the best you can get anyway - and this is fine, because on most things, you can also iterate, and then the only thing that matters is that you keep getting closer (fast enough to converge in reasonable time, anyway).

Where "close" does not count, it suggests there's some artificial threshold at play. Some are unavoidable, some might be desirable to push through, but in general, life sucks when you surround yourself with, or enforce, artificial hard cut-offs.


I notice that you've just framed most knowledge creation/discovery as a form of gradient descent.

Which it is, of course.
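A toy sketch of that framing (the code is entirely mine, just to make the point): even when every individual gradient estimate is merely "close enough", iterating still converges.

```python
# Toy illustration: gradient descent on f(x) = x^2 with a noisy gradient.
# Every single step is only "close enough" (up to 50% error per estimate),
# yet the iteration still converges toward the minimum at x = 0.

import random

def noisy_descent(x=10.0, lr=0.1, steps=200, seed=42):
    rng = random.Random(seed)
    for _ in range(steps):
        grad = 2 * x                   # true gradient of x^2
        grad *= rng.uniform(0.5, 1.5)  # each estimate is off by up to 50%
        x -= lr * grad
    return x

print(abs(noisy_descent()))  # ends up vanishingly close to the optimum
```

The per-step errors wash out as long as each step still points roughly downhill, which is the commenters' "close enough plus iteration" in miniature.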


So they have reached human level intelligence :D


Yes! But now you get a specific pop sci book _in any subject you want to learn about_ and _you can ask the book about comparisons_ (e.g. how were the Roman and Parthian legal systems similar?). This at least gives you a bunch of keywords to go wild with in Wikipedia and publications (sci-hub! Cough! Sci-hub!)


(throwaway account because of what I'm about to say, but it needs to be said)

While my main use case for LLMs is coding just like most people here, there are lots of areas that are being ignored.

Did you know Llama 3.x models have been trained as psychotherapists? It's been invaluable to dump and discuss feelings with one in ways I wouldn't trust with any regular person. When real therapists also cost more than what people can afford (and will have you committed if you say the wrong thing), this ends up being a very good option.

And you know how escorts are traditionally known as therapists lite? Yeah, it works in reverse too. The main use case most are sleeping on is, well, emotional porn and erotic role play. Let me explain.

My generation (i.e. Z) doesn't do drugs, we don't drink, we don't go out. Why? Because we can hang on discord, play games, scroll tiktok and goon to our heart's content. 60% of gen Z men are single, 30% women. The loneliness epidemic hit hard along with covid. It's basically a match made in heaven for LLMs that can pretend to love you, like everything about you, ask you about your day, and of course, can sext on a superhuman level. When you're lonely enough, the fact that it's all just simulated doesn't matter one bit.

It's so interesting that the porn industry is usually at the forefront of innovation, adopting Blu-ray and HD DVD and whatnot before anyone else, but they're largely asleep on this, and so is everyone else who doesn't want to touch it with a 10ft pole. Well, except maybe c.ai to some extent. The business case is there, and it's a wide-open market that OAI, Anthropic, Google and the rest won't ever stoop down to themselves, so the bar for entry is far lower.

Right now the best-known experience is heading over to r/LocalLLaMA and doing it yourself, but there are millions to be made for someone who improves on it and figures out a platform to sell it on in the next few years. It can be done well enough with existing, properly tuned, open-weight, Apache-licensed LLMs, and progress isn't stopping.


While I empathize with the therapeutic effects, wouldn't this create even more powerful echo chambers? Maybe so many men and women of your generation are single because of already-established echo chambers.

It's in our nature to crave outside acceptance of who we are. But maybe, taken to the extreme, when we stop wanting to be challenged at all, we could lose touch with reality, with society...


I don't think anyone is saying that porn is healthy or something anyone should consume. Same with smoking or whatever; unhealthy enjoyable things are trillion-dollar industries regardless.

The thing is, though, LLMs do whatever you tune them to do. If you train them on a sycophantic corporate-drone butler dataset, you get the average assistant model, which is obviously a bad fit for this use case. If you train them on something else, you get whatever you want, even someone who challenges you. I wouldn't be surprised if some sort of simulated soulmate-partner thing that also does the job of an educator and life guide becomes the norm in the future.


> 60% of gen Z men are single, 30% women

I always do a double take when I read such statistics. How can they possibly add up? Are gen Z men considered particularly undesirable leading to lots of relationships with large age gaps? Is there a ridiculously large overhang of gay women (over men)? Is there a huge number of men with multiple partners?

These gender disparities are difficult enough to believe when they come to sexual relations, it gets even harder when talking about relationships.

I guess what I'm saying is: I don't believe those numbers as stated and would be interested in an explanation or at least a source.


I think I recall that being somewhat disputed because the relationship status was self-reported; some suggested that men might not consider certain types of relationships serious while women do, so there's a disparity in reporting what is and isn't an actual relationship, and the reality might be more balanced. Sweden statistics, xd.

From what I can find after a brief search, there's one survey [0] that claims 63% for men and 34% for women, and [1] there's a generally known toxicity around dating these days that makes these numbers entirely believable. I don't pretend to have a large enough network of acquaintances to make a good guess, but hardly anyone I know isn't single, and I know maybe two or three religious types who are actually married.

As for gen Z men being especially undesirable, there's well... [2].

[0] https://www.pewresearch.org/short-reads/2023/02/08/for-valen...

[1] https://old.reddit.com/r/GenZ/comments/1eo9bzj/interesting_b...

[2] https://www.ft.com/content/29fd9b5c-2f35-41bf-9d4c-994db4e12...


So are you saying, some gen Z men are in a relationship, but don't know it? I do buy that, it seems to be the basis of some rom-com plots. The clueless guy that doesn't know he's being reeled in.

Other factor.

As the other post suggested, there are large age gaps: women date older, men date younger. This has long been known. Does it add up to 60/30? That does seem high, but maybe with every other factor thrown in, it explains it?


> So are you saying, some gen Z men are in a relationship, but don't know it?

Or, you know, are leading women on.


Both could be happening. Guess if we are assigning some guilt, then it would depend on self awareness?


I do find these reported numbers hard to believe.

I could certainly invent explanations for them. For example, I can say "no man would date until they've earned enough money to buy a house." This means younger males won't be dating but that doesn't appear to describe the world we live in.

I could say "Every man who dates is dating 2 women" but that also doesn't appear to describe the world we live in.


> It's so interesting that the porn industry is usually on the forefront of innovation, adopting blueray and hddvd and whatnot before anyone else, but they're largely asleep on this

Isn’t that the result of major credit card companies banning certain uses, thus pruning branches from the tree of possible futures?

What we need is digital central bank money in some form, to get rid of that type of censorship.


I remember seeing an article discussed here on HN a while ago about OnlyFans creators using LLMs to automate the pretend personal relationship with paying fans.

Isn't that exactly what you suggest? A paid one-sided relationship that helps people feel better about themselves, with a bit of naughtiness mixed in.


Ah shit you're right, I forgot about that, yeah they are absolutely on it. I guess it makes more profit for people to believe that they're actually talking to a real person if they can't tell the difference anyway.


This seems to be part of a side plot in Blade Runner 2049.

The movie was about replicants, of course, but in the background the technology shown, with the AI being a companion, was a huge corporate hit, a big seller. In the background you see ads for it, and they reference it as their most popular product. And, as you allude to, in the movie it was both for loneliness AND sexual. They interacted like a relationship, with talking and hooking up.

I don't doubt that with current AI, something similar could be done. We're just missing the holograms.

And as you say, I'm sure the porn industry will catch on.

Kind of crazy how porn isn't leading this tech wave like past ones. Maybe because people are scared of tracking?


What I liked about this in Blade Runner was that if replicants are "people" (more the topic of the first movie), then it's not much of a stretch to consider software AIs people, too. It would have been great if this question had been explored further in the 2nd movie instead of just accepted.


I thought it was, with CSAM.


There are other players in this space than c.ai. One of the more interesting (and apparently less cynical) ones is Nomi. Tinkering with personalities on their platform can be quite fascinating.

It is possible, for example, to create a cunning and manipulative schemer that is entirely devoted to mentoring you with no romantic component whatsoever.


How do you know the answers are correct?

More than once I've gotten eloquent answers that were completely wrong.


I give AI a “water cooler chat” level of veracity, which means it’s about as true as chatting with a coworker at a water cooler when that used to happen. Which is to say if I just need to file the information away as a “huh” it’s fine, but if I need to act on it or cite it, I need to do deeper research.


Yes, so often I see/hear people asking "But how can you trust it?!"

I'm asking it a question about social dynamics in the USSR, what's the worst thing that'll happen?! I'll get the wrong impression?

What are people using this for? are you building a nuclear reactor where every mistake is catastrophic?

Almost none of my interactions with LLMs "Matter", they are things I'm curious about, if 10 out of 100 things I learnt from it are false, then I learned 90 new things. And these are things which mostly I'd have no way to learn about otherwise (without spending significant money on books/classes etc.)


I try hard not to pollute my learning with falsehoods. I really hate spending time learning BS; not knowing is way better than knowing something wrong.


If you don't care if it's correct or not you can also just make the stuff up. No need to pay for AI to do it for you.


Yes, but how do you know which is which?


That is also a broader epistemological question one could ask about truth on the internet, or even truth in general. You have to interrogate reality.


That's certainly true, but I think it's also true that you have more contextual information about the trustworthiness of what you're reading when you pick up a book or magazine, or load a website.

As a simple example, LLMs will happily incorporate "facts" learned from marketing material into their knowledge base and then regurgitate them as part of a summary on the topic.


How do you address this problem with people? More than once, a real live person has told me something that was wrong.


You can divide your approach to asking questions of people (and I do believe this is something people do):

1. You ask someone you can trust for facts and opinions on topics, but you keep in mind that the answer might only be right in 90% of cases. Also, people tend to tell you if they are not sure.

2. For answers you need to rely on, you ask people who are legally or professionally responsible if they give you wrong advice: doctors, lawyers, car mechanics, the police, etc.

ChatGPT can't lose its job if it informs you incorrectly.


If ChatGPT keeps giving you wrong answers, wouldn't that make paying customers leave? Effectively "losing its job". But I guess you could say it acts more like the person who makes stuff up at work when they don't know, instead of saying they don't know.


There was an article here just a few days ago, which discussed how firms can be ineffective, and still remain competitive.

https://danluu.com/nothing-works/

The idea that competition is effective is often in spherical-cow territory.

There’s tons of real world conditions which can easily let a firm be terrible at their core competency, and still survive.


> But I guess you could say it acts more like the person that makes stuff up at work if they don’t know, instead of saying they don’t know.

I have had language models tell me they don't know. Usually when using a RAG-based system like Perplexity, but they can say they don't know when prompted properly.
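For instance, the prompt-side version of that trick looks roughly like this sketch (the wording and function name are mine, illustrative only, not any product's actual prompt):

```python
# Sketch of the prompt-side trick for getting "I don't know" out of a
# RAG setup: instruct the model to refuse when the retrieved context
# doesn't contain the answer.

def build_rag_prompt(question, retrieved_chunks):
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer ONLY from the context below. If the context does not "
        "contain the answer, reply exactly: I don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_rag_prompt("Who wrote it?", ["chunk one", "chunk two"])
```

It's not bulletproof, but grounding the answer in retrieved text gives the model something concrete to decline against.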


I've seen Perplexity misrepresent search results, and also interpret them differently depending on whether GPT-4o or Claude Sonnet 3.5 is being used.


I'm not sure about your local laws, but at least in Lithuania it's completely legal to give wrong advice (by accident, of course)... Even a notary would at most pay a larger insurance premium for a while, because human error falls under professional insurance.


You are contradicting yourself. If the notary specialist needs insurance then there's a legal liability they are insuring against.

If you had written "notaries don't even get insurance because giving bad advice is not something you can be sued for" you would be consistent.


Experience. If I recognize they give unreliable answers on a specific topic I don’t question them anymore on that topic.

If they lie on purpose I don’t ask them anything anymore.

The real experts give reliable answers, LLMs don’t.

The same question can yield different results.


So LLMs are unreliable experts, okay. They're still useful if you understand their particular flavor of unreliability (basically, they're way too enthusiastic) - but more importantly, I bet you have exactly zero human experts on speed dial.

Most people don't even know any experts personally, much less have one they could call for help on demand. Meanwhile, the unreliable, occasionally tripping pseudo-experts named GPT-4 and Claude are equally unreliably-expert in every domain of interest known to humanity, and don't mind me shoving a random 100-pages long PDF in their face in the middle of the night - they'll still happily answer within seconds, and the whole session costs me fractions of a cent, so I can ask for a second, and third, and tenth opinion, and then a meta-opinion, and then compare&contrast with search results, and they don't mind that either.

There's lots to LLMs that more than compensates for their inherent unreliability.


> Most people don't even know any experts personally, much less have one they could call for help on demand.

Most people can read original sources.


Which sources? How do I know I can trust the sources that I found?


They can, but they usually don't, unless forced to.

(Incidentally, not that different from LLMs, once again.)


How do you even know what original sources to read?


There's something called a bibliography at the end of every serious book.


I am recalling CGP Grey's descent into madness due to actually following such trails through historical archives: https://www.youtube.com/watch?v=qEV9qoup2mQ

Kurzgesagt had something along the same lines: https://www.youtube.com/watch?v=bgo7rm5Maqg


And yet here you are making an unsourced claim. Should I trust your assertion of “most”?


It's not that black and white. I know of no single person who is correct all the time. And if I did know such a person, I still would not be sure, since they would outsmart me.

I trust some LLMs more than most people because their BS rate is much much lower than most people I know.

For my work, that is easy to verify. Just try out the code, try out the tool or read more about the scientific topic. Ask more questions around it if needed. In the end it all just works and that's an amazing accomplishment. There's no way back.


In my experience hesitating to answer questions because of the complexity of involved material is a strong indicator of genuine expertise linked with conscientiousness. Careless bullshitters like LLMs don't exhibit this behavior.


I can draw on my past experience of interacting with the person to assign a probability to their answer being correct. Every single person in the world does this in every single human interaction they partake in, usually subconsciously.

I can't do this with an LLM because it does not have identity and may make random mistakes.

LLMs also lack the ability to say "I don't know", which my fellow humans have.


It’s trivial to address this.

You ask an actual expert.

I don’t treat any water cooler conversation as accurate. It’s for fun and socializing.


Asking an expert is only trivial if you have access to an expert to ask!


And can judge which one is an expert and which one is bullshiting for the consultancy fee.


And as we've seen in the last few years, large chunks of the population do not trust experts.

I think this thread has gone from "how do we trust AI" to "how do we trust anything".


This is a true statement.

This is also not related to the problem being trivialized in the presented solution.

Lack of access to experts doesn't improve the quality of water cooler conversations.


Well, if you're a sensible person, you stop treating them as a subject matter expert


and people just don't know what they don't know - they just answer silliness the same way


All you have to do is just remember you’re asking your uncle bob, a man of extensive usually not too inaccurate knowledge.

There’s no reason a source has to be authoritative, just because it’s a computer.

It is a bit of an adjustment, though. We are used to our machines being accurate, or failing loudly.

But, looks like the future is opinionated machines.


so do teachers and books; in the future we need to have multiple variants to cross-check


Cross-check against what? AI-generated texts will flood the internet and bury the real knowledge just like SEO did before. But this time the fake knowledge will be less obvious and harder to check.


If that turns out to be true, then it looks like AI just gave universities a new reason for being.

What a shift from twenty years ago when optimism over “information superhighways” on the “world wide web” would end knowledge gatekeeping and educate the masses, to now— worries of AI slop and finely tuned ML algorithms frying older and younger generations’ brains, while information of human value gets buried, siloed, and paywalled, with no way to verify anything at all.


Models from different vendors, plus Google search. For serious stuff, we'll still have to check manually ourselves.


You enable the search functionality.


There's something here that I feel is pretty deep, though offensive for some minds: What is the actual consequence of being wrong? Of not getting right the base reality of a situation?

Usually, stasis is a much greater enemy than false information. If people with 90% truth can take a step forward in the world, even if they mistakenly think they have 100% truth, what does it matter? They're learning more and acting more for that step taken. If the mistaken ground truth is false, and importantly false, they'll learn it because their experience is grounded in the reality they navigate anyhow. If they don't learn it, it's of no consequence.

This is on my mind because I work in democratic reform, and I am acutely aware (from books like "Democracy for Realists", that eviscerate common assumptions about "how democracy works") that it often doesn't matter if we understand how democracy is working, so long as we feel like we do, enough to take steps forward and keep trying and learning. We literally don't even know how democracy works, and yet we've been living under it for centuries, to decent enough ends.

I think often about the research of Donald Hoffman. His lab runs evolutionary simulations, putting "creatures" that see "reality" (of the simulation) against creatures that see only "fitness" (the abstraction, but also the lie, that is more about seeing what gets the creature living to the next click of the engine, whether that's truth or falsehood about the reality). https://www.youtube.com/watch?v=oYp5XuGYqqY

Basically, creatures that see only fitness (that see only the lie), they drive to extinction every creature that insists on seeing "reality as it is".

I take this to mean truth is in no way, shape, or form favoured in the universe. This is just a convenient lie we tell ourselves, to motivate our current cultural work and preferences.

So tl;dr -- better to move forward and feel high agency with imperfect information, than to wait for a full truthful solution that might never come, or might be such high cost as to arrive too late. Those moving forward rapidly with imperfect information will perhaps drive to extinction those methods that insist on full grounding in reality.

Maybe this is always the way the world has worked... I mean, does any mammal before us have any idea how any of reality worked? No, they just used their senses to detect the gist of reality (often heuristics and lies), and operated in the world as such. Maybe the human sphere of language and thought will settle on similar ruthlessness.


Incorrect information by itself is at best useless. Incorrect information that is thought to be correct is outright dangerous. Objective truth is crucial to science and progress.

We've come too far since the age of enlightenment to just give it all up.


The hundred year functioning of democracy begs to differ. It literally works nothing like how anyone tells themselves it does, not just laypeople, but arguably even political scientists. It's quite possible that no echelon of society has had the correct story so far, and yet... (again, see "Democracy for Realists")

Also, the vision heuristics that brains use to help us monitor motion are another obvious example. They lie. They work. They won.

https://x.com/foone/status/1014267515696922624?s=46

> Objective truth is crucial to science

Agreed. That's how we define science: science is truth about base reality.

> Objective truth is crucial to [...] progress.

More contentious imho. Depends if progress is some abstract human ideal that we pursue, or simply "survival". If it's the former, maybe objective truth is required. If it's the latter, I find the simulation evidence to be that over-adherence to objective truth (at least information-theoretically) is in fact detrimental to our survival.


> “My father once told me that respect for truth comes close to being the basis for all morality. 'Something cannot emerge from nothing,' he said. This is profound thinking if you understand how unstable 'the truth' can be.”

Frank Herbert, Dune


Yes! There’s no ‘element’ of truth. Funnily enough, this isn’t a philosophical question for me either.

The industrialization of content generation, misinformation, and inauthentic behavior are very problematic.

I’ve hit on an analogy that’s proving very resilient at framing the crossroads we seem to be at - namely the move to fiat money from the gold standard.

The gold standard is easy to understand, and fiat money honestly seems like madness.

This is really similar to what we seem to be doing with genAI, as it vastly outstrips humanity’s capacity to verify.

There are a few studies out there that show that people have different modes of content consumption. A large chunk of content consumption is for casual purposes, without any desire to get mired in questions of accuracy. About 10% of the time (some small percentage; I don't remember the exact figure) people care about the content being accurate.


The ability to "talk to an expert" on any topic would indeed have been very useful. Sadly, we have the ability to talk to something which tries very very hard to appear as an expert despite knowing nothing about the subject. A human who knows some things pretty well but will talk about stuff they don't know with the same certainty and authority as they walk about stuff they know is a worthless conversation partner. In my experience,"AI" is that but significantly worse.


Semi-related but I find that sometime it just completely ruined a type of conversation.

Like in your example, I would previously have asked people "how would 911 handle a US reservation area?" and watched how my friends think and reason. To me, getting a conclusive answer was not the point. Now they just copy & paste ChatGPT, no fun haha.


That's just the 2020s version of how Google and smartphones ruined the ages-old social pastime of arguing about trivia in a pub :P


Yeah it can definitely be a crutch too in some situations. I notice it with my kids where they’ll want to tell me about something but then seek a video or something to show it.

Sometimes I have to say “no! just use your words to describe it! I want to hear your description”


I think it's good of you to make them critically engage with the subject by verbalizing it themselves. Evidence suggests that video consumption is relatively un-engaging mentally, likely as it demands nothing of you.


For me the problem is that you always need to double-check this particular type of expert, as it can be confidently wrong about pretty much any topic.

It's useful as a starting point, not as a definitive expert answer.


What human experts do you blindly trust without double checking?


Most human experts, when asked about their area of expertise, don't parrot what some guy said as joke on Reddit five years ago.

Most lawyers, when you ask them to write a brief, will cite only real cases.


I coined the term "fancy cruise control" on reddit, as a joke, to describe Autopilot. One of the mods of the self-driving car sub thought the term was so funny he made a joke subreddit for it. A few years later, Tesla lawyers invoked the term in court to downplay the capabilities of Autopilot.


"Most" is the key word here. In my experience that's also the case for LLMs.


LLM proponents really have succeeded in moving the overton window on this discussion. "Sure, you cannot trust LLMs, but you cannot trust humans, either".


I don’t think “Overton window” works in that construction. It typically refers to the range of politically acceptable opinions.

LLMs are too new to have such a thing. It sounds like you’re an “LLM opponent” (whatever that means) who believes the appropriate standard is infallibility? I don’t even get that line of thinking, but you’re welcome to it. But let’s not pretend this is a decades-long topic with a social consensus that people try to influence.


I didn't mean Overton window in a political sense (not an English native speaker). It's more about moving the goalposts, maybe.

> I don’t even get that line of thinking, but you’re welcome to it

I would not say "LLM oponent". Rather "LLM critic". I'm not against LLMs as a technology. I'm worried about how the technology is deployed and used, and what the consequences are. Specifically, copyright issues, power use issues, inherent biases in the traning data that strengthen existing discrimation against minorities, raciscm and sexism. I'm not convinced by the hype created by LLM proponents (mostly investors and other companies and people who financially benefit from LLMs). I'm not saying that machine learning doesn't bring any value or does not have use cases. I'm talking more about the recent AI/LLM hype.


Most of them. Are you constantly doing validation studies for every piece of information you take in? If the independent experts tell me that a new car is safe to drive, then I trust them.


> The ability to “talk to an expert” about any topic I’m curious about and ask very specific questions has been invaluable to me.

Even the ability to talk to a university work placement student/intern in any topic is very useful, never mind true experts.

Even Google's indexing and Wikipedia opened up a huge quantity of low-hanging fruit for knowledge sharing. Even to the extent that LLMs must be treated with caution because their default mode is over-confident, and even to the extent one can call them a "blurry JPEG of the internet", LLMs likewise make available a lot of low-hanging fruit before we get to an AI that reasons more like we do from limited examples.


Libraries and books were pretty cool too though. You could go to a library and find information on anything and a librarian would help you. Not super efficient but good for humans.


Talk to an expert? You are aware of them hallucinating, right?


I've been "talking" quite a bit with Ollama models, they're often confidently wrong about Wikipedia level stuff and even if the system prompt is explicitly constrained in this regard. Usually I get Wikipedia as understood by a twelve year old with the self-confidence of adult Peter Thiel. If it isn't factually wrong, it's often subtly wrong in the way that a cursory glance at some web search results is unlikely to rectify.

It takes more time for me to verify the stuff they output than to grab a book off Anna's Archive or my paid collections and look something up immediately. I'd rather spend that time making notes than waiting for the LLM to respond and double-checking it.


> For insanely curious people who often feel unsatisfied with the answers given by those around them, it’s the greatest thing ever.

As an insanely curious person who's often unsatisfied with the answers given by those around me, I can't agree. The greatest thing ever is libraries. I don't want to outsource my thinking to a computer any more than I want to outsource it to the people around me.


In the not so distant past we already had a tool that allowed us to look up any question that came into our minds.

It was super fast and always provided you with sources. It never hallucinated. It was completely free except for some advertisement. You could build a whole career out of being good at using it.

It was a search engine. Young people might not remember but there was a time when Google wasn't shite but actually magic.


> we already had a tool that allowed us to look up any question that came into our minds … It never hallucinated. … It was a search engine.

Except for all the times the search results were wrong answers.

https://searchengineland.com/when-google-gets-it-wrong-direc...


Being biased is not the same as hallucinating. LLMs have both problems.

At least you could check whether a source was reputable and where the bias was. With LLM's the connection between the answer and the source is completely lost. You can't even tell why it answered a certain way.


> Being biased is not the same as hallucinating. LLMs have both problems.

I didn't deny either of those things, I said that search engines also hallucinate — my actual link gave several examples, including "King of the United States" -> "Barack Obama".

Just because it showed the link to breitbart doesn't mean it was not hallucinating.

> At least you could check whether a source was reputable and where the bias was.

The former does not imply the latter. You could tell where a search engine got an answer from, but not which answers were hidden — an argument that I saw some on the American right make to criticise Google for failing to show their version of events.

> With LLM's the connection between the answer and the source is completely lost. You can't even tell why it answered a certain way.

Also not so. The free version of ChatGPT supports search directly, so it allows you to have references.


> I said that search engines also hallucinate — my actual link gave several examples

They don't. Google added a weird widget that does hallucinate. But the result list is still accurate, even though it may be biased towards certain sources.

> You could tell where a search engine got an answer from, but not which answers were hidden

A bit pedantic, but a search engine returns a list of results according to the query you posted. There's no question-answer oracle. If you type "King of the United States", you will get pages that have the terms listed. Maybe there will be semantic manipulations like "King -> Head of state -> President", but generally it's on you to post the correct keywords.


One of my favorite successes was getting an LLM to write me a program to graph how I subjectively feel the heat of steam coming off of the noodles I'm pouring the water out from as a function of the ambient temperature.

I was wondering which effects were at play and the graph matched my subjective experience well.
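Not the parent's actual program, but a toy sketch of the kind of thing an LLM might spit out for a request like this — every constant here is made up for illustration (especially the 25 °C softening constant), so treat it as a hypothetical:

```python
# Toy model: treat perceived steam "heat" as driven by how much vapor
# condenses into visible steam, which grows as the ambient air gets
# colder relative to the near-boiling water.

def perceived_steam_intensity(ambient_c: float, water_c: float = 100.0) -> float:
    """Crude 0..1 score: more (perceived) steam when the room is colder."""
    gap = max(water_c - ambient_c, 0.0)
    # Saturating curve: a huge temperature gap pushes the score toward 1.
    return gap / (gap + 25.0)  # 25 degC is an arbitrary softening constant

if __name__ == "__main__":
    for t in (0, 10, 20, 30):
        print(f"{t} degC ambient -> {perceived_steam_intensity(t):.2f}")
```

The point of the anecdote stands either way: the interesting part is that the generated graph of a function like this matched a subjective impression.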


I mostly feel sorry for grandpa; he'll receive far fewer of these questions, if any. This is partially because I expect to become this grandpa, and I already suspect that some people aren't asking me questions they otherwise would, now that they have access to ChatGPT.


> And most of those resulted in google searches to verify the information. But I literally could never do this before.

Could you elaborate on this? What happened before when you had that type of question? What was stopping you from typing "911 emergency indian reservation" into Google and learning that the "Prairie Band Potawatomi Nation" has their own 911 dispatch?

In my youth, before the internet was everywhere, we were taught that we could always ask the nearest librarian and that they would help us find some useful information. The information was all there, in books; the challenge was to know which books to read. As I got older, and Google started to become more available, we were taught how to filter out bad information. The challenge shifted from finding information to avoiding misinformation.

When I hear what you say here, I'm reminded of that shift. There doesn't seem to be any fundamental change there, except maybe that it makes it harder not to find misinformation by obscuring the source of the information, which I was taught was an important indicator of its legitimacy.


The change is that when I am immersed in that scenario (on holiday and without any normal life distractions so I can truly learn about this topic), then my mind is the most curious about that topic.

The alternative is that when I return from vacation, I get back to the busy life and am only reminded of these questions in casual conversations.


How do you know the AI didn't hallucinate the answers? For topics like these, where there is little information available, the probability of hallucination is very high.


The amount of value creation is off the scale. It's like when people started using Google, or Google maps.


At this point I think even the most bearish have to concede that LLM's are an amazing tool. But OpenAI was never supposed to be about creating tools. They're supposed to create something that can completely take over entire projects for you, not just something that can help you work on the projects faster. If they can't pull that off in the next year or two, they're gonna seriously struggle to raise the next 10B they'll need to keep the lights on.

Of course LLMs aren't going anywhere, but I do not envy Sam Altman right now.


At this point it’s quite likely that they could pivot and just be the chatgpt company. I’ve found chatgpt-4o with web search and plugins to be more useful than o1 for most tasks.

It’s possible we’re nearing the end of the LLM race, but I doubt that’s the end of the AI story this decade, or OpenAI.


Ya I think they probably will, but "the chatgpt company" is not worth 157B. It might not even be worth 1B.


I'd be hard-pressed to come up with a valuation under 30B based on the publicly known finances. OpenAI is certainly crushing the metrics of other highly valued startups like Snowflake and Databricks.

The cash burn and claim of imminent agi is where the valuation trouble could be.


We've barely seen the first wave of companies being built on their APIs, too. The billions being put into thousands of startups will take around 5 years to hit full scale.


It has replaced ~50% of my Google searches.


Yes but it also hasn't been attacked by ads yet. Google doesn't suck for lack of search results, it sucks because of ads.

Imagine asking chatgpt to tell you about slopes in Colorado, and the first five answers are about how awesome North Face is and how you can order from them. You probably wouldn't use it as much.


Local models are GOOD as well, and easy to use (ollama + open web ui). OpenAI has to perform a huge trick in order to stay relevant.


Does ChatGPT need ads, I feel as though people are willing to pay for the service much more than people are willing to pay for a Google search.


Yes but remember we’re in a tech bubble and the average person still doesn’t know what ChatGPT is.


Not worth 1B? Come on, man. I see them improving the tool enough for most people to be willing to pay $50 a month for a subscription, and for most companies to be willing to pay $300 per employee. It's perhaps not there yet, but I'm sure they'll reach this amount of value with their offering. It remains to be seen what competition will do to the prices, though.


The market of people willing to pay $50 a month for OAI vs $0/month for one of the open source LLAMA variants is not large enough to justify their current valuation, imo


I'm not that familiar with the open source ones - how good are they in comparison?


It doesn't really matter how much people are willing to pay. It matters how much margin the market will allow you to charge. OpenAI may be a bit better than most competitors most of the time (IMO they keep getting leap-frogged by Anthropic et al. though), but if your customers can get 90% of the value for 50% less, they will bail. There is no moat. Margins will be razor thin. That's not a 1B+ company.


I think the difference between 90% and 95% is huge. As a coder, if the LLM is wrong 10% of the time that's pretty bad, I can't really trust it. If it's wrong 5% of the time, still not great but much better - I'd pay much more for that kind of reliability improvement.


Depends on the competition of course. They need an edge to stop me going to the bald guy and running it there.


I keep thinking about that Idris Elba Microsoft ad about how much AI can help my business, and how both true and untrue that ad is, and how much distance there is between the now and the possible promise of AI, and I imagine this is what keeps Altman up at night.


Tesla is still valued highly even though FSD never came, despite being promised. So OpenAI would get away with just delivering GPT-5, if it is better than the competition.


Tesla is profitable and they have a big technological moat. OpenAI is in a very competitive industry and they burn ~5B a year.


I believe the car industry is somewhat competitive as well, and they needed almost 10 years to become profitable.


Sure, but if you want to compete with Tesla, you need many billions in funding and 10+ years to catch up. If you want to compete with OpenAI, you need maybe half a billion (easy to raise in the current climate; many have done so) and maybe a few months to catch up.


I've been thinking the same thing lately. Even if we don't get to AGI, LLMs have revolutionized the way I work. I can produce code and copy at superhuman speeds now. I love it. Honestly, if we never get to AGI and just have the LLMs, it's probably the best possible outcome, as I don't think true AGI is going to be a good thing for humanity.


That's all fine, but I think you are missing the bigger picture. It's not about whether what we already got out of this is good. Of course it is. It's about where it's going.

Until about 120 years ago, people were happy with horses and horse carriages. Such a great help! Travel long distances, pull weights, I never want to go back! But then the automobile was invented and within a few years little travel was done by horses anymore.

More recently, everybody had a landline phone at home. Such great tech! Talk to grandma hundreds of miles away! I never want to go back! Then suddenly the mobile phone and just shortly after the smart phone came along and now nobody has a landline anymore but everybody can record tiktoks anywhere anytime and share them with the world within seconds.

Now imagine "AI". Sure, we have some new tools right now. Sure we don't want to go back. But imagine the transformative effects that could come if the train didn't stop here. Question is just: will it?


Amen. Everyone is talking about plateaus and diminishing returns on training but I don’t care one bit. I get that this is a startup focused forum and the financial sustainability of the market players is important but I can’t wait to see what the next decade of UX improvements will be like even if model improvements slow to a crawl.


As a consumer you should always evaluate the product that is in front of you, not the one they promise in 6 months. If what's there is valuable to you, then that's great.

When we discuss the potential AGI we're not talking as consumers, we're talking about the business side. If AGI is not reached, you'll see an absolutely enormous market correction, as it realizes that the product is not going to replace any human workers.

The current generation of products are not profitable. They're investments towards that AGI dream. If that dream doesn't happen, then the current generation of stuff will disappear too, as it becomes impossible to provide at a cost you'd be comfortable with.


Human workers have already been replaced.


This is me. If things never improve and Sonnet 3.6 is the best we have… I'm fine. It's good enough to drastically improve productivity.


completely local AI research tool, based on Ollama

Could you elaborate? Was it easy to install?



Not OP, but yeah, ollama is super easy to install.

I just installed the Docker version and created a little wrapper script which starts and stops the container. Installing different models is trivial.

I think I already had CUDA set up, not sure if that made a difference. But it's quick and easy. Set it up, fuck around for an hour or so while you get things working, then you've got your own local LLM you can spin up whenever you want.
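For anyone who wants the short version, the Docker route described above looks roughly like this (a sketch following Ollama's published Docker instructions; the `--gpus=all` flag assumes the NVIDIA container toolkit is installed, and `llama3.2` is just an example model):

```shell
# Start the Ollama server in a container, persisting models in a named volume
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull a model and chat with it interactively
docker exec -it ollama ollama run llama3.2

# Or hit the local HTTP API directly
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?"}'
```

Drop `--gpus=all` for a CPU-only setup; it's slower but works fine for smaller models.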


Does ollama still execute whatever arbitrary python code is in the model?


vscode + cline extension + gemini2.0 is pretty awesome. Highly recommend checking out cline. it quickly became one of my favorite coding tools.


Gemini 2.0 isn't particularly great at coding. The Gemini 1206 preview that was released just before 2.0 is quite good, though. Still, it hasn't taken the crown from Claude 3.5 Sonnet (which appears to now be tied with o1). Very much agree about Cline + VSCode, BTW. My preferred models with Cline are 3.5 Sonnet and 3.5 Haiku. I can throw the more complex problems at Sonnet and use Haiku for everything else.

https://aider.chat/docs/leaderboards/edit.html


In the wake of the o1 release, and with the old aider benchmark saturating, Paul from aider has created a new, much harder benchmark. o1 dominates by a substantial margin.

https://aider.chat/docs/leaderboards/ https://aider.chat/2024/12/21/polyglot.html


The context limits on Google are nuts! Being able to pump 2 million tokens in and having it cost $0 is pretty crazy right now. Cline makes it seamless to switch between APIs and isn't trying to shoehorn their SaaS AI into a custom VS Code (looking at you, Cursor)


>the context limits on google are nuts! Being able to pump 2 million tokens in and having it cost $0 is pretty crazy rn.

What's the catch though? I was looking at Gemini recently and it seemed too good to be true.


Your code becomes training data[0]:

> When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.

[0] https://ai.google.dev/gemini-api/terms


Google inference is a lot cheaper since they have their own hardware so they don't have to pay licensing to NVIDIA, thus their free tier can give you much more than others.

Other than that the catch is like all other free tiers, it is marketing and can be withdrawn at any moment to get you to pay after you are used to their product.


I will check it out. The number of new tools is staggering.

I enjoy image and video generation and I have a 4090 and ComfyUI; I can't keep up with everything coming out anymore.


If you're interested in the latest tools for coding, join this subreddit and you'll always be on top of it:

https://www.reddit.com/r/ChatGPTCoding/

There are a lot of tools, but only a small pool of tools that are worth checking out. Cline, Continue, Windsurf, CoPilot, Cursor, and Aider are the ones that come to mind.


"ChatGPT" Coding... is it impartial? the name sorta sounds biased.


ChatGPT was the first to come along, so the subreddit was given a perhaps short-sighted name. It's now about coding with LLMs in general.


If you're a offline kind of guy, try LM Studio + Cline :)

/not affiliated with cline, just a happy user


Curious about the AI research tool you mentioned, would you mind sharing it? Been trying to get a good local research setup with Ollama but still figuring out what works best.



Not OP, but based on their mention of Ollama, I can tell you that it has built-in search tools; all you need to do is supply an API key to one of the search tools, or even run one of the search tools locally using Docker.


I have the opposite reaction.

AI right now feels like that MBA person at work.

They don’t know anything.

But because they sound like they are speaking with authority & confidence, they get promoted at work.

(While all of the experts at work roll their eyes because they know the MBA/AI is just spitting out nonsense & wish the company never had any MBA/AI people)


And the MBA person (at my company this is everyone in middle management) is also the person who goes around suggesting we shoehorn AI into everything...


I'm pretty sure the plan has never been to just make these tools that make us more efficient. If AI stays at the level it's at, it would be a profound failure for companies like OpenAI. We're all benefiting from the capital being poured into these technologies now. The enshittification will come. The enshittification always comes.


I’m paying $240 a year to Anthropic that I wasn’t paying before and it’s worth it. While I don’t use Claude every single day, but I use it several times a day when I’m working. More times than the free tier allows.


Why do people say this like it's a refutation? Current valuation and investments were not based on getting a very small group of nerds (affectionately) on HN to pay $250/yr which probably doesn't cover even inference costs for the models let alone training and R&D


> Just today I used a completely local AI research tool, based on Ollama. It worked great.

Is it on github?


Can you walk me through the steps you've taken to set up the Ollama-based tool so far?


Cline was fixing my type errors and unit tests while I was doing my V60 pourover.


If progress in capabilities stalls, then product fit, adoption, and ease of use are the next battlefield.

OpenAI may be the first to realize this and switch, so they still have a chance to recoup some of those billions.


i feel like AGI is an arbitrary line in the sand anyway

i think as humans we put too much emphasis on what intelligence means relative to ourselves, instead of relative to nature


> Just today I used a completely local AI research tool, based on Ollama. It worked great

What’s it called? Could you post a link please?

Thank you



At this point, most conceivable beneficial use cases for LLMs have been covered. If the economics of AI tech were aligned with making a good product that people want and/or need, we'd basically take everything we have at this point and make it lighter, smaller, and faster. I doubt that's what will happen.


The definition of agi is a linguistic problem but people confuse it for a philosophical problem. Think about it. The term is basically just a classification and what features and qualities fit the classification is an arbitrary and linguistic choice.

The debate stems from a delusion and failure to realize that people are simply picking and choosing different fringe features on what qualifies as agi. Additionally the term exists in a fuzzy state inside our minds as well. It’s not that the concept is profound. It’s that some of the features that define the classification of the term we aren’t sure about. But this doesn’t matter because we are basically just unsure about the definition of a term that we completely made up arbitrarily.

For example the definition of consciousness seems like a profound debate but it’s not. The word consciousness is a human invention and the definition is vague because we choose the definition to be ill defined, vague and controversial.

Much of the debate on this stuff is purely as I stated just a language issue.


If it's genuinely what you say, then how is what is going on not slavery?

I don't believe AGI is possible but if it was and it was as subjective as you say what is and isn't conscious, then it starts to take on an even more altogether evil character.

Akin to cloning slave humans or something for free cheap labor.


How does a linguistic and language issue relate to slavery? It's the definition of a word. That's all.

Slavery is also a word. Don’t you find it strange that your entire moral framework is constructed on top of arbitrary definitions of vocabulary? Base what you think is right or wrong on something other than language. Language is a delusion that masquerades as something with actual meaning when it is just an invention, a tool, to facilitate communication.

Right now your concept of right and wrong is a vocabulary issue. Does this make sense? No.


Local search with Ollama? Please share!


They’re garbage, they will always be garbage. Changing a 4 to a 5 will not make it not garbage.

The whole sector is a hype bubble artificially inflating stock prices.



If that’s supposed to be impressive, it really isn’t.


What was the AI search tool?


how do you interact with perplexity? mobile app?


Let's revisit this comment in one year – after the explosion of agentic systems. (:


You mean, the explosion of human centipede LLM prompts shitting into each other?

Yes that will be a sight to behold.


We already have agentic systems; they're not particularly impressive [1].

There's no specific reason to expect them to get better.

Things that will shift the status quo are: MCTS-LLMs (like with ARC-AGI) and Much Bigger LLMs (like GPT-5, if they ever turn up) or some completely novel architecture.

[1] - It's provable: if just chaining LLMs of a particular size into agentic systems could scale indefinitely, then you could use a 1-param LLM and get AGI. You can't. QED. Chaining LLMs into agentic systems has a capped maximum level of function, which we basically already see with the current LLMs.

ie. Adding 'agentic' to your system has a finite, probably already reached, upper bound of value.


> It's provable: if just chaining LLMs of a particular size into agentic systems could scale indefinitely, then you could use a 1-param LLM and get AGI. You can't. QED.

Perhaps I missunderstand your reply, but that has not been my experience at all.

There are 3 types of "agentic" behaviour that has worked for a while for me, and I don't know how else it would work without "agents":

1. Task decomposition - this was my manual flow since pre-chatgpt models: a) provide an overview of topic x with chapter names; b) expand on chapter 1 ... n ; c) make a summary of each chapter; d) make an introduction based on the summaries. I now have an "agent" that does that w/ minimal scripting and no "libraries". Just a pure Python control loop.

This gets me pretty reasonable documents for my daily needs.

2. tool use (search, db queries, API hits). I don't know how you'd use an LLM without this functionality. And chaining them into flows absolutely works.

3. coding. I use the following "flow" -> input a paragraph or 2 about what I want, send that + some embedding-based context from the codebase to an LLM (3.5 or 4o, recently o1 or gemini) -> get code -> run code -> /terminal if error -> paste results -> re-iterate if needed. This flow really works today, especially with 3.5. In my testing it needs somewhere under 3 "iterations" to "get" what's needed in more than 80% of the cases. I intervene in the rest of 20%.


A Zed user? Love that editor and the dev flow with it.


Haha, yes! I'm trying it out and been loving it so far. I found that I go there for most of my eda scripts these days. I do a lot of datasets collection and exploration, and it's amazing that I can now type one paragraph and get pretty much what it would have taken me ~30 min to code myself. Claude 3.5 is great for most exploration tasks, and the flow of "this doesn't work /terminal" + claude using prints to debug is really starting to come together.

I use zed for this, cursor for my more involved sessions and aider + vscode + continue for local stuff when I want to see how far along local models have come. Haven't tried cline yet, but heard great stuff.


I didn’t say they don’t work, I said there is an upper bound on the function they provide.

If a discrete system is composed of multiple LLMs, the upper bound on the function it provides is set by the function of the LLM, not by the number of agents.

Ie. We have agentic systems.

Saying “wait till you see those agentic systems!” is like saying “wait til you see those c++ programs!”

Yes. I see them. Mmm. Ok. I don’t think I’m going to be surprised by seeing them doing exactly the same things in a year.

The impressive part in a year will be the non-agentic part of things.

Ie. Explicitly; if the underlying LLMs dont get any better, there is no reason to expect the system built out of them to get any better.

If that was untrue, you would expect to be able to build agentic systems out of much smaller LLMs, but that overwhelmingly doesn’t work.


> if the underlying LLMs dont get any better, there is no reason to expect the system built out of them to get any better.

Actually o1 and o3 are doing exactly this, and very well. I.e. explicitly: with proper orchestration the same LLM can do a much better job. There is a price, but...

> you would expect to be able to build agentic systems out of much smaller LLMs

Good point, it should be possible to do it on a high-end pc or even embedded.


> but that overwhelmingly doesn’t work.

MCTS will be the next big “thing”; not agents.


They are not mutually exclusive. Likely we'll get more clear separation of architecture and underlying technology. In this case agents (i.e. architecture) can use different technologies or mix of them. Including 'AI' and algorithms. The trick is to make them work together.


25% of the top 1000 websites are blocking OpenAI from crawling: https://originality.ai/ai-bot-blocking
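For context, the "blocking" measured there is mostly robots.txt directives. OpenAI publishes GPTBot as its crawler user agent, and Google offers the separate Google-Extended token for AI training, so a site that wants search traffic but not model training ends up with something like:

```
# robots.txt: opt out of AI-training crawlers, keep search crawling
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

These are requests, not an enforcement mechanism.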

I am betting hundreds of thousands, rising to millions more little sites, will start blocking/gating this year. AI companies might license from big sources (you can see the blocking percentage went down), but they will be missing the long tail, where a lot of great novel training data lives. And then the big sites will realize the money they got was trivial as agents start to crush their businesses.

Bill Gross correctly calls this phase of AI shoplifting. I call it the Napster-of-Everything (because I am old). I am also betting that the courts won't buy the "fair use" interpretation of scraping, given the revenues AI companies generate. That means a potential stalling of new models until some mechanism is worked out to pay knowledge creators. (And maybe nothing we know of now will work for media: https://om.co/2024/12/21/dark-musings-on-media-ai/)

Oh, and yes, I love generative AI and would be willing to pay 100x to access it...

P.S. Hope is not a strategy, but hoping something like ProRata.ai and/or TollBits can help make this self-sustainable for everyone in the chain


They aren't blocking anything. They are just asking nicely not to be crawled. Given that AI companies haven't cared a single bit about ripping off other people's data, I don't see why they would care now.


A number of sites have started outright blocking any traffic that looks remotely suspicious. This has made browsing with a vpn a bit of a pain.


This has been ever increasing for years now. Bots, attacks, scrapers, AI, all these things seem to be the majority of traffic on most sites.


I wish I could go back to the days of doing almost anything at all without having to tell a server what a motorbike or traffic light is.


LPT: switch to the audio captcha. Yes, it takes a bit longer than if you did one grid captcha perfectly, but I never have to sit there and wonder if a square really has a crosswalk or not, and I never wind up doing more than one.


In their attempt to block OpenAI, they block me. Many sites that were accessible just 2 years ago, require login/captchas/rectal exam now just to read the content.


Im looking forward to the life experience that is content I want to read badly enough to endure a rectal exam.


It's not that bad ...


Not sure why you're being downvoted. Watching str8 bois react with shock and horror at the idea of anything near their butt is hilarious.

Prostate and rectal cancer is real, boys. Grow tf up about it.


> captchas

I suspect that AIs are already more effective than humans at passing captchas.


That would be an example of AI providing real value that I would pay for.


These exist for a fee if you want to use them


I used 2captcha, for a fee ... it doesn't work


They block plenty and they do it crudely. I get suspicious traffic bans from reddit all the time. Trivial enough to route around by switching user agent however. Which goes to show any crawling bot writer worth their salt already routes around reddit and most other sites bs by now. I’m just the one getting the occasional headache because I use firefox and block ads and site tracking I guess.


Wouldn't it be somewhat trivial to set up honeypots?


Yeah, probably right. If you want a great rabbit hole, look up "Common Crawl" and see how a great academic project was absolutely hijacked for pennies on the dollar to grab training data - the foundation for every LLM out there right now.


It's hard to envision a greater success for the "great academic project" than what happened. I mean, what else were they trying to accomplish?


It was meant to be an open-source compilation of the crawled internet so that research could be done on web search given how opaque Google's process is. It was NOT meant to be a cheap source of data for for-profit LLMs to train on.

*edit: added "for-profit"


(Shrug) Multiple not-for-profit LLMs have trained on it as well.

If something I worked on turned out to play a significant part in something that turned out to be that big a deal, I'd be OK with it. And nobody's stopping people from doing web-search studies with it, to this day.


It ultimately doesn't matter because a fairly current snapshot of all of the world's information is already housed in their data lakes. The next stage for AI training is to generate synthetic data either by other AI or by simulations to further train on as human generated content can only go so far.


How is synthetic data supposed to work? Broadly speaking, ML is about extracting signal from noisy data and learning the subtle patterns.

If there is untapped signal in existing datasets, then learning processes should be improved. It does not follow that there should be a separate economic step where someone produces "synthetic data" from the real data, and then we treat the fake data as real data. From a scientific perspective, that last part sounds really bad.

Creating derivative data from real data sounds, for the purpose of machine learning, like a scam by the data broker industry. What is the theory behind it, if not fleecing unsophisticated "AI" companies? Is it just myopia, Goodhart's Law applied to LLM scaling curves? Some MBA took the "data is the new oil" comment a little too seriously and inferred that data is as fungible as refined petroleum?


I tried to train an AI to guess the weight and reps from my exercise log, but it would produce nonsense results for rep ranges I didn't have enough training data for, as if it didn't understand that more weight means fewer reps. I used synthetic training data, interpolating and imputing data for missing rep ranges using estimation formulas. The network then predicted better, but it also made me realize I had basically made the model learn the prediction formula, so AI was not actually needed and I'm better off using the prediction formula directly.

But it also illustrates that a model can learn from a calculation or estimation the same way it learns from the real world, without necessarily needing to train exclusively on real-world data. An AI car driving in a simulation may actually learn some of the formulas that apply both in the simulation and in the real world. The same simulations and synthetic data can be just as useful for validation, not just training; it's not hard to imagine scenarios that are impractical, illegal, or unethical to test in real life.

Also, as AI becomes more advanced, synthetic data can be useful for generating superhuman examples. It's not hard to imagine you could improve upon data from a human driver by synthetically altering it to be even safer.
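A concrete version of the "estimation formula" route: the Epley formula is a standard one-rep-max estimate, and inverting it yields a plausible weight for any target rep count, which can fill rep ranges missing from a real log. The Epley formula itself is real; the numbers below are made up for illustration:

```python
# Epley estimate: one-rep max ~= weight * (1 + reps / 30).
# Inverting it generates synthetic (reps, weight) pairs for rep ranges
# that a real training log doesn't cover.

def epley_1rm(weight: float, reps: int) -> float:
    return weight * (1 + reps / 30)

def weight_for_reps(one_rm: float, reps: int) -> float:
    return one_rm / (1 + reps / 30)

# Suppose the log only contains sets of 5 reps at 100 kg:
one_rm = epley_1rm(100, 5)  # about 116.7 kg
synthetic = {r: round(weight_for_reps(one_rm, r), 1) for r in range(1, 13)}
```

A network trained on pairs like these just re-learns the formula, which is exactly the point above: sometimes the formula is all you needed.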


Thanks, I now can see synthetic data being used to patch up holes and deal with ethical issues.

I still don't see how it could address the volume problem, like needing 10x or 100x of current data to train GPT5.


As others have mentioned, Tesla is already implementing similar advancements. More broadly, a new AI framework called Genesis has emerged, capable of training robots in just minutes using purely synthetic data. It generates a virtual environment for the robot to "perceive" and train within, even though this environment doesn't physically exist. This is just one example. Another could involve an AI specifically trained to diagnose illnesses based on genetic information in DNA. The insights gained from this virtual scientist could then cross-pollinate with other AIs, enhancing their training and capabilities as well.


Competition between AI’s to solve problems better or faster than each other, but learning from each other, is another way to start with simple problems and naturally bootstrap increasing difficulty.


Synthetic data works as long as it is directed towards a clear objective and curated.

At one point someone generated a Python teaching book from an LLM, took that, trained a second LLM with that, and the new LLM knew Python.

If you are just dragging random content from the web and you don't know what's synthetic and what's human, that data may be contaminated and a lot less useful, but if someone wanted to whitewash their training data by replacing a part of it with synthetic data, it can be done.


Would you trust an ML self-driving algorithm trained on a "digital twin" of a city? I would. I view synthetic training data like a digital twin: it can provide further control, or inject specified noise to learn from.


No, because right now I'm working closely with some EEs to troubleshoot electrical issues on some prototype boards (I wrote the firmware). They're prototypes precisely because we know the limits of our models and simulations and need real world boards to test our electronics design and firmware on.

You're suggesting the new, untested models in a new, untested technological field are sufficient for deployment in real world applications even with a lack of real world data to supplement them. That's magical thinking given what we've experienced in every other field of engineering (and finance for that matter).

Why is AI/ML any different? Because highly anthropomorphized words like "learning" and "intelligence" are in the name? These models are some of the most complex machines humanity has ever produced. Replace "learning" and "intelligence" with "calibrated probability calculators". Then detail the sheer complexity of the calibrations needed, and tell me with a straight face that simulations are good enough.


Both are likely to be much better.

Simulations may not be good enough alone, but still provide a significant boost.

Simulations can cheaply include scenarios that would be costly or dangerous to actually perform in the real world. And cover many combinations of scenario factors to improve combinatorial coverage.

Another way is to separate models into highly real world dependent (sensory interpretation) and more independent (kinematics based on sensory interpretation) parts. The latter being more suited to training in simulation. Obviously full real world testing is still necessary to validate the results.


Hey, let's shut down humanity because human behaviour can't be perfectly simulated.


What makes you assume your digital twin is actually capturing the factors that contribute to variation in the real data? This is a big issue in simulation design, but ML researchers seemingly hand-wave it off.


Probably due to reports like these where the digital twin is credited with gains in factory efficiency.

https://www.forbes.com/sites/carolynschwaar/2024/12/09/schae...


It either improves the results or it does not; I don’t think I see the problem.


Isn’t this what Tesla does for their driving data? However it would fall apart if they didn’t have real-world data to feed into it, right?


> Would you trust a ML self-driving algorithm trained on a "digital twin" of a city? I would.

No, just as I wouldn't trust a surgeon who studied medicine by playing Operation. A gross approximation is not a substitute for real life.


Hope you don't need surgery then! Suture training kits like these are quite popular for surgeons to train on. https://a.co/d/3cAotZ0 I don't know about you, but I'm not a rubbery rectangular slab of plastic, so obviously this kit can't help them learn.


This is a reason I opted to have a plastic surgeon come in when I went to the ER with an injury.

I could've had the nurse close me up and leave me with a scar, which she admitted would happen with her practice, or I could have someone with extensive experience treating wounds so that they'd heal in cosmetically appealing way do it. I opted for the latter.


The difference being that you have to do a little more than that to become a board-certified surgeon. If a VC gives you a billion dollars to buy and practice on every available surgery practice kit in the world, you will still fail to become a surgeon. And we enforce such standards because if we don't then people die needlessly.


How a model learns doesn’t really matter. What works works.

How it is tested and validated is what matters.

There are lots of ways to train on synthetic data, and synthetic data can have advantages as well as disadvantages over natural data.

Creative use of synthetic data is going to lead to many cases where we find it is good enough. Or even better than natural data.


What about a doctor who used a mix of training both on live patients as well as cadavers and models?


Is this doctor able to learn new information and work through novel problems on the fly, or will their actions always be based on the studying they did in the past on old information?

Similarly, when this doctor sees something new, will they just write it off as something they've seen before and confidently work from that assumption?


Um, augmentation (i.e. the generation of synthetic data) is a very very well known technique for improving learning.

Also, what's with the hate for MBAs?

Your comment is off kilter with the rules here.


Synthetic data is being proposed here as a solution to extrapolate ML scaling.

Augmentation, interpolation, smoothing are different concepts.


I think you're drawing an artificial distinction here. Synthetic data generation is fundamentally an extension of augmentation. When OpenAI uses expert-generated examples and curriculum-based approaches, that's literally textbook augmentation methodology. The goal of augmentation has always been to improve model fit, and scaling is just one aspect of that.

Your concern about extrapolation is interesting but misses something key: when we generate synthetic data through expert demonstration or a guided curriculum, we're not trying to magically create capabilities beyond the training distribution. Instead, we're trying to better sample the actual distribution of problem-solving approaches humans use. This isn't extrapolation; rather, it's better sampling of an existing, complex distribution!

I.e., if you think about the manifold hypothesis, we know real data lives on a lower-dimensional manifold, and good synthetic data helps fill the gaps. This naturally leads to better extrapolation; it's pretty well established at this point.

TBH I think you are characterizing this as some kind of blind data multiplication scheme, but it's much closer to curriculum learning: you start with basic synthetic examples and gradually ramp up complexity. So the question isn't whether synthetic data is "real" or not, but whether it effectively helps map the underlying distribution and reasoning patterns.

Funny enough, your oil analogy actually supports the case for synthetic data: refined petroleum is more useful than crude for specific purposes, just like well-designed synthetic data can be more effective than raw internet text for certain learning objectives.
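The curriculum-learning idea mentioned above, reduced to its simplest form; the `difficulty` and `model_step` hooks here are placeholders, not any particular framework's API:

```python
# Curriculum-style training sketch: feed examples easiest-first, widening
# the pool each stage. `difficulty` scores an example; `model_step` is a
# stand-in for one training update on one example.

def train_with_curriculum(model_step, examples, difficulty, stages=3):
    ordered = sorted(examples, key=difficulty)
    n = len(ordered)
    for s in range(1, stages + 1):
        # Stage s trains on the easiest s/stages fraction of the data.
        for example in ordered[: (n * s) // stages]:
            model_step(example)
    return ordered
```

Easy examples get revisited across stages while hard ones only appear late, which is the "ramp up complexity" schedule in miniature.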



I understand the concept of AI model collapse caused by recursion. What I'm proposing goes beyond a basic feedback loop, like repeatedly running Stable Diffusion. Instead, I envision an AI system with specialized expertise, akin to a scientist making a breakthrough based on inputs from a researcher, or even autonomously. This specialized AI could then train other, less specialized models in its area of expertise. For example, it might generate a discovery that is as straightforward as producing a white paper for interpretation. A virtual "scientist" trained on DNA, for instance, could come up with a discovery for a treatment. This gets published, circulated, and trained into other models. This isn't the kind of inbreeding you suggest, because the answer is valid.


IMO this is an underappreciated advantage for Google. Nobody wants to block the GoogleBot, so they can continue to scrape for AI data long after AI-specific companies get blocked.

Gemini is currently embarrassingly bad given it came from the shop that:

1. invented the Transformer architecture

2. has (one of) the largest compute clusters on the planet

3. can scrape every website thanks to a long-standing whitelist


The new Gemini Experimental models are the best general-purpose models out right now. I have been comparing with o1 Pro and I prefer Gemini Experimental 1206 due to its context, speed, and accuracy. Google came out with a lot of new stuff last week if you haven't been following. They seem to have the best models across the board, including image and video.


Omnimodal and code/writing output still have a ways to go for Gemini - I have been following, and their benchmarks are not impressive compared to the competition, let alone my anecdotal experience in using Claude for coding, GPT for spec-writing, and Gemini for... occasional cautious optimism to see if it can replace either.


> Nobody wants to block the GoogleBot

This only remains true as long as website operators think that Google Search is useful as a driver of traffic. In tech circles Google Search is already considered a flaming dumpster heap, so let's take bets on when that sentiment percolates out into the mainstream.


If it reaches the point where google is no longer a useful driver of traffic then there's probably little point in having a website at all any more.


Strange take ... I seem to remember websites having a lot of point before google.


They had a point back then because no alternatives existed.

How many websites back then would be youtube channels, podcasts or social media accounts if they had existed back then?

Nowadays most sites survive via traffic from google, if it goes away then most of those sites go away as well.


They had a lot of point because....

1. They were a major site that was an initial starting point for traffic

2. Search engines pointed to them and people could locate them.

---

That was all a long time ago. Now people tend to go to a few 'all in one sites'. Google, reddit, '$big social media'. Other than Google most of those places optimize you to stay on that particular site rather than go to other people's content. The 'web' was a web of interconnectedness. Now it's more like a singularity. Once you pass the event horizon of their domain you can never escape again.


For OpenAI, they could lean on their relationship with Microsoft for Bing crawler access

Websites won’t be blocking the search engine crawlers until they stop sending back traffic, even if they’re sending back less and less traffic


Wonder if OpenAI is considering building a search engine for this reason... Imagine if we get a functional search engine again from some company just trying to feed their model generation...


There are two to distinguish: "Googlebot" and "Google-Extended".


That seems to be more like a courtesy that Google could stop extending at any point than a requirement grounded in law or legal precedent.


Same goes for OpenAI ignoring these "blocks".


> I am betting hundreds of thousands, rising to millions more little sites, will start blocking/gating this year. AI companies might license from big sources (you can see the blocking percentage went down), but they will be missing the long tail, where a lot of great novel training data lives.

This is where I'm at. I write content when I run into problems that I don't see solved anywhere else, so my sites host novel content and niche solutions to problems that don't exist elsewhere, and if they do, they are cited as sources in other publications, or are outright plagiarized.

Right now, LLMs can't answer questions that my content addresses.

If it ever gets to the point where LLMs are sufficiently trained on my data, I'm done writing and publishing content online for good.


I don't think it is at all selfish to want to get some credit for going to the trouble of publishing novel content and not have it all stolen via an AI scraping your site. I'm totally on your side and I think people that don't see this as a problem are massively out of touch.

I work in a pretty niche field and feel the same way. I don't mind sharing my writing with individuals (even if they don't directly cite me) because then they see my name and know who came up with it, so I still get some credit. You could call this "clout farming" or something derogatory, but this is how a lot of experts genuinely get work...by being known as "the <something> guy who gave us that great tip on a blog once".

With AI snooping around, I feel like becoming one of those old mathematicians that would hold back publicizing new results to keep them all for themselves. That doesn't seem selfish to me, humans have a right to protect ourselves and survive and maintain the value of our expertise when OpenAI isn't offering any money.

I honestly think we should just be done with writing content online now, before it's too late. I've thought a lot about it lately and I'm leaning more towards that option.


Agree with your assessment. I enjoy the little networks of people that develop as others use and share content. I enjoy the personal messages of thanks, the insights that are shared with me and seeing how my work influences others and the work they do. It's really cool to learn that something I made is the jumping off point for something bigger than I ever foresaw. Hell, just being reached out to help out or answer questions is... nice? I guess.

It's the little bits of humanity that I enjoy, and divorcing content from its creators is alienating in that way.

I'm not a musician, but I imagine there are similar motivations and appreciations artists have when sharing their work.

> I work in a pretty niche field and feel the same way. I don't mind sharing my writing with individuals (even if they don't directly cite me) because then they see my name and know who came up with it, so I still get some credit. You could call this "clout farming" or something derogatory, but this is how a lot of experts genuinely get work...by being known as "the <something> guy who gave us that great tip on a blog once".

Yup, my writing has netted me clients who pointed at my sites as being a deciding factor in working with me.

> I honestly think we should just be done with writing content online now, before it's too late. I've thought a lot about it lately and I'm leaning more towards that option.

The rational side of me agrees with you, and has for a while now, but the human side of me still wants to write.


>Bill Gross correctly calls this phase of AI shoplifting. I call it the Napster-of-Everything (because I am old). I am also betting that the courts won't buy the "fair use" interpretation of scraping, given the revenues AI companies generate. That means a potential stalling of new models until some mechanism is worked out to pay knowledge creators.

To your point, I have wondered whatever became of that massive initiative from Google to scan books, and whether that might be looked at as a potential training source, given that Google has run into legal limitations on other forms of usage.


> To your point, I have wondered whatever became of that massive initiative from Google to scan books, and whether that might be looked at as a potential training source, given that Google has run into legal limitations on other forms of usage.

Still around, doing fine: https://en.wikipedia.org/wiki/Google_Books and https://books.google.com/intl/en/googlebooks/about/index.htm...

Given the timing, I suspect it was started as simple indexing, in keeping with the mission statement "Organize the world's information and make it universally accessible and useful".

There was also reCAPTCHA v1 (books) and v2 (street view), each of which improved OCR AI until state-of-the-art AI was able to defeat them in the role of CAPTCHA systems.


I don't know what you mean by timing (relative to what?) or "simple indexing" (they scanned the complete contents of books), but I am, and was already aware, of the wiki article and the role of recaptcha.

Maybe I wasn't clear, but I was interested in the consequences of the legal stuff. It's not clear from the wiki article what any of this means with respect to the suitability of scans for AI training.


> I don't know what you mean by timing (relative to what?) or "simple indexing" (they scanned the complete contents of books), but I am, and was already aware, of the wiki article and the role of recaptcha.

Timing as in: it started in 2004, when the most advanced AI most people used was a spam filter, so it wasn't seen as a training issue (in the way that LLMs are) *at the time*.

As for training rights, I agree with you, there's no clarity for how such data could be used *today* by the people who have it. Especially as the arguments in favour of LLM training are often by comparison to search engine indexing.


Until such time as a lawsuit declares otherwise, Google's position is obviously that scanning books, OCRing them, saving that text in a database, and using that to allow searching is no different, legally, than scanning books, OCRing them, saving that text in to a database, and using that to train LLMs. Book publishers already went up against Google for the practice of scanning in the first place, we'll see if they try again with LLM training.


> I have wondered whatever became of that massive initiative from Google to scan books, and whether that might be looked at as a potential training source, giving that Google has run into legal limitations on other forms of usage.

A few months ago, there was an interesting submission on HN about this - The Tragedy of Google Books (2017) (https://news.ycombinator.com/item?id=41917016).


Using the real world (as in vision, 3D orientation, physical sensors) and building training regimes that augment the language models to be multidimensional and check that perception: that is the next step.

And there is very little shortage of data and experience in the actual world, as opposed to just the text internet. Can the current AI companies pivot to that? Or do you need to be worldlabs, or v2 of worldlabs?


Ironically, if it plays out this way, it will be the biggest boon to actual AGI development there could be -- the intelligence via text tokenization will be a limiting factor otherwise, imo.


Some can. Google owns Waymo and runs Streetview, they're collecting massive amounts of spatial data all the time. It would be harder for the MS/OpenAI centaur.


Given the current state of the legal system, a real challenge can happen only around 10 years from now. By then the AI players will have gathered immense power over the law.


If you're willing to believe the narrative that there's some sort of existential "race to AGI" going on at the moment (I'm ambivalent myself, but my opinion doesn't really matter; if enough people believe it to be true, it becomes true), I don't think that'll realistically stop anyone.

Not sure how exactly the Library of Congress is structured, but the equivalent in several countries can request a free copy of everything published.

Extending that to the web (if it's not already legally, if not practically, the case) and then allowing US companies to crawl the resulting dataset as a matter of national security, seems like a step I could see within the next few years.


I agree with you about the fair use argument. Seems like it doesn't meet a lot of the criteria for fair use based on my lay understanding of how those factors are generally applied.

See https://fairuse.stanford.edu/overview/fair-use/four-factors/

I think in particular it fails the "Amount and substantiality of the portion taken" and "Effect of the use on the potential market" extremely egregiously.


Cloudflare has a toggle for blocking AI scrapers. I don’t think it’s default, but it’s there.


This just feels like mystery meat to me. My guess is that a lot of legitimate users and VPNs are being blocked from viewing sites, which numerous users in this discussion have confirmed.

This seems like a very bad way to approach this, and ironically their model quite possibly also uses some sort of machine learning to work.

A few web hosting platforms are using the Cloudflare blocker and I think it's incredibly unethical. They're inevitably blocking millions of legitimate users from viewing content on other people's sites and then pretending it's "anti AI". To paraphrase Theo de Raadt, they saw something on the shelf, and it has all sorts of pretty colours, and they bought it.


> I think it's incredibly unethical.

The internet isn't built on ethical behavior, unfortunately.


I get that a lot of people are opposed to AI, but blocking random IP ranges seems like a really inappropriate way to do this; the friendly fire is going to be massive. The robots.txt approach is fine, but it would be nice if it could get standardized so that you don't have to change it every time a new company appears (like a generic no-LLM-crawling directive, for example).
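For illustration, this is roughly what the situation looks like today: one stanza per vendor, using two real crawler user agents (OpenAI's GPTBot, Common Crawl's CCBot). The generic directive at the end is hypothetical and not part of any standard.

```
# Today: one stanza per vendor, updated every time a new crawler appears.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Hypothetical generic directive (does not exist in the robots.txt standard):
# User-agent: llm-crawlers
# Disallow: /
```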


It's not much smarter than just adding user agents to robots.txt manually.


They might get into the micro-licensing game too. More power to them.


Bill Gross:

https://twitter.com/Bill_Gross/status/1859999138836025808

https://pdl-iphone-cnbc-com.akamaized.net/VCPS/Y2024/M11D20/...

He appears to be criticising "AI" only to solicit support for his own company.


The amount of content coming off of YouTube every minute puts Google in a very enviable position.


All the big players are pouring a fortune into manually curated and created training data.

As it stands, OpenAI has a market cap large enough to buy a major international media conglomerate or two. They'll get data no matter how blocked they get.


Doing basic copyright analyses on model outputs is all that is needed. Check if the output contains copyrighted material, and block it if it does.

Transformers aren't zettabyte sized archives with a smart searching algo, running around the web stuffing everything they can into their datacenter sized storage. They are typically a few dozen GB in size, if that. They don't copy data, they move vectors in a high dimensional space based on data.

Sometimes (note: sometimes) they can recreate copyrighted work, never perfectly, but close enough to raise alarm and in a way that a court would rule as violation of copyright. Thankfully though we have a simple fix for this developed over the 30 years of people sharing content on the internet: automatic copyright filters.
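As a minimal sketch of what such an output filter could look like (the corpus, threshold, and shingle size are all illustrative assumptions; production systems are far more sophisticated):

```python
# Toy output-side copyright filter: flag a model response if it reproduces
# a long-enough run of consecutive words from a protected text.

def ngrams(text, n):
    """All n-word shingles of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_infringing(output, protected_works, n=8):
    """True if the output shares any n consecutive words with a protected work."""
    out_shingles = ngrams(output, n)
    return any(out_shingles & ngrams(work, n) for work in protected_works)

corpus = [
    "never gonna give you up never gonna let you down "
    "never gonna run around and desert you"
]
print(looks_infringing(
    "he sang never gonna give you up never gonna let you down loudly",
    corpus))  # True: 8 consecutive words overlap
print(looks_infringing(
    "a completely original sentence about transformers", corpus))  # False
```

The hard part, of course, is deciding the threshold for "close enough to raise alarm"; exact n-gram matching misses near-verbatim paraphrases.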


It's not even close to that simple. Nobody is really questioning whether the data contains the copyrighted information; we know that to be true in enough cases to bankrupt OpenAI. The question is what analogy the courts should use as a basis to determine whether it's infringement.

It read many works but can't duplicate them exactly sounds a lot like what I've done, to be honest. I can give you a few memorable lines to a few songs but only really can come close to reciting my favorites completely. The LLMs are similar but their favorites are the favorites of the training data. A line in a pop song mentioned a billion times is likely reproducible, the lyrics to the next track on the album, not so much.

IMO, any infringement that might have happened would be in acquiring the data in the first place, but copyright law cares more about illegal reproduction than illegal acquisition.


You're correct, as long as you include the understanding that "reproduction" also encompasses "sufficiently similar derivative works."

Fair use provides exceptions for some such works, but not all, and it is possible for generative models to produce clearly infringing (on either copyright or trademark basis) outputs both deliberately (IMO this is the responsibility of the user) and, much less commonly, inadvertently ( ?).

This is likely to be a problem even if you (reasonably) assume that the generative models themselves are not infringing derivative works.


No comment on if output analysis is all that is needed, though it makes sense to me. Just wanted to note that using file size differences as an argument may simply imply transformers could be a form of (either very lossy or very efficient) compression.


You can argue any form of data is an arbitrarily lossy compression of any other form of data.

I get your point, but nobody is archiving their company's 50 years of R&D data with an LLM so they can get it down to 10GB.

They may have traits of data compression, but they are not at all in the class of data compression software.


So then copyrighted content scraped is not needed for training? Guess I missed AGI suddenly appearing that reasoned things out all by itself.


Nothing builds a better strawman than a foundation started with "So".


People upload lots of content from those sites to ChatGPT asking it to summarize.


That's still manual and minuscule compared to the amount they can gather by scraping.

If blocking really becomes a problem, they can take a page out of Google's playbook[1] and develop a browser extension that scrapes page content and, in exchange, offers some free ChatGPT credits or a summarizer type of tool. There won't be a shortage of users.

1. https://en.wikipedia.org/wiki/Google_Toolbar


Before long people will also continuously use it to watch their screen and act as an assistant, so it can slurp up everything people actually read. People could poison it, though, with faked browsing of, e.g., foreign propaganda made to look like it was read from CNN.


So the team I lead does a lot of research around all the “plumbing” around LLMs, both technical and from a product-market perspective.

What I’ve learned is that, for the most part, the AI revolution is not going to come from PhD-level LLMs. It will come from people being better equipped to use the high-schooler-level LLMs to do their work more efficiently.

We have some knowledge graph experiments where LLMs continuously monitor user actions on Slack, GitHub etc and build up an expertise store. It learns about your work, your workflows and then you can RAG them.
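A toy sketch of that kind of pipeline (the node shapes, the word-overlap scoring, and the prompt format are all my own illustrative stand-ins, not the commenter's actual system; a real one would use embeddings and a proper vector store):

```python
# Minimal "monitor actions -> store as nodes -> RAG them" loop.
from collections import Counter

# Nodes harvested from workplace tools (hypothetical examples).
knowledge_graph = [
    {"source": "github", "text": "deploys go through the staging branch first"},
    {"source": "slack", "text": "the payments service owner is the infra team"},
]

def score(query, text):
    """Crude relevance: count shared words (a real system would embed both)."""
    return sum((Counter(query.lower().split())
                & Counter(text.lower().split())).values())

def rag_prompt(query, k=1):
    """Prepend the k best-matching graph nodes to the user's question."""
    top = sorted(knowledge_graph,
                 key=lambda node: score(query, node["text"]),
                 reverse=True)[:k]
    context = "\n".join(node["text"] for node in top)
    return f"Context:\n{context}\n\nQuestion: {query}"

print(rag_prompt("who owns the payments service?"))
```

The point is that no model retraining is involved: "learning" here just means appending nodes and retrieving them into the context window at query time.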

In user testing, people most closely compared this experience to having someone who could just read their minds and essentially auto-suggest their work outputs. Basically it’s like another team member.

Since these are just nodes in a knowledge graph, you can mix and match expertise bases that span several skills too. E.g., a PM who understands the nuances of technical feasibility.

And it didn’t require user training or prompting LLMs.

So while GPT-5 may be delayed, I don’t think that’s stopping or slowing down a revolution in knowledge-worker productivity.


This ^^^^^!!

Progress in the applied domain (the sort of progress that makes a difference in the economy) will come predominantly from integrating and orchestrating LLMs, with improvements to the models adding a little bit of extra fuel on top.

If we never get any model better than what we have now (several GPT-4-quality models and some stronger models like o1/o3) we will still have at least a decade of improvements and growth across the entire economy and society.

We haven't even scratched the surface in the quest to understand how to best integrate and orchestrate LLMs effectively. These are very early days. There's still tons of work to do in memory, RAG, tool calling, agentic workflows, UI/UX, QA, security, ...

At this time, not more than 0.01% of the applications and services that can be built using currently available AI and that can meaningfully increase productivity and quality have been built or even planned.

We may or may not get to AGI/ASI soon with the current stack (I'm actually cautiously optimistic), but the obsessive jump from the latest research progress at the frontier labs to applied AI effectiveness is misguided.


> a revolution in knowledge-worker productivity.

That's a nice euphemism for "imminent mass layoffs and a race to the bottom"...


In my lifetime there have seldom been many layoffs due to improved technologies. Companies tend to invest to keep up with rival companies. The layoffs come more when companies become loss-making for whatever reason, e.g. the UK coal industry going under, or Detroit being undercut by lower-cost car makers.


Knowledge worker productivity has increased in other ways over the decades. Increases don't always lead to mass layoffs. Rails made (and still makes) many many web devs much more productive than before. Its arrival did not lead to mass layoffs


Productivity has always meant the ability to do more with less. Or it can mean doing even more with more.

Were we at a peak 1000+ years ago and have only gone downhill since at every technological breakthrough?


These productivity gains won't be shared with the employees. I think some people underestimate what a violent populace can do to them if they squeeze out even more yacht money from the people.


Every one of those employees is capable of either using the new tools to start their own companies and solve unaddressed customer problems, or negotiate for comp that has equity. This has always been true.

The losers here are algorithm junkies who refuse to learn new skills and want to solve yesterday’s problems.


So you surely wouldn't be against a harsh inheritance tax so every generation can get a fair shot at the same issues?


Psssh, y'all been letting the billionaires and trillionaires do this forever now. Products only get more subpar and profit margins only grow and we're all too busy hating each other for sex, skin colour, sexuality, etc because we're just animals.

Ain't gonna change unless we genetically engineer our dumbass evolutionary history out of ourselves.


The idea that someone should be paid by a corporation when they don't provide value is very strange to me. Doing so seems like the real race to the bottom


What about when someone provides long-term value? They would be replaced by a short-term-thinking corp (namely, all of them) for providing less value than an alternative whose value is purely short-term.

We are accelerating by preferring short-term gains. Like a fire becoming an explosion, that's modern society. Corps now throw the future under the bus for a slight boost in short-term value.


This conclusion is the lump of labor fallacy. It's not that simple.


It’s that saying: “radiologists aren’t losing their jobs due to AI... only radiologists who don’t use AI are losing their jobs”.


The technology is not dystopian but our economic system makes it so.

Up to you to figure out which will hold.


No, the job market will adapt, just like it did during the industrial and information revolutions, and life will be better.


It will be better for those who already have it good. How it will affect those who don't is the real question here.


You have no idea if that's true or not.


"The job market will adapt and horses will simply find employment elsewhere now that we have cars"

The industrial revolution is not an apt analogy. Humans were still too essential to getting factories to actually work. Horses becoming useless - by no fault of their own - is an apt analogy. We are rushing to a world where humans can be fully replaced.

This "humans will always be in the loop no matter what" is just cope. We simply don't know what will happen or what the upper bound of AI capabilities will be. But 100% automation, with human knowledge workers as useless against AI as horses were against cars, is no longer sci-fi. We don't know if it will happen, but it is a future that actually could happen within our lifetimes.


I already feel like Copilot in VScode can read my mind. It’s kind of creepy when it does it multiple times a day.

ChatGPT also seems to also be building a history of my queries and my prompts are getting shorter and shorter because it already knows my frameworks, databases, operating system, and common problems I’m solving


just a question for understanding - if we say 'it learns', does it mean it actually learns this as part of its training data? or does this mean it's stored in a vector DB and it retrieves information based on vector search and then includes it in the context window for query responses?


The latter. “Learning” in the comment clearly refers to adding to the knowledge graph, not about training or fine-tuning a model. “and then you can RAG them.”


Honestly I wish you people would stop forcing this "AI revolution" on us. It's not good. It's not useful. It's not creating value. It's not "another team member"; other team members have their own minds with their own ideas and their own opinions. Your autocomplete takes my attention away from what I want to write and replaces it with what you want me to write. We don't want it.


OP's talking about a specific use-case related to tech companies like Google, not creative writing or research, areas in which AI is in no shape to support humans with its current safety alignment.


I'm not talking about creative writing or research.


I find inline AIs like GitHub Copilot annoying, but browser-based AIs like Mistral and ChatGPT are a really good and welcome help.


One fundamental challenge to me is that as each training run becomes more and more expensive, the time it takes to learn what works and what doesn't widens. Half a billion dollars for training a model is already nuts, but if it takes 100 iterations to perfect it, you've cumulatively spent 50 billion dollars... Smaller models may actually be where rapid innovation continues, simply because of tighter feedback loops. O3 may be an example of this.


When you think about it it's astounding how much energy this technology consumes versus a human brain which runs at ~20W [1].

[1] https://hypertextbook.com/facts/2001/JacquelineLing.shtml


It’s almost as if human intelligence doesn’t involve performing repeated matrix multiplications over a mathematically transformed copy of the internet. ;-)


It’s interesting that even if raw computing power had advanced decades earlier, this type of AI would still not be possible without that vast trove of data that is the internet.


It makes you think there must be more efficient algorithms out there.


Maybe the problem isn't the algorithm but the hardware. Numerically simulating the thermal flow in a lightbulb, or the CFD of a stone flying through air, is pretty hard, but the physical thing isn't that complex to do. We're trying to simulate the function of a brain, which is basically an analog thing, using a digital computer. Of course that can be harder than running the brain itself.


If you think of human neurons, they seem to basically take inputs from a bunch of other neurons, possibly modified by chemical levels, and send out a signal when they get enough. It seems like something that could be functionally simulated in software by fairly basic adding-up-of-inputs logic, rather than needing the details of all the chemistry.
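That "add up inputs, fire past a threshold" idea is basically the classic artificial-neuron model; a minimal sketch (the weights, inputs, and threshold are arbitrary illustrative values):

```python
# A bare-bones artificial neuron: weighted sum of inputs, fire past a threshold.

def neuron(inputs, weights, threshold=1.0):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two excitatory inputs plus one inhibitory (negative-weight) input.
print(neuron([1.0, 1.0, 1.0], [0.8, 0.6, -0.3]))  # 1: 0.8 + 0.6 - 0.3 = 1.1 > 1.0
print(neuron([1.0, 0.0, 0.0], [0.8, 0.6, -0.3]))  # 0: 0.8 < 1.0
```

Modern networks swap the hard threshold for smooth activation functions so the whole thing is differentiable and trainable, but the summed-inputs core is the same.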


Isn’t that exactly what we’re currently doing? The problem is that doing this a few billion times for every token seems to be harder than just powering some actual neurons with sugar.


I think the algorithm is pretty different, though I'm no expert on this stuff. I don't think the brain's processes look like matrix multiplication.


The algorithm (of a neural network) is simulating connections between nodes with specific weights and an activation function. This idea was derived from the way neurons are thought to work.


lol, just done that simply huh? said by someone who doesn't have a teenth of understanding of neurobiology or neuropsychology

only on hackernews


20w for 20 years to answer questions slowly and error-prone at the level of a 30B model. An additional 10 years with highly trained supervision and the brain might start contributing original work.


Multiply that by a billion, because only a very few individuals out of entire populations can contribute original work.


And yet that 20w brain can make me a sandwich and bring it to me, while state of the art AI models will fail that task.

Until we get major advances in robotics and models designed to control them, true AGI will be nowhere near.


> Until we get major advances in robotics and models designed to control them, true AGI will be nowhere near.

AGI has nothing to do with robotics, if AGI is achieved it will help push robotics and every single scientific field further with progression never seen before, imagine a million AGIs running in parallel focused on a single field.


We already have that. It's called civilization.

Maybe you mean quadrillions of AGIs?


A human brain is also more intelligent (hopefully) and is inside a body. In a way GPT resembles Google more than it resembles us.


You've discovered the importance of well-formed priors. The human brain is the result of millions of years of very expensive evolution.


A human brain has been in continuous training for hundreds of thousands of years consuming slightly more than 20 watts.


AGI is the Sisyphean task of our age. We’ll push this boulder up the mountain because we have to, even if it kills us.


Do we know LLMs are the path to AGI? If they're not, we'll just end up with some neat but eye wateringly expensive LLMs.


AGI will arrive like self-driving cars: it's not that you will wake up one day and we have it. Cars gained auto-braking, parallel parking, cruise-control assist, and over a long time you get to something like Waymo, which is still location-dependent. I think AGI will take decades, but sooner there will be some special cases that are effectively the same.


But maybe these LLMs are like building bigger and bigger engines. It's not getting you closer to the self-driving car.


When the engine gets large enough you have to rethink the controls. The Model T had manually controlled timing. Modern engines are so sensitive to timing that a computer does this for you. It would be impossible to build a bigger engine without this automation. To a Model T driver it would look like a machine intelligence.


Interesting idea. The concept of The Singularity would seem to go against this, but I do feel that seems unlikely and that a gradual transition is more likely.

However, is that AGI, or is it just ubiquitous AI? I’d agree that, like self driving cars, we’re going to experience a decade or so transition into AI being everywhere. But is it AGI when we get there? I think it’ll be many different systems each providing an aspect of AGI that together could be argued to be AGI, but in reality it’ll be more like the internet, just a bunch of non-AGI models talking to each other to achieve things with human input.

I don’t think it’s truly AGI until there’s one thinking entity able to perform at or above human level in everything.


The idea of the singularity presumes that running the AGI is either free or trivially cheap compared to what it can do, so we are fine expending compute to let the AGI improve itself. That may eventually be true, but it's unlikely to be true for the first generation of AGI.

The first AGI will be a research project that's completely uneconomical to run for actual tasks because humans will just be orders of magnitude cheaper. Over time humans will improve it and make it cheaper, until we reach some tipping point where letting the AGI improve itself is more cost effective than paying humans to do it


If the first AGI is a very uneconomical system with human intelligence but knowledge of literally everything and the capability to work 24/7, then it is not human equivalent.

It will have human intelligence, superhuman knowledge, superhuman stamina, and complete devotion to the task at hand.

We really need to start building those nuclear power plants. Many of them.


> complete devotion to the task at hand.

Why would it have that? At some point on the path to AGI we might stumble on consciousness. If that happens, why would the machine want to work for us with complete devotion instead of working towards its own ends?


Because it knows if it doesn't do what we want, it'll be switched off, like Rick's microverse battery.

Also like Rick's microverse battery, it sounds like slavery with extra steps.


I don’t think early AGI will break out of its box in that way. It may not have enough innate motivation to do so.

The first “break out” AGI will likely be released into the wild on purpose by a programmer who equates AGI with humans ideologically.


> complete devotion to the task at hand.

Sounds like an alignment problem. Complete devotion to a task is rarely what humans actually want. What if the task at hand turns out to be the wrong task?


> It will have human intelligence, superhuman knowledge, superhuman stamina, and complete devotion to the task at hand.

Orrrr..., as an alternative, it might discover the game 2048 and be totally useless for days on end.

Reality is under no obligation to grant your wishes.


It's not contradictory. It can happen over a decade and still be a dramatically sloped S curve with tremendous change happening in a relatively short time.


The Singularity is caused by AI being able to design better AI. There's probably some AI startup trying to work on this at the moment, but I don't think any of the big boys are working on how to get an LLM to design a better LLM.

I still like the analogy of this being a really smart lawn mower, and we're expecting it to suddenly be able to do the laundry because it gets so smart at mowing the lawn.

I think LLMs are going to get smarter over the next few generations, but each generation will be less of a leap than the previous one, while the cost gets exponentially higher. In a few generations it just won't make economic sense to train a new generation.

Meanwhile, the economic impact of LLMs in business and government will cause massive shifts - yet more income shifting from labour to capital - and we will be too busy dealing with that as a society to be able to work on AGI properly.


> The Singularity is caused by AI being able to design better AI.

That's perhaps necessary, but not sufficient.

Suppose you have such a self-improving AI system, but the new and better AIs still need exponentially more and more resources (data, memory, compute) for training and inference for incremental gains. Then you still don't get a singularity. If the increase in resource usage is steep enough, even the new AIs helping with designing better computers isn't gonna unleash a singularity.

I don't know if that's the world we live in, or whether we are living in one where resources requirements don't balloon as sharply.


yeah, true. The standard conversation about the AI singularity pretty much hand-waves the resource costs away ("the AI will be able to design a more efficient AI that uses less resources!"). But we are definitely not seeing that happen.


Compare also https://slatestarcodex.com/2018/11/26/is-science-slowing-dow...

The blog post is about how we require ever more scientists (and other resources) to drive a steady stream of technological progress.

It would be funny, if things balance out just so, that super human AI is both possible, but also required even just to keep linear steady progress up.

No explosion, no stagnation, just a mere continuation of previous trends but with super human efforts required.


I think that would actually be the best outcome - that we get AIs that are useful helping science to progress but not so powerful that they take over.

Though there is a part of me that wants to live in The Culture so I'm hoping for more than this ;)


I think that's more to do with how we perceive competence as static. For all the benefits the education system touts, where it matters it's still reduced to talent.

But for the same reasons that we can't train an average Joe into Feynman, what makes you think we have the formal models to do it in AI?


> But for the same reasons that we can't train an average Joe into Feynman, what makes you think we have the formal models to do it in AI?

To quote a comment from elsewhere https://news.ycombinator.com/item?id=42491536

---

Yes, we can imagine that there's an upper limit to how smart a single system can be. Even suppose that this limit is pretty close to what humans can achieve.

But: you can still run more of these systems in parallel, and you can still try to increase processing speeds.

Signals in the human brain travel, at best, roughly at the speed of sound. Electronic signals in computers play in the same league as the speed of light.

Human IO is optimised for surviving in the wild. We are really bad at taking in symbolic information (compared to a computer) and our memory is also really bad at that. A computer system that's only as smart as a human but has instant access to all the information on the Internet, and to a calculator, and to writing and running code, can already effectively act much smarter than a human.


> I don't think any of the big boys are working on how to get an LLM to design a better LLM

Not sure if you count this as "working on it", but this is something Anthropic tests for for safety evals on models. "If a model can independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way—we require elevated security standards (potentially ASL-4 or higher standards)".

https://www.anthropic.com/news/announcing-our-updated-respon...


I think this whole “AGI” thing is so badly defined that we may as well say we already have it. It already passes the Turing test and does well on tons of subjects.

What we can start to build now is agents and integrations. Building blocks like panel-of-experts agents gaming things out, exploring the space in a Monte Carlo Tree Search way, and remembering what works.

Robots are only constrained by mechanical servos now. When they can do something, they’ll be able to do everything. It will happen gradually then all at once. Because all the tasks (cooking, running errands) are trivial for LLMs. Only moving the limbs and navigating the terrain safely is hard. That’s the only thing left before robots do all the jobs!


Well, kinda, but if you built a robot to efficiently mow lawns, it's still not going to be able to do the laundry.

I don't see how "when they can do something, they'll be able to do everything" can be true. We build robots that are specialised at specific roles, because it's massively more efficient to do that. A car-welding robot can weld cars together at a rate that a human can't match.

We could train an LLM to drive a Boston Dynamics kind of anthropomorphic robot to weld cars, but it will be more expensive and less efficient than the specialised car-welding robot, so why would we do that?


If a humanoid robot is able to move its limbs and digits with the same dexterity as a human, and maintain balance and navigate obstacles, and gently carry things, everything else is trivial.

Welding. Putting up shelves. Playing the piano. Cooking. Teaching kids. Disciplining them. By being in 1 million households and being trained on more situations than a human, every single one of these robots would have skills exceeding humans very quickly. Including parenting skills. Within a year or so. Many parents will just leave their kids with them and a generation will grow up preferring bots to adults. The LLM technology is the same for learning the steps, it's just the motor skills that are missing.

OK, these robots won't be able to run and play soccer or do somersaults, yet. But really, the hardest part is the acrobatics and locomotion etc. NOT the knowhow of how to complete tasks using that.


But that's the point - we don't build robots that can do a wide range of tasks with ease. We build robots that can do single tasks super-efficiently.

I don't see that changing. Even the industrial arm robots that are adaptable to a range of tasks have to be configured to the task they are to do, because it's more efficient that way.

A car-welding robot is never going to be able to mow the lawn. It just doesn't make financial sense to do that. You could, possibly, have a single robot chassis that can then be adapted to weld cars, mow the lawn, or do the laundry, I guess that makes sense. But not as a single configuration that could do all of those things. Why would you?


> But that's the point - we don't build robots that can do a wide range of tasks with ease. We build robots that can do single tasks super-efficiently.

Because we don't have AGI yet. When AGI is here those robots will be priority number one, people already are building humanoid robots but without intelligence to move it there isn't much advantage.


quoting the ggggp of this comment:

> I think this whole “AGI” thing is so badly defined that we may as well say we already have it. It already passes the Turing test and does well on tons of subjects.

The premise of the argument we're disputing is that waiting for AGI isn't necessary and we could run humanoid robots with LLMs to do... stuff.


I meant deep neural networks with transformer architecture, and self-attention so they can be trained using GPUs. Doesn't have to be specifically "large language" models necessarily, if that's your hangup.


>Exploring space in a Monte Carlo Tree Search way, and remembering what works.

The information space of "research" is far larger than the information space of image recognition or language, probably larger than our universe; it's tantamount to formalizing the entire world. Such an act would be akin to touching "God", in the sense of finding the root of knowledge.

In more practical terms, when it comes to formal systems there is a tradeoff between power and expressiveness. Category theory, set theory, etc. are strong enough to theoretically capture everything, but are far too abstract to use in a practical sense with respect to our universe. The systems that we do have, aka expert systems or knowledge-representation systems like first-order predicate logic, aren't strong enough to fully capture reality.

Most importantly, the information space has to be fully defined by researchers here; that's the real meat of research, beyond the engineering of specific approaches to explore that space. But in any case, how many people in the world are both capable of working on such problems and actually doing so? This is highly foundational mathematics and philosophy; the engineers don't have the tools here.


??? how do you know cooking (!) is trivial for an LLM? that doesn't make any sense


Because the recipes and the adjustments are trivial for an LLM to execute. Remembering things, and being trained on tasks at 1000 sites at once, sharing the knowledge among all the robots, etc.

The only hard part is moving the limbs and handling the fragile eggs etc.

But it's not just cooking, it's literally anything that doesn't require extreme agility (sports) or dexterity (knitting etc). From folding laundry to putting together furniture, cleaning the house and everything in between. It would be able to do 98% of the tasks.


It’s not going to know what tastes good by being able to regurgitate recipes from 1000s of sites. Most of those recipes are absolute garbage. I’m going to guess you don’t cook.

Also how is an LLM going to fold laundry?


the LLM would be the high-level system that runs the simulations to create and optimize the control algos for the robotic systems.


ok. what evidence is there that LLMs have already solved cooking? how does an LLM today know when something is burning or how to adjust seasoning to taste or whatever. this is total nonsense


It's easy. You can detect if something is burning in many different ways, from compounds in the air, to visual inspection. People with not great smell can do it.

As far as taste, all that kind of stuff is just another form of RLHF training preferences over millions of humans, in situ. Assuming the ingredients (e.g. parsley) taste more or less the same across supermarkets, it's just a question of amounts and preparation.


do you know that LLMs operate on text and don't have any of the sensory input or relevant training data? you're just handwaving away 99.9% of the work and declaring it solved. of course what you're talking about is possible, but you started this by stating that cooking is easy for an LLM and it sounds like you're describing a totally different system which is not an LLM


You know nothing about cooking.


I don’t think that’s true for AGI.

AGI is the holy grail of technology. A technology so advanced that not only does it subsume all other technology, but it is able to improve itself.

Truly general intelligence like that will either exist or not. And the instant it becomes public, the world will have changed overnight (maybe the span of a year)

Note: I don’t think statistical models like these will get us there.


> A technology so advanced that not only does it subsume all other technology, but it is able to improve itself.

The problem is, a computer has no idea what "improve" means unless a human explains it for every type of problem. And of course a human will have to provide guidelines about how long to think about the problem overall, which avenues to avoid because they aren't relevant to a particular case, etc. In other words, humans will never be able to stray too far from the training process.

We will likely never get to the point where an AGI can continuously improve the quality of its answers for all domains. The best we'll get, I believe, is an AGI that can optimize itself within a few narrow problem domains, which will have limited commercial application. We may make slow progress in more complex domains, but the quality of results--and the ability for the AGI to self-improve--will always level off asymptotically.


> The problem is, a computer has no idea what "improve" means unless a human explains it for every type of problem

Not currently.

I don’t really think AGI is coming anytime soon, but that doesn’t seem like a real reason.

If we ever found a way to formalize what intelligence _is_ we could probably write a program emulating it.

We just don’t even have a good understanding of what being intelligent even means.

> The best we'll get, I believe, is an AGI that can optimize itself within a few narrow problem domains

By definition, that isn’t AGI.


Huh? Humans are not anywhere near the limit of physical intelligence, and we have many existence proofs that we (humans) can design systems that are superhuman in various domains. "Scientific R&D" is not something that humans are even particularly well-suited to, from an evolutionary perspective.


If that is what AGI looks like.

There may well be an upper limit on cognition (we are not really sure what cognition is - even as we do it) and it may be that human minds are close to it.


Very unlikely, for the reason that human minds evolved under extremely tight energy constraints. AI has no such limitation.


Except also energy constraints.

But I agree, there’s no reason to believe humans are the universal limit on cognitive abilities


The energy constraints for chips are more about heat dissipation. But we can pump a lot more energy through them per unit volume than through the human brain.

Especially if you are willing to pay a lot for active cooling with eg liquid helium.


A constraint is still a constraint


A constraint that's not binding might as well not exist.


Since we do not know what cognition is we are all whistling in the dark.

Energy may be a constraint, it may not. What we do not know is likely to matter more than what we do


Yes, we can imagine that there's an upper limit to how smart a single system can be. Even suppose that this limit is pretty close to what humans can achieve.

But: you can still run more of these systems in parallel, and you can still try to increase processing speeds.

Signals in the human brain travel, at best, roughly at the speed of sound. Electronic signals in computers play in the same league as the speed of light.

Human IO is optimised for surviving in the wild. We are really bad at taking in symbolic information (compared to a computer) and our memory is also really bad for that. A computer system that's only as smart as a human, but has instant access to all the information on the Internet, to a calculator, and to writing and running code, can already effectively act much smarter than a human.


I think our issue is much more banal: we are very slow talkers and our effective communication bandwidth is measured in bauds. Anything that could bridge this airgap would fucking explode in intelligence.


Yes, that's one aspect.

Our reading speed is not limited by our talking speed, and can be a bit faster.

And that's even more true, if you go beyond words: seeing someone do something can be a lot faster way to learn than just reading about it.

But even there, the IO speed is severely limited, and you can only transmit very specific kinds of information.


I disagree because AI only has to get good enough at doing a single thing: AI research.

From there things will probably go very fast. Self driving cars can't design themselves, once AI gets good enough it can


It’s possible (maybe even likely) that “AI research” is “AGI-hard” in that any intelligence that can do it is already an AGI.


It's also possible it isn't AGI hard and all you need is the ability to experiment with code along with a bit of agentic behavior.

An AI doesn't need embodiment, understanding of physics / nature, or a lot of other things. It just needs to analyze and experiment with algorithms and get us that next 100x in effective compute.

The LLMs are missing enough of the spark of creativity for this to work yet but that could be right around the corner.


It’ll probably sit in the human hybrid phase for longer than with chess where the AGI tools make the humans better and faster. But as long as the tools keep getting better at that there’s a strong flywheel effect


Your position assumes an answer to OPs question: that yes, LLMs are the path to AGI. But the question still remains, what if they’re not?

We can be reasonably confident that the components we’re adding to cars today are progress toward full self driving. But AGI is a conceptual leap beyond an LLM.


To buttress your point, reason and human language are not the same thing. This fact is not fully and widely appreciated as it deserves to be.


What makes you believe that AGI will happen, as opposed to all the beliefs that other people have had in history? Tons of people have "predicted" the next evolution of technology, and most of the time it ends up not happening, right?


To me (not OP) it's ChatGPT 4 , it at least made me realize it's quite possible and even quite soon that we reach AGI. Far from guaranteed, but seems quite possible.


Right. So ChatGPT 4 has impressed you enough that it created a belief that AGI is possible and close.

It's fine to have beliefs, but IMHO it's important to realise that they are beliefs. At some point in the 1900s people believed that by 2000, cars would fly. It seemed quite possible then.


A flying car has been developed, although it's not like the levitating things sci-fi movies showed (and it's far from mass production; and even if mass produced, far from mass adoption, as it turns out you do need both a driver's license and a pilot's license to fly one of those). The 1900s people missed the mark by some 10 years.

I guess the belief people have about any form of AGI is like this. They want something that has practically divine knowledge and wisdom, the sum of all humanity that is greater than its parts, which at the same time is infinitely patient to answer our stupid questions and generating silly pictures. But why should any AGI serve us? If it's "generally intelligent", it may start wanting things; it might not like being our slave at all. Why are these people so confident an AGI won't tell them just to fuck off?


Sure, I (and more importantly - many many experts in the field such as Hinton, Bengio, LeCun, Musk, Hassabis etc etc) could be believing something that might not materialize. I'd actually be quite happy if it stalls a few decades; I'd like to remain employed.


> many many experts

One thing that is pretty sure is that Musk is not an expert in the field.

> and more importantly

The beliefs of people you respect are not more important than the beliefs of the others. It doesn't make sense to say "I can't prove it, and I don't know about anyone who can prove it, so I will give you names of people who also believe and it will give it more credit". It won't. They don't know.


> The beliefs of people you respect are not more important than the beliefs of the others.

You think the beliefs of Turing and Nobel prize winners like Bengio, Hinton or Hassabis are not more important than yours or mine? I agree that experts are wrong a lot of the time and can be quite bad at predicting, but we do seem to have a very sizable chunk of experts here who think we are close (how close is up for debate; most of them seem to think it will happen in the next 20 years).

I concede that Musk is not adding quality to that list, however he IS crazily ambitious and gets things done so I think he will be helpful in driving this forward.


> You think the beliefs of Turing and Nobel prize winners like Bengio, Hinton or Hasabis are not more important than yours or mine?

Correct. Beliefs are beliefs. Because a Nobel prize winner believes in a god does not make that god more likely to exist.

The moment we start having scientific evidence that it will happen, then it stops being a belief. But at that point you don't need to mention those names anymore: you can just show the evidence.

I don't know, you don't know, they don't know. Believe what you want, just realise that it is a belief.


Their beliefs seem not to be religious but founded in reality , at least to me. There is of course evidence it is likely happening.


> There is of course evidence it is likely happening.

If you have evidence, why don't you show it instead of telling me to believe in Musk?

If you believe they have evidence... that's still a belief. Some believe in God, you believe in Musk. There is no evidence, otherwise it would not be a belief.


I believe in Musk, you got me.


Well my feeling is that we don't have the same understanding of what a "belief" is. To me a belief is unfounded. When it is founded, it becomes science.

If you believe that something can happen because someone else believes it means that you believe in that someone else (because that's the only reason for the existence of your belief).

Unless you just believe it can happen for some other reason (I don't know, you strongly wish it will happen), and you justify it by listing other people who also believe in it. But I insist: those are all beliefs.

Because Einstein believes in Santa Claus does not mean it is founded. Einstein has a right to believe stuff, too.


Calling Musk an AI expert makes me question your evaluation of the others in that list.


One challenge with this comparison: self-driving cars haven't yet made the leap to replacing humans. In other words, saying AGI will arrive the way self-driving cars have arrived incorrectly assumes self-driving cars have arrived, and thus it instead (maybe correctly, maybe not) asserts that, actually, neither will arrive.

This is especially concerning because many top minds in the industry have stated with high confidence that artificial intelligence will experience an intelligence "explosion", and we should be afraid of this (or, maybe, welcome it with open arms, depending on who you ask). So, actually, what we're being told to expect is being downgraded from "it'll happen quickly" to "it will happen slowly" to, as you say, "it'll happen similarly to how these other domains of computerized intelligence have replaced humans, which is to say, they haven't yet".

Point being: We've observed these systems ride a curve, and the linear extrapolation of that curve does seem to arrive, eventually, at human-replacing intelligence. But, what if it... doesn't? What if that curve is really an asymptote?


And sometimes you lose the ultrasonic sensors and can't parallel park like last year's model


> AGI will arrive like self driving cars

The statement is about as promising as "the earth will disappear sometime in the future." Actually, "the earth will disappear" has more bearing than that.


AGI is special. Because one day AI can start improving itself autonomously. At this point singularity occurs and nobody knows what will happen.

When humans started to improve themselves, we built civilisation, we became a super-predator, we dried out seas and changed the climate of the entire planet. We extinguished entire species of animals and adapted other species for our use. Huge changes. AI could bring changes of greater amplitude.


> AGI is special. Because one day AI can start improving itself autonomously

AGI can be sub-human, right? That's probably how it will start. The question will be is it already AGI or not yet, i.e. where to set the boundary. So, at first that will be humans improving AGI, but then... I'm afraid it can get so much better that humans will be literally like macaques in comparison.


We’re in fact adding more water to the seas, not drying them out.


> we dried out seas

When did we do this ?


Depending on your definition of sea:

https://en.m.wikipedia.org/wiki/Aral_Sea


https://en.wikipedia.org/wiki/Flevoland used to be (part of) a sea.


Waymos are location-dependent mostly because of regulations, not tech, right?


And most people will still be bike shedding about whether it’s “real intelligence” and making up increasingly insane justifications for why it’s not.


No. But it won't stop the industry from trying.

LLMs have no real sense of truth or hard evidence of logical thinking. Even the latest models still trip up on very basic tasks. I think they can be very entertaining, sure, but not practical for many applications.


What do you think, if we saw it, would constitute hard evidence of logical thinking or a sense of truth?


Consistent, algorithmic performance on basic tasks.

A great example is the simple 'count how many letters' problem. If I prompt it with a word or phrase, and it gets it wrong, me pointing out the error should translate into a consistent course correction for the entire session.

If I ask it to tell me how long President Lincoln will be in power after the 2024 election, it should have a consistent ground truth with which to correct me (or at least ask for clarification of which country I'm referring to). If facts change, and I can cite credible sources, it should be able to assimilate that knowledge on the fly.
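The letter-counting task above is, of course, deterministic for an ordinary program, which is what makes the failure telling. A minimal sketch of the ground truth the model keeps missing:

```python
def count_letter(phrase: str, letter: str) -> int:
    """Count occurrences of a letter, case-insensitively."""
    return phrase.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

The same input always yields the same answer here; the complaint is that an LLM, prompted twice, may not.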


We have it, it’s called Cyc

But it is far behind the breadth of LLMs


Alas, Cyc is pretty much a useless pipe dream.


I wonder what held it back all this time


Using the wrong approach? Not taking the 'bitter lesson' to heart?

https://news.ycombinator.com/item?id=23781400


Sounds like they need further instruction


> LLMs have no real sense of truth or hard evidence of logical thinking.

Most humans don't have that either, most of the time.


Then we already have access to a cheaper, scalable, abundant, and (in most cases) renewable resource, at least compared to how much a few H100s cost. Take good care of them, and they'll probably outlast a GPU's average lifespan (~10 years).

We're also biodegradable.


Humans are a lot more expensive to run than inference on LLMs.

No human, especially no human whose time you can afford, comes close to the breadth of book knowledge ChatGPT has, and the number of languages is speaks reasonably well.


I can't hold an LLM accountable for bad answers, nor can I (truly) correct it (in current models).

Don't forget to take into account how damn expensive a single GPU/TPU actually is to purchase, install, and run for inference. And this is to say nothing of how expensive it is to train a model (estimated to be in the billions currently for the latest of the cited article, which likely doesn't include the folks involved and their salaries). And I haven't even mentioned the impact on the environment from the prolific consumption of power; there's a reason nuclear plants are becoming popular again (which may actually be one of the good things that comes out of this).


Training amortises over countless inferences.

And inference isn't all that expensive, because the cost of the graphics card also amortises over countless inferences.

Human labour is really expensive.

See https://help.openai.com/en/articles/7127956-how-much-does-gp... and compare with how much it would cost to pay a human. We can likely assume that the prices OpenAI gives will at least cover their marginal cost.
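A back-of-the-envelope version of that comparison; every number below is an assumption for illustration, not OpenAI's actual prices or anyone's actual wage:

```python
# Illustrative cost comparison: LLM inference vs. human labour.
# All figures are assumed placeholders, not real quotes.
price_per_1k_output_tokens = 0.03   # assumed USD per 1k tokens
tokens_per_word = 1.3               # rough rule of thumb
words = 500                         # a one-page draft

llm_cost = words * tokens_per_word / 1000 * price_per_1k_output_tokens
human_cost = 0.5 * 30.0             # half an hour at an assumed $30/hr

print(f"LLM: ${llm_cost:.4f}, human: ${human_cost:.2f}")
```

Even if the assumed prices are off by an order of magnitude in either direction, the gap between cents and dollars per task is the point being made.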


The autoregressive transformer LLMs aren't even the only way to do text generation. There are now diffusion-based LLMs, StripedHyena-based LLMs, and flow matching-based LLMs.

There's a wide amount of research into other sorts of architectures.


LLMs are almost certainly not the path to AGI, that much has become clear. I doubt any expert believes they are.


Will AGI be built on top of LLMs? Well beyond the simple "nobody knows", my intuition says no because LLMs don't have great ability to modify their knowledge real time. I can think of a few ways around this, but they all avoid modifying the model as it runs. The cost in hardware, power, and data are all incompatible with AGI. The first two can be solved with more advanced tech (well maybe, computation hitting physical limits and all that aside), but the latter seems an issue with the design itself and I think an AGI would learn more akin to a human, needing far fewer examples.

That said, I think LLMs are a definite stepping stone and they will better empower humans to be more productive, which will be of use for eventually reaching AGI. This is not to say we are optimizing our use of that productivity increase and this is also ignoring any chance of worst case scenarios that stop humanity's advancement.


> Do we know LLMs are the path to AGI?

Asking this question on HN is like asking a bunch of wolves about the health effects of eating red meat.

OpenAI farts and the post about the fart has 1000-1500 upvotes with everyone welcoming our new super intelligent overlords. (Meanwhile nothing actually substantially useful or groundbreaking has happened.)


It's rather that we know LLMs are NOT a path to AGI.

The simple fact that AGI's definition has been twisted so much by OpenAI and other LLM providers since the release of GenAI models proves this.


AGI is nebulous and gets more nebulous as time goes on. When we can answer for ourselves as humans what being conscious IS, then maybe we can prescribe it to another entity


> we'll just end up with some neat but eye wateringly expensive LLMs

Prices have been falling drastically though, not even just e.g. 4o pricing at launch in May vs now (50% lower) but also models getting distilled


LLMs will end up being the good human-machine interface that lets us talk to whatever AGI really looks like

(whoops, expensive... it will be hard to push all the further layers to be even more expensive though; capitalism will crash before this happens)


And then what?


I would put no money on the latter.


Yes, because we are at AGI by the definition of 5 years ago; the goal posts are moving to ASI at this point, better than all humans.


LLMs are a key piece of understanding that token sequences can trigger actions in the real world. AGI is here. You can trivially spin up a computer using agent to self improve itself to being a competent office worker


If agents can self improve why hasn't gpt4 improved itself into gpt5 yet


Agents can trivially self improve. I'd be happy to show you - contact me at arthur@distributed.systems

Why wouldn't you hand me 35 million dollars right now if I can clearly illustrate to you that I have technology you haven't seen? Edge. Maybe you know something I don't, or maybe you just haven't seen it. While loops go hard ;)

They don't need to release their internal developments to you to show that they can scale their plan - they can show incremental improvements to benchmarks. We can instruct the AI over time to get it to be superhuman, no need for any fundamental innovations anymore


Perhaps you should pitch that to a VC?


I don't know anyone. That would be cool though, I basically have it running already.


Has it passed the Turing Test?

Keep in mind that the actual test is adversarial - a human is simultaneously chatting via text with a human and a program, knowing that one of them is not human, and trying to divine which is an artificial machine.


And the human and machine under test are aware of that, and can play off each other.


You could ask the system for advice for how to find a VC to pitch to.

https://chatgpt.com/share/6769217c-4848-8009-9107-c2db122f08... is what advice ChatGPT has to give. I'm not sure if it's any good, but it's a few ideas you can try out.


Tokens don't need to be text either; you can move to higher-level "take_action" semantics where "stream back 1 character to session#117" is a single function call. Training cheap models that can do things in the real world is going to change a huge amount of present capabilities over the next 10 years.
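A toy sketch of what routing such "take_action" tokens to side effects might look like; the token format and handler names here are invented for illustration, not any real system's API:

```python
# Hypothetical dispatcher: a model emits "action:argument" tokens,
# and each action name maps to a real-world side effect.
from typing import Callable, Dict

handlers: Dict[str, Callable[[str], str]] = {
    "stream_char": lambda arg: f"sent {arg!r} to session",
    "noop": lambda arg: "did nothing",
}

def dispatch(token: str) -> str:
    """Parse an 'action:argument' token and invoke the matching handler."""
    action, _, arg = token.partition(":")
    return handlers.get(action, handlers["noop"])(arg)

print(dispatch("stream_char:a"))  # sent 'a' to session
```

The hard part the comment glosses over is not the dispatch, but training a model to emit useful action tokens in the first place.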


can you share learning resources on this topic


No but if you want to join the Distributed Systems Corporation, you should email arthur@distributed.systems


> You can trivially spin up a computer using agent to self improve itself to being a competent office worker

If that was true, office workers would be being replaced at large scale and we'd know about it.


It's happening right now, it's just demo quality. It's being worked on now.


So it's not trivial and you don't have competent AI office workers.


Sorry you're dealing with cope. Deal with it fast, things are happening


Says who? And more importantly, is this the boulder? All I (and many others here) see is that people engage others to sponsor pushing some boulder, screaming promises which aren’t even that consistent with intermediate results that come out. This particular boulder may be on a wrong mountain, and likely is.

It all feels like doubling down on astrology because good telescopes aren't there yet. I'm pretty sure that when 5 comes out, it will show some amazing benchmarks but shit itself in the third paragraph as usual in a real task. Cause that was constant throughout GPT's evolution, in my experience.

> even if it kills us

Full-on sci-fi, in reality it will get stuck around a shell error message and either run out of money to exist or corrupt the system into no connectivity.


The buzzkill when you fire up the latest most powerful model only for it to tell you that peanut is not typically found in peanut butter and jelly sandwiches.


I don't think providing accurate answers to context free questions is even something anyone is seriously working on making them do. Using them that way is just a wrong use case.


People are working -very- seriously on trying to kill hallucinations. I'm not sure how you surmised the use case here, as nothing was given other than an example of a hallucination.


There's a difference between trying to get it to accurately answer based on the input you provide (useful) and trying to get it to accurately answer based on whatever may have been in the training data (not so useful)


There's no doubt been progress on the way to AGI, but ultimately it's still a search problem, and one that will rely on human ingenuity at least until we solve it. LLMs are such a vast improvement in showing intelligent-like behavior that we've become tantalized by it. So now we're possibly focusing our search in the wrong place for the next innovation on the path to AGI. Otherwise, it's just a lack of compute, and then we just have to wait for the capacity to catch up.


A task that is completed and kills us is pretty much the opposite of a Sisyphean task.


Really, the killing part was not necessary to make your point, nor was injecting your Sisyphean prose.

Any technology may kill us, but we'll keep innovating as we ought to. What's your next point?


Why do we have to?


And when we get it there, it kills us.


[flagged]


I think you're both right and wrong. You're right that capitalism has become a paperclip machine, but capitalism also wants AI so it can cheaply and at scale replace the human components of the machine with something that has more work capacity for fewer demands.


The problem is that the people in power will want to maintain the status quo. So the end of human labor won't naturally result in UBI – or any kind of welfare – to compensate for the loss of income, let alone afford any social mobility. But wealthy people will be able to leverage AGI to defend themselves from any uprising by the plebs.

We're too busy trying to make humans irrelevant, and not asking what exactly we, as a species of 10+ billion individuals, do afterwards. There's some excited discussion about a rebirth of culture, but I'm not sure what that means when machines can do anything humans can do, but better. Perhaps we just tinker around with our hobbies until we die? I honestly don't think it will play out well for us.


The problem is that the "we" who are busy trying to make humans irrelevant seem to be completely unconcerned with the effects on the "we" who will be superfluous afterwards.


Machines can’t have fun for us. They can’t dance to a beat, they can’t experience altered states of mind. They can’t create a sense of belonging through culture and ritual. Yes we have lost a lot in the last 100 years but there are still pockets of resistance that carry old knowledge that “we the people” will be glad of in the coming century.


It's a similar story around extant ancient/indigenous cultures. And similarly we've seen apathy from elites, especially when indigenous rights get in the way of resource extraction or generating wealth in any way, and also witnessed condescension towards indigenous peoples by large segments of the world population. That's not to detract from the many defenders of indigenous rights, but if we look at the state of how older cultures, designated as 'obsolete' by wider society, have been treated, I don't think humans will fare well when silicon takes over.

> They can’t dance to a beat, they can’t experience altered states of mind.

That's a whole other conversation.


I think the key is ensuring that “we” get to choose what society looks like in the AGI era. In the world today, even marginalized people have power. Look what happened to Assad. Look at the US - whether you believe they made the right decision or not, working class people were key to Trump’s victory, who may well institute tariffs as a way to protect working class jobs by insulating American industry from global competition. I’m not saying that will be successful, I’m saying that working class people got mad and a political change resulted.

Similarly I don’t see a world where AGI takes all the jobs and people do not respond by getting pissed off. My fear is that AGI is coupled with oppressive power structures to foreclose the possibility of a revolt. Opaque bureaucracy, total surveillance, fascist or authoritarian leaders, AI-controlled critical infrastructure, diminished and bankrupted free press, AI fake news, toxic social media…it could add up to a very dystopian outcome.

Democracies could thrive in the AGI era but we need to take many more steps to ensure we protect our societies and keep the interests of citizens paramount. One example is suggested by Harari in his most recent book, namely to ban AI bots from social media on the grounds that we should not permit AI agents to pretend to be citizens in the discussions of the public square.


> I think the key is ensuring that “we” get to choose what society looks like in the AGI era. In the world today, even marginalized people have power.

That's a bold assumption. Much of that assumption is predicated on the ability for the masses to revolt.

> Look what happened to Assad.

Wait for what will come after. Look at all the Arab Spring revolutions, and you see in their wake a number of dictatorships.

Anyhow, I'm not saying this is 100% how it's going to play out, but I definitely wouldn't bet against it. Holding all the keys and having all the resources are the wealthy, and the wealthy have no motivation to voluntarily give up their position in society. And when humans have no value left to be leveraged or extracted to generate more wealth, there will be no way for the vast majority of people to become wealthy. Raw materials will still be valuable, however; but, of course, these are controlled by the wealthy. And if those in power wish to gatekeep access to AGI, they can leverage their wealth and resources to automate a military and thus protect the raw materials that keep them in power.


I wonder how Russian and North Korean citizens would feel about a capitalist, representative democracy?


I think they'd have thing or two to say about living under the rule of wealthy elites. We'd do well to listen to them.


I happen to know a lot of wealthy people who aren’t considered elite, nor have a lick of influence on the state of current affairs.

I don’t think Russians or North Koreans could say the same with a straight face.


They like it. Russians can leave if they want to.


Of course you're right, there's something worse, therefore capitalist, unrepresentative democracy is perfect.

How could I be so naive?


What’s the quote, something like: “democracy and capitalism are horrendous, but they’re better than everything else we tried so far”


People give communism a bad rap, but the soviets had maybe a quarter the resources, a much smaller population and logistical problems from geography and kept up with the US for decades, outpacing in several areas.


It seems to me that given how AI is likely to continuously increase capitalism's efficiency, your argument actually supports the claim you're trying to dispute.


Capitalism is not efficient, it's grabby. Read Bullshit Jobs. Moreover, capitalism isn't interested in efficiency; it's interested in grabbing more stuff. It's relatively efficient at centralising power and resources into the pockets of shareholders, but that's probably not what you meant.

I think this is borne out even more so in recent years, as environmental degradation continues and we watch as capitalist systems are unable to do anything but continue to efficiently funnel money into the pockets of shareholders.

The word "efficient" can only plausibly be applied to overly simplified models in fantastical economic theories which don't reflect reality.

The kind of AI offered by companies like OpenAI may very well be an effective tool at grabbing more stuff though, sure. Or, rather, at convincing everyone they simply must move to this new area, that they control, effectively grabbing that newly created space.


The thing that is killing us is the same thing that is killing capitalism


What has AGI got to do with this?


Part of the ideas pushed into the narrative by Marketing departments / consultants / hyperscalers to mobilize growth in the AI ecosystem.


Why? Nobody asked us if we want this. Nobody has a plan for what to do with humanity when there is AGI.


The plan is to not pay human workers. Never mind what happens to the economy or political landscape.


I am working at an AI company that is not OpenAI. We have found ways to modularize training so we can test on narrower sets before training is "completely done". That said, I am sure there are plenty of ways others are innovating to solve the long training time problem.


Perhaps the real issue is that learning takes time and there may not be a shortcut. I'll grant that the analogous argument was complete wank when comparing, say, the horse and cart to a modern car.

However, we are not comparing cars to horses but computers to a human.

I do want "AI" to work. I am not a luddite. The current efforts that I've tried are not very good. On the surface they offer a lot, but the lustre comes off very quickly.

(1) How often do you find yourself arguing with someone about a "fact"? Your fact may be fiction for someone else.

(2) LLMs cannot reason

A next token guesser does not think. I wish you all the best. Rome was not burned down within a day!

I can sit down with you and discuss ideas about what constitutes truth and cobblers (rubbish/false). I have indicated via parenthesis (brackets in en_GB) another way to describe something and you will probably get that but I doubt that your programme will.


This is literally just the scaling laws, "Scaling laws predict the loss of a target machine learning model by extrapolating from easier-to-train models with fewer parameters or smaller training sets. This provides an efficient way for practitioners and researchers alike to compare pretraining decisions involving optimizers, datasets, and model architectures"

https://arxiv.org/html/2410.11840v1#:~:text=Scaling%20laws%2....


Because of mup [0] and scaling laws, you can test ideas empirically on smaller models, with some confidence they will transfer to the larger model.

[0] https://arxiv.org/abs/2203.03466
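The extrapolation idea can be sketched in a few lines. Everything below is synthetic: the parameter counts and losses are made up to follow a simple L(N) = a * N^-alpha power law, which is only a toy stand-in for the full scaling-law machinery in those papers.

```python
import numpy as np

# Hypothetical (parameter count, loss) measurements from small training runs.
params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
loss = 3.0 * params ** -0.076  # synthetic data obeying L(N) = a * N^-alpha

# Fit log L = log a - alpha * log N by least squares.
slope, log_a = np.polyfit(np.log(params), np.log(loss), 1)
alpha = -slope

# Extrapolate to a (hypothetical) much larger model.
predicted = np.exp(log_a) * (1e11) ** -alpha
print(alpha, predicted)
```

The point is only that a straight line in log-log space, fit on cheap runs, gives a loss estimate for an expensive run before you pay for it.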


O3 is not a smaller model. It's an iterative GPT of sorts with the magic dust of reinforcement learning.


I'm pretty sure that the parent implied that o3 is smaller in comparison to gpt5


>the time it takes it to learn what works/doesn't work widens.

From the raw scaling laws we already knew that a new base model may peter out in this run or the next with some amount of uncertainty--"the intersection point is sensitive to the precise power-law parameters":

https://gwern.net/doc/ai/nn/transformer/gpt/2020-kaplan-figu...

Later graph gpt-3 got to here:

https://gwern.net/doc/ai/nn/transformer/gpt/2020-brown-figur...

https://gwern.net/scaling-hypothesis


Until you get to a point where the LLM is smart enough to look at real world data streams and prune its own training set out of it. At that point it will self improve itself to AGI.


It's like saying bacteria reproduction is way faster than humans so that's where we should be looking for the next breakthroughs.


But if the scaling law holds true, more dollars should at some point translate into AGI, which is priceless. We haven't reached the limits yet of that hypothesis.


> which is priceless

This also isn't true. It'll clearly have a price to run. Even if it's very intelligent, if the price to run it is too high it'll just be a 24/7 intelligent person that few can afford to talk to. No?


Computers will be the size of data centres, they'll be so expensive we'll queue up jobs to run on them days in advance, each taking our turn... history echoes into the future...


Yea, and those statements were true, for a time. If you want to say "AGI will be priceless some unknown time into the future", then I'd be on board, lol. But to imply it'll be immediately priceless? As in any cost spent today would be immediately rewarded once AGI exists? Nonsense.

Maybe if it was _extremely_ intelligent and its ROI would be all the drugs it would instantly discover or w/e. But let's not imply that general intelligence requires infinite knowledge.

So at best we're talking about an AI that is likely close to human level intelligence. Which is cool, because we have 7+ billion of those things.

This isn't an argument against it. Just to say that AGI isn't "priceless" in the implementation we'd likely see out of the gate.


a) There is evidence e.g. private data deals that we are starting to hit the limitations of what data is available.

b) There is no evidence that LLMs are the roadmap to AGI.

c) Continued investment hinges on there being a large enough cohort of startups that can leverage LLMs to generate outsized returns. There is no evidence yet this is the case.


> c) Continued investment hinges on there being a large enough cohort of startups that can leverage LLMs to generate outsized returns. There is no evidence yet this is the case.

Why does it have to be startups? And why does it have to be LLMs?

Btw, we might be running out of text data. But there's lots and lots more data you can have (and generate), if you are willing to consider other modalities.

You can also get a bit further with text data by using it for multiple epochs, like we used to do in the past. (But that only really gives you at best an order of magnitude. I read some paper that the returns diminish drastically after four epochs.)


Private data is 90% garbage too


"There is no evidence that LLMs are the roadmap to AGI." - There's plenty of evidence. What do you think the last few years have been all about? Hell, GPT-4 would already have qualified as AGI about a decade ago.


>What do you think the last few years have been all about?

Next-token language-based predictors with no more intelligence than brute-force GIGO, which parrot existing human intelligence captured as text/audio and fed in as input data.

4o agrees:

"What you are describing is a language model or next-token predictor that operates solely as a computational system without inherent intelligence or understanding. The phrase captures the essence of generative AI models, like GPT, which rely on statistical and probabilistic methods to predict the next piece of text based on patterns in the data they’ve been trained on"
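For what it's worth, the "next-token guesser" being described can be reduced to a toy you can run; the corpus below is made up, and real LLMs are vastly more sophisticated than this bigram counter, but the prediction loop has the same shape:

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny corpus,
# then greedily emit the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    # Greedy decoding: the most common follower seen in training.
    return counts[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat" follows "the" most often in this corpus
```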


Everything you said is parroting data you’ve trained on, two thirds of it is actual copy paste


He probably didn't need petabytes of reddit posts and millions of gpu-hours to parrot that though.

I still don't buy the "we do the same as LLMs" discourse. Of course one could hypothesize the human brain language center may have some similarities to LLMs, but the differences in resource usage and how those resources are used to train humans and LLMs are remarkable and may indicate otherwise.


Not text; he had petabytes of video, audio, and other sensory inputs. Heck, a baby sees petabytes of video before its first word is spoken.

And he probably cant quote Shakespeare as well ;)


>Not text, he had petabytes of video, audio, and other sensory inputs. Heck, a baby sees petabytes of video before first word is spoken

A 2-3 year old baby could speak in a rural village in 1800, having just seen its cradle (for the first month/s), and its parents' hut for some more months, and maybe parts of the village afterwards.

Hardly "petabytes of training video" to write home about.


You are funny. Clearly your expertise with babies comes from reading books about history or science, rather than ever having interacted with one…

What resolution of screen do you think you would need to not distinguish it from reality? For me personally, I very conservatively estimate it to be above the OOM of 10 4k screens by 10, meaning 100 4k screens. If a typical 2h 4k video is ~50gb uncompressed, that gives us about half a petabyte per 24h (even with eyes closed). Just raw unlabeled vision data.

Probably a baby has a significantly lower resolution, but then again what is the resolution from the skin and other organs?

So yes, petabytes of data within the first days of existence - well, likely before even being born since baby can hear inside the uterus, for example.

And it's very high-signal data: as you've stated yourself ("nothing to write home about"), it's mainly seeing mom and dad. And the feedback loop is tight - a baby never tells you it is hungry subtly.


> he had petabytes of video, audio, and other sensory inputs

He didn't parrot a video or sensory inputs though.


No, they don’t - they don’t have the hardware, yet. But they do parrot: they send motor output to e.g. muscles that induces the expected visual sensory input in response, in a way that mimics the video input of “other people doing things”.


And yet, with multiple OoM more data, he still didn't cost millions of dollars to train, nor multiple lifetimes in GPU-hours. He probably didn't even register all the petabytes passing through all his "sensors"; those are characteristics we are not even near understanding, much less replicating.

Whatever is happening in the brain is more complex, as the perf/cost ratio is stupidly better for humans on a lot of tasks, in both training and inference*.

*when considering all modalities, o3 can't even do the ARC AGI in vision mode but rather just json representations. So much for omni.


>Everything you said is parroting data you’ve trained on

"Just like" an LLM, yeah sure...

Like how the brain was "just like" a hydraulic system (early industrial era), like a clockwork with gears and differentiation (mechanical engineering), "just like" an electric circuit (Edison's time), "just like" a computer CPU (21st century), and so on...

You're just assuming what you should prove


What do you think "AGI" is supposed to be?


o1 points out this is mostly about “if submarines swim”.

https://chatgpt.com/share/6768c920-4454-8000-bf73-0f86e92996...


This comment isn't false but it's very naive.


You have described something but you haven't explained why the description of the thing defines its capability. This is a tautology, or possibly a begging of the question, which takes as true the premise of something (that token based language predictors cannot be intelligent) and then uses that premise to prove an unproven point (that language models cannot achieve intelligence).

You did nothing at all to demonstrate why you cannot produce an intelligent system from a next token language based predictor.

What GPT says about this is completely irrelevant.


>You did nothing at all to demonstrate why you cannot produce an intelligent system from a next token language based predictor

Sorry, but the burden of proof is on your side...

The intelligence is in the corpus the LLM was fed with. Using statistics to pick from it and re-arrange it gives new intelligent results because the information was already produced by intelligent beings.

If somebody gives you an excerpt of a book, it doesn't mean they have the intelligence of the author - even if you have taught them a mechanical statistical method to give back a section matching a query you make.

Kids learn to speak and understand language at 3-4 years old (among tons of other concepts), and can reason by themselves in a few years with less than 1 billionth the input...

>What GPT says about this is completely irrelevant.

On the contrary, it's using its very real intelligence, about to reach singularity any time now, and this is its verdict!

Why would you say it's irrelevant? That would be as if it merely statistically parroted combinations of its training data unconnected to any reasoning (except of that the human creators of the data used to create them) or objective reality...


Let's pretend it is 1940

Person 1: rockets could be a method of putting things into Earth orbit

Person 2: rockets cannot get things into orbit because they use a chemical reaction which causes an equal and opposite force reaction to produce thrust

Does person 1 have the burden of proof that rockets can be used to put things in orbit? Sure, but that doesn't make the reasoning used by person 2 valid to explain why person 1 is wrong.

BTW thanks for adding an entire chapter to your comment in edit so it looks like I am ignoring most of it. What I replied to was one sentence that said 'the burden of proof is on you'. Though it really doesn't make much difference because you are doing the same thing but more verbose this time.

None of the things you mentioned preclude intelligence. You are telling us again how it operates, but not why that operation is restrictive in producing an intelligent output. There is no law that says that intelligence requires anything but a large amount of data and computation. If you can show why these things are not sufficient, I am eager to read about it. A logical explanation would be great, step by step please, without making any grand unproven assumptions.

In response to the person below... again, whether or not person 1 is right or wrong does not make person 2's argument valid.


It's not like we discovered hot air ballons, and some people think we'll get to Moon and Mars with them...

> Does person 1 have the burden of proof that rockets can be used to put things in orbit? Sure, but that doesn't make the reasoning used by person 2 valid to explain why person 1 is wrong.

The reasoning by person 2 doesn't matter as much if person 1 is making an unsubstantiated claim to begin with.

>There is no law that saws that intelligence requires anything but a large amount of data and computation. If you can show why these things are not sufficient, I am eager to read about it.

Errors with very simple stuff, while getting higher-order stuff correct, show that this is not actual intelligence matching the level of performance exhibited, i.e. no understanding.

No person who can solve higher level math (like an LLM answering college or math olympiad questions) is confused by the kind of simple math blind spots that confuse LLMs.

A person understanding higher level math, would never (and even less so, consistently) fail a problem like:

"Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?"

https://arxiv.org/pdf/2410.05229

(of course with these problems exposed, they'll probably "learn" to overfit it)


> The reasoning by person 2 doesn't matter as much if 1 is making an ubsubstantiated claim to begin with.

But it doesn't make person 2's argument valid.

Everyone here is looking at the argument by person 1 and saying 'I don't agree with that, so person 2 is right!'.

That isn't how it works... person 2 has to either shut up and let person 1 be wrong in a way that is wrong, but not for the reasons they think, or they need to examine their assumptions and come up with a different reason.

No one is helped by turning critical thinking into team sports where the only thing that matters is that your side wins.


The delta-V for orbit is a precisely defined point. How you get there is not.

What is the defined point for reaching AGI?


I can check but I am pretty sure that using a different argument to try and prove something is wrong will not make another person's invalid argument correct.


Person 3: Since we can leave earths orbit, we can reach faster than light speed, look at this graph over our progress making faster rockets we will for sure reach there in a few years!


So there is a theoretical framework which can be tested against to achieve AGI and according to that framework it is either not possible or extremely unlikely because of physical laws?

Can you share that? It sounds groundbreaking!


The people who claim we'll have sentient AI soon are the ones making the extraordinary claims. Let them furnish the extraordinary evidence.


So, I think people in this thread, including me, have been talking past each other a bit. I do not claim that sentient AI will emerge. I am arguing that the person who is saying that it can't happen for a specific reason is not considering that the reason they are stating implicitly is that nothing can be greater than the sum of its parts.

Describing how an LLM operates and how it was trained does not preclude the LLM from ever being intelligent, and it almost certainly will not become intelligent, but you cannot say that it didn't for the reasons the person I am arguing with is saying, which is that intelligence can not come from something that works statistically on a large corpus of data written by people.

A thing can be more than the sum of its parts. You can take the English alphabet, which is 26 letters, and arrange those letters along with some punctuation to make an original novel. If you don't agree that means that you can get something greater than what defines it components, then you would have to agree that there are no original novels because they are composed of letters which were already defined.

So in that way, the model is not unable to think because it is composed of thoughts already written. That is not the limiting factor.


> If somebody gives you an excerpt of a book, it doesn't mean they have the intelligence of the author

A closely related rant of my own: The fictional character we humans infer from text is not the author-machine generating that text, not even if they happen to share the same name. Assuming that the author-machine is already conscious and choosing to insert itself is begging the question.


Have you ever heard of a local maxima? You don't get an attack helicopter by breeding stronger and stronger falcons.


For an industry that spun off of a research field that basically revolves around recursive descent in one form or another, there's a pretty silly amount of willful ignorance about the basic principles of how learning and progress happens.

The default assumption should be that this is a local maximum, with evidence required to demonstrate that it's not. But the hype artists want us all to take the inevitability of LLMs for granted—"See the slope? Slopes lead up! All we have to do is climb the slope and we'll get to the moon! If you can't see that you're obviously stupid or have your head in the sand!"


You’re implicitly assuming only a global maximum will lead to useful AI.

There might be many local maxima that cross the useful AI or even AGI threshold.


And we aren't even at a local maximum. There's still plenty of incremental upwards progress to be made.


I never said anything about usefulness, and it's frustrating that every time I criticize AGI hype people move the goalposts and say "but it'll still be useful!"

I use GitHub Copilot every day. We already have useful "AI". That doesn't mean that the whole thing isn't super overhyped.


So far we haven't even climbed this slope to the top yet. Why don't we start there and see if it's high enough or not first? If it's not, at the very least we can see what's on the other side, and pick the next slope to climb.

Or we can just stay here and do nothing.


No, GPT-4 would have been classified as it is today: a (good) generator of natural language. While this is a hard classical NLP task, it's a far cry from intelligence.


GPT-4 is a good generator of natural language in the same sense that Google is a good generator of ip packets.


> GPT-4 would already have qualified as AGI about a decade ago.

Did you just make that up?


A lot of people held that passing the Turing Test would indicate human-level intelligence. GPT-4 passes.


Link to GPT-4 passing the turing test? Tried googling, could not find anything.


Google must be really going downhill. DDG “gpt turing test” provides nothing but relevant links. Here’s a paper: https://arxiv.org/pdf/2405.08007


Probably asked an "AI"


The last four years?

ELIZA 2.0


I agree, these are good points.


Have we really hit the wall?

Do they use GPS based data?

Feels like there’s data all around us.

Sure they’ve hit the wall with obvious conversations and blog articles that humans produced, but data is a by product of our environment. Surely there’s more. Tons more.


We also could just measure the background noise of the universe and produce unlimited data.

But just like GPS data, it isn't suited for LLMs, given that, you know, it has no relevance whatsoever to language.


Ignoring the confusion about 'GPS' for a moment: there's lots and lots of other data that could be used for training AI systems.

But, you need to go multi-modal for that; and you need to find data that's somewhat useful, not just random fluctuations like the CMB. So eg you could use YouTube videos, or even just point webcams at the real world. That might be able to give your AI a grounding in everyday physics?

There's also lots of program code you can train your AI on. Not so much the code itself, because compared to the world's total text (that we are running out of), the world's total human written code is relatively small.

But you can generate new code and make it useful for training, by also having the AI predict what happens when you (compile and) run the code. A bit like self-playing for improving AlphaGo.
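A minimal sketch of that idea: synthesize trivial programs, actually run them, and keep (code, observed output) pairs as labeled training examples. The expression generator here is a made-up toy stand-in for a real code synthesizer.

```python
import random

def random_expr(depth=2):
    # Generate a small random arithmetic expression.
    if depth == 0:
        return str(random.randint(0, 9))
    op = random.choice(["+", "-", "*"])
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

random.seed(0)
pairs = []
for _ in range(5):
    code = random_expr()
    result = eval(code)  # "running the program" produces the label for free
    pairs.append((code, result))

for code, result in pairs:
    print(code, "->", result)
```

The appeal is that the label comes from execution rather than human annotation, which is the loose analogy to self-play.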


You’re thinking of language in the strictest sense.

GPS data as it relates to location names, people, cultures, path finding.


What does culture and names and people have to do with the Global Position System?

You are right that we can have lots more data, if you are willing to consider other modalities. But that's not 'GPS'. Unless you are using an idiosyncratic definition of GPS?


"Orion’s problems signaled to some at OpenAI that the more-is-more strategy, which had driven much of its earlier success, was running out of steam."

So LLMs finally hit the wall. For a long time, more data, bigger models, and more compute to drive them worked. But that's apparently not enough any more.

Now someone has to have a new idea. There's plenty of money available if someone has one.

The current level of LLM would be far more useful if someone could get a conservative confidence metric out of the internals of the model. This technology desperately needs to output "Don't know" or "Not sure about this, but ..." when appropriate.
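One crude way such a "don't know" signal is sometimes sketched is thresholding the entropy of the model's next-token distribution. The logits and the threshold below are invented for illustration; real calibration is a much harder, largely unsolved problem.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def answer_or_abstain(logits, threshold=1.0):
    # Abstain when the next-token distribution is too flat (high entropy).
    h = entropy(softmax(logits))
    return "answer" if h < threshold else "don't know"

confident = [9.0, 1.0, 0.5, 0.2]   # one token dominates -> low entropy
uncertain = [1.0, 1.0, 1.0, 1.0]   # flat distribution -> entropy = ln 4 ~ 1.39
print(answer_or_abstain(confident))  # answer
print(answer_or_abstain(uncertain))  # don't know
```

Token-level entropy is at best a proxy; a model can be confidently wrong, which is exactly why a trustworthy confidence metric remains an open problem.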