
> The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

The best part is that even if we get a Skynet scenario, we'll probably have a huge number of humans and media outlets saying that Skynet is just a conspiracy theory, even as the nukes wipe out the major cities. The Experts™ said so. You have to trust the Science™.

If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.




> If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.

I’m far from sure that this is not already happening.


Haha, this is just about the best explanation I can think of for the "this is not intelligent, it's just completing text strings, nothing to see here" people.

I've been playing with GPT-4 for days, and it is mind-blowing how well it can solve diverse problems that are way outside its training set. It can reason correctly about hard problems with very little information. I've used it to plan detailed trip itineraries, suggest brilliant geometric packing solutions for small spaces/vehicles, etc. It's come up with totally new suggestions for addressing climate change that I can't find any evidence of elsewhere.

This is a non-human/alien intelligence in the realm of human ability, with super-human abilities in many areas. Nothing like this has ever happened, it is fascinating and it's unclear what might happen next. I don't think people are even remotely realizing the magnitude of this. It will change the world in big ways that are impossible to predict.


I used to be in the camp of "GPT-2 / GPT-3 is a glorified Markov chain". But over the last few months, I flipped 180° - I think we may have accidentally cracked a core part of the "generalized intelligence" problem. It's not about the language so much as about associations - it seems to me that, once the latent space gets high-dimensional enough, a lot of problems reduce to adjacency search.

I'm starting to get a (sure, uneducated) feeling that this high-dimensional association encoding and search is fundamental to thinking, in a similar way to how conditionals and loops are fundamental to (Turing-complete) computing.
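
Roughly the picture I have in my head, as a toy sketch (the vectors below are random placeholders, nothing here is how GPT actually stores anything; with learned embeddings, related concepts would actually land near each other):

    # "Association" as nearest-neighbour lookup in a high-dimensional space.
    import numpy as np

    rng = np.random.default_rng(0)
    dims = 512
    concepts = ["dog", "cat", "wolf", "car", "bicycle"]
    emb = {c: rng.standard_normal(dims) for c in concepts}  # placeholders, not learned

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def neighbours(query, k=3):
        # Adjacency search: rank every other concept by closeness to the query.
        scored = [(cosine(emb[query], emb[c]), c) for c in concepts if c != query]
        return sorted(scored, reverse=True)[:k]

    print(neighbours("dog"))  # meaningless with random vectors; "wolf"/"cat" with real ones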

Now, the next obvious step is of course to add conditionals and loops (and lots of external memory) to a proto-thinking LLM model, because what could possibly go wrong. In fact, those plugins are one of many attempts to do just that.
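
The "loops, conditionals and external memory" wrapper is basically this, modulo a lot of prompt engineering (call_llm here is a stand-in I made up, not any real plugin API):

    from typing import List

    def call_llm(prompt: str) -> str:
        # Placeholder: imagine a call to whatever completion API you use.
        raise NotImplementedError

    def solve(task: str, max_steps: int = 10) -> str:
        memory: List[str] = []                 # external scratchpad the model can re-read
        for _ in range(max_steps):             # the loop the model itself doesn't have
            prompt = "\n".join(memory) + f"\nTask: {task}\nNext step, or 'FINAL: <answer>':"
            step = call_llm(prompt)
            if step.startswith("FINAL:"):      # the conditional
                return step
            memory.append(step)                # persist intermediate results across steps
        return "gave up"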


I completely agree. I have noticed this over the last few years in trying to understand how my own creative thinking seems to work. It seems to me that human creative problem solving involves embedding or compressing concepts into a spatial representation so we can draw high level analogies. A location search then brings creative new ideas translated from analogous situations. I can directly observe this happening in my own mind. These language models seem to do the same.
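
That "location search over analogous situations" is essentially the old word-vector analogy trick, sketched here with placeholder vectors (the real effect only shows up with learned embeddings):

    import numpy as np

    rng = np.random.default_rng(1)
    words = ["king", "queen", "man", "woman", "prince"]
    vec = {w: rng.standard_normal(300) for w in words}   # placeholders, not learned vectors

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def analogy(a, b, c):
        # Find x such that a : b :: c : x, by searching near vec(b) - vec(a) + vec(c).
        target = vec[b] - vec[a] + vec[c]
        return max((w for w in words if w not in (a, b, c)), key=lambda w: cos(vec[w], target))

    # With real learned vectors, analogy("man", "king", "woman") famously lands on "queen".
    print(analogy("man", "king", "woman"))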


> It can reason correctly about hard problems with very little information.

I am so tired of seeing people who should know better think that this program can reason.

(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)


>(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)

Do you know any more about ChatGPT internals than those programmers know about the human brain?

Sure, I believe you can write down the equations for what is going on in each layer, but knowing how each activation is calculated from the previous layer tells you very little about what hundreds of billions of connections can achieve.
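
For reference, "the equations for each layer" really are about this small (simplified single-head block in the textbook form, layer norm, biases and causal mask omitted; not OpenAI's code):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
        # x: (sequence_length, d_model); the W* are learned weight matrices.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v   # self-attention
        x = x + attn @ Wo                                    # residual connection
        x = x + np.maximum(0.0, x @ W1) @ W2                 # feed-forward with ReLU
        return x

The mystery is not in these few lines; it's in what hundreds of billions of trained weights end up doing when stacked.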


> Do you know any more about ChatGPT internals than those programmers know about the human brain?

Yes, especially every time I have to explain what an LLM is, or any time I see a comment about how ChatGPT "reasoned" or "knew" or "understood" something when that clearly isn't how it works, by OpenAI's own admission.

But even if that weren't the case: yes, I understand some random ML project better than programmers understand what constitutes a human!


Honestly, I don’t see how anyone really paying attention can draw this conclusion. Take a look at the kinds of questions on the benchmarks and the AP exams. Independent reasoning is the key thing these tests try to measure. College entrance exams are not about memorization and regurgitation. GPT-4 scores a 1400 on the SAT.


No shit, a good quarter of the internet is SAT prep. Where do you think GPT got its dataset?


I have a deprecated function and ask ChatGPT what I should use instead; ChatGPT responds by inventing a random nonexistent function. I tell it that the function doesn't exist, and it tries again with another nonexistent function.

Frankly speaking, that sounds like a very simple language-level failure, i.e. the tool generates text that matches the shape of the answer but not its details. I am not far enough into this ChatGPT religion to gaslight myself over outright lies, the way Elon Musk fanboys seem to enjoy doing.


Who's to say we're not already there?

dons tinfoil hat



