
At one point the goalposts were the Turing test. That's long since been passed, and we aren't satisfied.

Then the goalposts were moved to logical reasoning, such as the Winograd schemas. Then that wasn't enough.

In fact, it’s abundantly clear we won’t be satisfied until we’ve completely destroyed human intelligence as superior.

The current goalpost is that LLMs must do everything better than humans or they're not AGI. If there is one thing they do worse, people will dismiss them as just stochastic parrots. That's a complete fallacy.

Of course we dare not compare LLMs to the worst-case human - because LLMs would be AGI compared to that.

We compare LLMs to the best human in every category - unfairly.

With LLMs it’s been abundantly clear - there is not a line where something is intelligent or not. There’s only shades of gray and eventually we call it black.

There will always be differences between LLM capabilities and humans - different architectures and different training. However, it's very clear that a process that takes in huge amounts of data and processes it - whether a brain or an LLM - comes up with similar results.

Someone should come up with a definition of intelligence that excludes all LLMs and includes all humans.

Also, while you're at it, disprove that humans do anything more than what ChatGPT does - aka probabilistic word generation.

I’ll wait.
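
To be concrete about what "probabilistic word generation" means at its crudest, here's a minimal sketch - a toy bigram sampler in Python. The corpus is made up for illustration, and real LLMs condition on far more than the previous word, but the basic mechanic is the same: predict the next token from observed frequencies.

  import random
  from collections import defaultdict

  # Toy corpus; any text would do.
  corpus = "the cat sat on the mat and the dog sat on the rug".split()

  # Count which words were observed to follow which.
  following = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      following[prev].append(nxt)

  # Generate by repeatedly sampling a next word in proportion
  # to how often it followed the current word in the corpus.
  word = "the"
  output = [word]
  for _ in range(8):
      candidates = following.get(word)
      if not candidates:  # dead end: word never appeared mid-text
          break
      word = random.choice(candidates)
      output.append(word)

  print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"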

Until then, as ChatGPT blows past what was science fiction 5 years ago, maybe these arguments aren’t great?

Also - name one task for which we have the data, but for which we haven't been able to produce a neural network capable of performing it.

Human bodies have so many sensors it's mind blowing. The data any human processes in one day simply blows LLMs out of the water.

Touch, taste, smell, hearing, etc…

That's not to say that, if you could hook a hypothetical neural network up to a human body, we couldn't do the same.


> In fact, it’s abundantly clear we won’t be satisfied until we’ve completely destroyed human intelligence as superior.

One could argue this is precisely where the goal posts have been for a long time. When did the term "singularity" start being used in the context of human technological advancements?


This describes humans in a nutshell. This is why people vary wildly in ability...

But we consider humans intelligent.


No, no it doesn't. Not even slightly. If you fed a human child only text from the internet you'd not produce a competent adult (by competent I mean able to feed themselves etc).

That some tech people think human intelligence can be reduced to such "textual mechanics" betrays a lack of depth of understanding, and even of appreciation, of the deep and complex world within which we find ourselves. Our written corpus is but a particular reflection of this reality - the shadows on the wall in Plato's cave, if you will.


Are you making the argument that children do not come with an innate ability to read text and thus cannot learn from it, or are you making the argument that the internet does not contain, among other things, fairly detailed instructions on various ways to feed oneself?


I'm saying that without the context of experiencing the underlying reality, text by itself is meaningless. What is a 'spoon', anyway?


So you're saying one needs some sort of reference for which words refer to which real-world sensory experiences such as "the thing that looks like this is a spoon", and text models do not have sensory associations?


I haven’t experienced cocaine. I haven’t ever seen cocaine (IRL). Nevertheless I think I have a decent grasp of what it is, how it works, how it affects people, and what I could (mis)use it for. Would you imply that my knowledge of cocaine isn’t true/real/useful? Is snorting a line the only way to reify the knowledge? (The answer is: the indirect way of obtaining information is sufficient for building a correct/useful/accurate world model, and there is no such thing as direct experience anyway - it’s signals coming down the wire all the way down.)


I think it's straightforwardly true that you cannot understand what it feels like to be under the influence of cocaine in the same way as someone that has been.


Isn't "meaning" just how a word is used within a specific context? I think that was one of Wittgenstein's points. The word "dog" doesn't need to reference the underlying reality to have meaning. Its meaning emerges from its usage in relation to all other words. Language isn't a mirror of the external world, which is probably one of the reasons LLMs are so successful.
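
For what it's worth, that "meaning from use" idea has a direct computational analogue in distributional semantics: represent each word by the words it co-occurs with, and words used in similar contexts end up with similar vectors. A toy sketch in Python (the four-sentence corpus is made up purely for illustration):

  from collections import Counter, defaultdict
  import math

  sentences = [
      "the dog chased the ball",
      "the cat chased the ball",
      "the dog ate the food",
      "the cat ate the food",
  ]

  # Count co-occurrences within each sentence.
  cooc = defaultdict(Counter)
  for s in sentences:
      words = s.split()
      for i, w in enumerate(words):
          for j, c in enumerate(words):
              if i != j:
                  cooc[w][c] += 1

  def cosine(a, b):
      dot = sum(a[k] * b[k] for k in set(a) | set(b))
      na = math.sqrt(sum(v * v for v in a.values()))
      nb = math.sqrt(sum(v * v for v in b.values()))
      return dot / (na * nb)

  # "dog" and "cat" are used identically here, so their vectors align.
  print(cosine(cooc["dog"], cooc["cat"]))   # 1.0
  print(cosine(cooc["dog"], cooc["ball"]))  # noticeably lower

No reference to any actual dog is needed for "dog" and "cat" to come out as neighbors; the usage statistics alone do the work.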


Feed themselves? You mean pick up a burger from the local McDonald's? There's got to be a better Turing test.


>This describes humans in a nutshell. This is why people vary wildly in ability...

Can you expand on that a little?


More than 80% of college students get the following question wrong.[1] I'd say this is an example of naive pattern matching with no real reasoning.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

a) Linda is a bank teller.

b) Linda is a bank teller and is active in the feminist movement.

[1] https://en.wikipedia.org/wiki/Conjunction_fallacy
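
The conjunction rule the question is testing is easy to verify mechanically: anyone who satisfies both A and B necessarily satisfies A. A quick Python simulation (the base rates are arbitrary, chosen only for illustration):

  import random

  random.seed(0)
  N = 100_000

  tellers = 0
  feminist_tellers = 0
  for _ in range(N):
      is_teller = random.random() < 0.05    # arbitrary base rate
      is_feminist = random.random() < 0.80  # arbitrary, even if very likely
      if is_teller:
          tellers += 1
          if is_feminist:
              feminist_tellers += 1

  print(tellers / N)           # ~0.05 -> P(A)
  print(feminist_tellers / N)  # ~0.04 -> P(A and B) can never exceed P(A)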


Hank is 31 years old, single, outspoken, and very bright. Hank got a technical degree, and as a student was deeply interested in machine learning and the ethics & philosophy of potential artificial intelligence. He posts a problem on HN where he describes the personality, capabilities, interests, and background of a woman named Linda, and then asks people to reflect on her occupation, politics, and an intersection between the two. Which is more likely?

a) Hank has written us a human interest problem.

b) Hank has written us a human interest and a probability problem.

I don't think people are simply wrong about the Linda problem; I think they're imprecise about which question they're answering. They more or less believe they're being asked what the chances are that Linda is a feminist versus a bank teller, answering not only from the givens plus relevant priors about people, but also from their priors about what kind of question they're being asked. It isn't "no real reasoning", it's just not high-resolution enough to be technically correct by the standards of a constructed probability problem.

You can argue LLMs are also not quite high-resolution enough, and I'd accept that. In my mind the question is what it would take to get some kind of ML software to a place where, if you trained it on enough probability problems, it would be able to evaluate the Hank problem above, including the issue of whether (a) and (b) are actually independent. ;)


I'd say this is the very definition of "overthinking it" instead of pattern matching. Pattern matching would lead you straight to the correct answer: joint probability is always ≤ single probability, meaning the information given in the question is just fluff. Pattern matching reduces the question to "Which is more probable: just A, or A ^ B?", at which point the correct answer becomes obvious.


Eh, the "correct" answer seems silly and overly mathematical to me.

The fact is, you can infer things about people based on things you know about people. I can pick a random user on HN, and knowing they're a user of HN, I can say it's probable that they work in technology. We don't need to bring statistics into it and turn it into a math problem.


> I'd say this is an example of naive pattern matching with no real reasoning.

It is an example of a lack of mathematical (really, probabilistic) know-how, not naïve pattern matching.


I'd say it's an example of a misphrased question that people will intuitively make sense of, and thus produce a strictly incorrect answer to.

Obviously, people rephrase the question as: is it more likely that she's a feminist, or that she isn't?


Well, it's multiple-choice. So the "a" answer excludes the "b" answer by convention, and thus implies she isn't a feminist. Rephrase the question to remove that implication and I expect far more people will pick the "a" answer.
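
A sketch of why that exclusive reading flips the answer - all the numbers below are made up; the point is only the comparison:

  # Made-up prior that someone matching the description is a feminist.
  p_feminist = 0.90
  # Made-up base rate of being a bank teller; independence assumed for simplicity.
  p_teller = 0.05

  # Literal reading: a) P(teller) vs b) P(teller and feminist).
  print(p_teller)               # 0.05  -> a) must win (conjunction rule)
  print(p_teller * p_feminist)  # 0.045

  # Exclusive reading implied by the multiple-choice format:
  # a) P(teller and NOT feminist) vs b) P(teller and feminist).
  print(p_teller * (1 - p_feminist))  # 0.005 -> now b) is the rational pick
  print(p_teller * p_feminist)        # 0.045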


This particular question doesn't seem like a fallacy to me.

I, like most people, intuitively answered b). Given the explanation on the Wikipedia page I went "oh, of course, yeah", but then I thought about why I'd answer b) given that I'm fairly familiar with basic probability.

If you give me two options and ask me to pick between them, my brain is usually going to assume it's not a trivially true problem.

Language needs context for any sense to be made of it.

As a result of the above, the intuitive reading of the question makes the answer choices:

a) Linda is a bank teller (implicitly, a bank teller NOT active in the feminist movement)

b) Linda is a bank teller and is active in the feminist movement.

This question is one of language, context, and interpretation, not of people failing to understand basic probability.

I suspect that if you prime people to strip interpretation out of the question by presenting it as follows, the majority would answer correctly:

---

"Consider the following two statements:

1) Linda is a Bank Teller

2) Linda is active in the feminist movement

Which is more likely?

a) 1

b) 1 ^ 2"


Multiple-choice questions imply that other choices are excluded (unless an answer such as "all of the above" is a choice). So the implied question is:

a) Linda is a bank teller (and NOT active in the feminist movement)

b) Linda is a bank teller and is active in the feminist movement

What you want it to be asking here is:

a) Linda is a bank teller, and may or may not be active in the feminist movement

b) Linda is a bank teller, and is active in the feminist movement

Most college students have taken quite a few multiple-choice tests (particularly in the US, where high schools train for standardized multiple-choice tests). The question isn't asking what the mathematicians seem to think it's asking, because its format conveys extra restrictions.


Phrasing this in terms of probability seems very weird in the first place.


https://www.cdc.gov/coronavirus/2019-ncov/covid-data/investi...

They are using race to preferentially treat higher risk groups of people.

Sounds like the way you should treat people - higher risk first. Very clickbait title.


Men are more likely to die of COVID and more likely to be intubated. Should we start preferentially treating men over women? Why did they not include this advice as well?


The article goes into more detail, asserting that they are using race as a proxy for likelihood of being vaccinated.


Government shouldn't be making decisions based on race, period, full stop. Anything relevant to health should be decided by doctors, outside the purview of politicians and bureaucrats.


  Location: Omaha, NE
  Remote: Yes
  Willing to relocate: For the right offer, but prefer no
  Technologies: Python, Powershell, Javascript, C, C++, NodeJS, React, Java, C#, Ghidra, IDA Pro, Splunk, Windows Server, Kali Linux, Redhat Linux, SQL
  Résumé/CV: https://docs.google.com/document/d/1Jo9ZSbBmsr2EuZ3N7jA5_I0gQT_v1FdN5jVD7gyg9wM/edit?usp=sharing
  Email: morlandkc (at) gmail (dot) com

Looking for a software engineering role, ideally with a security focus (reverse engineering, pentesting, etc.).

I'm an OSCP / CISSP holder currently pursuing a Master's in Computer Science at Georgia Tech, and I'm developing Ghidra (Java) reverse engineering plugins for malware analysis.

My career has mostly been automation / security in infrastructure, but I've developed ReactJS websites with Postgres and done just about everything you can do on a computer, from assembly and C to networking and infrastructure.

I want hard, challenging problems to solve.

