[dupe] Humans Who Are Not Concentrating Are Not General Intelligences (skynettoday.com)
76 points by andreyk 11 days ago | 31 comments






For some context, as noted at the top of the article, we re-posted it with permission: the release of the larger GPT-2 model led to a new slate of fearful articles on the topic, and this has remained one of the best takes on it. Wasn't aware it was already upvoted a bunch on HN before, though; cool to see.

The guideline on HN is to use the original link whenever possible.

The link should be changed.


Ah, whoops, wasn't aware. Seems I can't change the link; maybe mods can, or can just delete it.

Makes me think of "Hook" by Blues Traveler:

  It doesn't matter what I say
  So long as I sing with inflection
  That makes you feel I'll convey
  Some inner truth or vast reflection
  But I've said nothing so far
  And I can keep it up for as long as it takes
  And it don't matter who you are
  If I'm doing my job then it's your resolve that breaks

Now if GPT-2 could write yet another beginner Python lists-and-tuples blog post, I prolly wouldn’t notice. If it could write a description that helped me get my head around my client’s Swagger API, I would be thrilled. No one really has the time or patience to explain it to me in a way that clicks.

> “I’ve taught public school teachers, who were incredibly bad at formal mathematical reasoning (I know, because I graded their tests), to the point that I had not realized humans could be that bad at math”

Is it just me or does this come off as incredibly rude? It’s one thing to say people are bad at math but the phrasing of this seems insulting without reason, in my opinion.


It just reeks of the same lazy garbage as that whole "separating the programming goats from the sheep" meme that went around a few years back. "It's not that my pedagogy has faults, people are just inherently bad and there's no point trying to teach them."

She doesn't claim that she couldn't teach them. She claimed that they were bad at formal reasoning.

I mean, she didn't name names. I don't think there's anything wrong with being blunt if it isn't explicitly directed at individuals.

I've certainly noticed that most people speak on auto-pilot, basically just repeating what they've heard or read. I wonder if just a few % of people are actually generally intelligent and everyone else is following along. Or is it just a random process, in which we repeat each other and the occasional mistake leads to new ideas?

It's an energy problem. Take the average mental energy level and divide it up among all the facets of daily life. Most of that is probably going to go into working, eating, relaxing. Developing original opinions in every single field isn't realistic, so we'll find someone with authority and repeat what they said.

I am thinking about using GPT-2 or better to write homework for my MBA... Nobody would spot the difference (I am worried/relieved).

Do it

Can someone explain the title? It doesn't make sense to me.

I think an even stronger version applies. Humans are not general intelligences. We are just good at keeping ourselves alive and making more of us on this planet (and in this ecosystem). All the rest - language, science, culture, economy - are just our current solution to solving the constraints of life.

In order for an intelligence to be called general, it would need to be effective in all situations. Humans only perceive the world through our five limited senses and then filter it through the coloured glass of our concepts. We can't escape the limitations of our senses and mental models taken together.

Humans can't even keep more than 7 objects in working memory at once; some people can handle a few more, but not on the order of hundreds or thousands. What if the real ultimate theory of physics required a working memory of 1000 objects? We'd be forever blocked from grasping it, like an ant vs. the stock market. Programmers live at the edge of this grasping power and know the horror of not being able to take it all in at once, and it's so easy to get into such a situation.

It is possible that there is no general intelligence anywhere. It's always an intelligence of a specific environment, solving specific types of problems. A general intelligence would need a much more varied and challenging environment in order to develop.

The more complex the environment, the higher the intelligence of its agents. So there is always going to be an upper limit to intelligence, and the environment has a lot to do with it. No intelligence is truly general.


>In order for an intelligence to be called general it would be necessary to be effective in all situations.

So, basically, the author made a contrived definition of their own, and hand-waved about how humans don't meet it...


> Humans can't even keep in the working memory more than 7 objects at once, some people can handle a few more but not on the order of hundreds or thousands. What if the real ultimate theory of physics required a working memory of 1000 objects?

This seems like a weak argument unless you are restricting humans to have no tools (in particular some kind of pen/paper or other storage mechanism).

Even a 3-register machine is Turing complete. Any reasonable formalization of the human brain's computation with 7 objects in working memory can simulate a brain with N objects in working memory, given access to external storage.

How to do this, and whether it's effective and worth doing, is a separate question.
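As a toy illustration of the register-machine point (my own sketch, not anything from the thread): with only increment, decrement-or-jump-if-zero, and halt, a machine with a tiny fixed set of registers can already compute things like addition, and adding unbounded storage is what pushes such machines to Turing completeness.

```python
# Minimal register machine interpreter. Instructions:
#   ("inc", r)       - increment register r, fall through
#   ("decjz", r, t)  - if register r is 0, jump to t; else decrement r
#   ("halt",)
def run(program, registers):
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc += 1
        else:  # decjz
            r, target = op[1], op[2]
            if registers[r] == 0:
                pc = target
            else:
                registers[r] -= 1
                pc += 1
    return registers

# Compute r0 = r0 + r1 by draining r1 into r0.
# r2 is never incremented, so ("decjz", 2, 0) is an unconditional jump.
add_program = [
    ("decjz", 1, 3),  # 0: if r1 == 0, jump to halt
    ("inc", 0),       # 1: r0 += 1
    ("decjz", 2, 0),  # 2: jump back to 0
    ("halt",),        # 3
]
print(run(add_program, [3, 4, 0])[0])  # -> 7
```

The analogy to the comment above: the fixed registers play the role of the 7-object working memory, while the program and any extra storage play the role of pen and paper.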

It may certainly be the case that we will never truly "grok" the mysteries of the universe in the way that one may grok an elegant algorithm. But we can discover the mysteries and convince ourselves they are true, even if perhaps in a slow and roundabout way. For an example of what this may look like, take a gander at how we proved the four-color theorem using computers https://en.wikipedia.org/wiki/Four_color_theorem#Proof_by_co... :

> Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,936 reducible configurations (later reduced to 1,476) which had to be checked one by one by computer and took over a thousand hours

It's not as beautiful as other mathematical theorems, but any human with enough pen and paper and time could write or verify this proof themselves. After all, a computer did so, and we can step-by-step simulate a Turing machine, so that means we can do so too.


A true general intelligence would need to handle any environment. The only such intelligence would be universal intelligence which has experienced every possible environment. Unfortunately this kind of intelligence is cheating because it's not really an agent and since it encapsulates everything there are no problems for it to solve.

Reminds me of one of Isaac Arthur's recent videos, "The Paperclip Maximizer": https://youtu.be/3mk7NVFz_88

No realizable intelligence, perhaps, but AIXI is pretty bloody general and pretty damn smart.

If you'd like to inoculate yourself, and have a bit of fun in the meantime, consider reading https://www.reddit.com/r/SubSimulatorGPT2/ .

It's not just for fun, you can get a good sense of the algorithm. One of the things it is somewhat prone to is some weird looping, like this: https://www.reddit.com/r/SubSimulatorGPT2/comments/d1nwdg/if... in which the algorithm generates the sentence "Toss some leeches around and wait 'til we get there." (no, it does not make any more sense in context), and then repeats that sentence nearly (but not quite!) exactly 23 more times. (I expect this is a consequence of the way it is tracking some internal state; I assume these sentences are strange attractors in some sort of state that is getting iteratively modified.)
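The looping failure mode is easy to reproduce in miniature. This is a hand-rolled toy, not GPT-2's actual mechanism: greedily decode from a bigram table whose most likely continuations form a cycle, and the output repeats the same phrase indefinitely, a crude analogue of those attractor-like repeated sentences.

```python
# Toy greedy decoder over a hard-coded bigram "argmax" table.
# If the most likely continuations form a cycle, greedy decoding
# gets stuck repeating one phrase forever.
bigram_argmax = {
    "<s>": "toss", "toss": "some", "some": "leeches",
    "leeches": "around", "around": "toss",  # cycles back to "toss"
}

def greedy_decode(start, n_tokens):
    out, tok = [], start
    for _ in range(n_tokens):
        tok = bigram_argmax[tok]
        out.append(tok)
    return out

print(greedy_decode("<s>", 8))
# The phrase "toss some leeches around" just repeats.
```

Real samplers add randomness and penalties precisely to break out of attractors like this, but with a long repeated context the repeated continuation can stay the most likely one.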

You can also see that while it picks up some deep structure, a check of anything trained on /r/jokes (https://www.reddit.com/r/SubSimulatorGPT2/comments/d055mt/a_... ) or /r/math (https://www.reddit.com/r/SubSimulatorGPT2/comments/d1yz1e/ho... ) shows the algorithm is definitely unable to deal with deeper structure right now.

The /r/jokes bot is humorous in its complete lack of humor, I mean, well beyond any sarcastic snark about how unfunny /r/jokes may be. It has the structure of jokes. There was one recent one that even asked "What's a pirate's favorite letter?", and the bot had noticed the answer was being given in the form of letters, but I don't think a single instance of the bot proposed "r". But it does not understand humor in the slightest. Of the several dozen attempts at jokes I've at least skimmed, I believe it only achieved something that was at least recognizable as an attempt at humor once, and it still wasn't that funny.

Likewise math. It's got a good idea there's these "prime number" things and they're pretty important, but I've seen at least half-a-dozen wrong definitions of what one is.

It's a very interesting algorithm. It's a great babbler. But on its own, it's not a great solution to generating text. Although it may very well be able to generate text that can pass a casual skim test, as the article suggests. Still, it takes human curation to get that far. Any human that can read is going to guess something's inhuman about repeating "Toss some leeches around and wait 'til we get there." 24 times in a row.


> If you'd like to inoculate yourself,

Dude, my malady got worse, not better. The comments there are up to a better standard than most Youtube or Facebook comments.


Reading that sub was a surreal experience. The most interesting part for me was how it (completely subconsciously) changed the "voice in my head" reading voice into something very dull and stilted, like a second grader reading a terrible essay out loud.

> There was one recent one that even asked “What’s a pirate’s favorite letter?”, and the bot had noticed the answer was being given in the form of letters, but I don’t think a single instance of the bot proposed “r”. But it does not understand humor in the /slightest/.

To me this actually sounds hilarious, if it is as you describe it. It's the kind of joke you'd find on The Office:

  Dwight: "What's a pirate's favourite letter?"
  Jim: "M"
  Jim: "N"
  Jim: "O"
  Jim: "P"
  Jim: "Q"
  Jim: "..."
  Jim: "S"

> "Toss some leeches around and wait 'til we get there."

Like the above poster said, I've seen this style of spam-y context-free meme on reddit before too.


"To me this actually sounds hilarious, if it is as you describe it."

Well, here's the original I was referring to: https://www.reddit.com/r/SubSimulatorGPT2/comments/c8klvj/wh...

I was wrong. R was suggested in some of the replies. But the original answer is given as "The M", and contains gems like '"r" is a misspelled letter, and "m" is a misspelled letter. "M" is a misspellable word you can't use as a word in the English language.'

"Like the above poster said, I've seen this style of spam-y context-free meme on reddit before too. "

That would make sense. I guess I just don't frequent those corners. GPT-2 is clearly capable of picking up on structure, so if it sees something repeated it doesn't just notice "This particular thing is repeated a lot", it picks up some concept of repetition itself. A number of the bots have picked up the concept of quoting the message they're replying to. (In the meta reddit for this, the creator has said the posts and the replies are trained as separate corpora, so the replies "know" they are replies. I gather there are also enough markers that the bots can distinguish between title, post text, and subsequent replies.)
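The marker idea can be sketched like this. The delimiter names below are invented for illustration, not the ones r/SubSimulatorGPT2 actually uses: each training example concatenates the fields with special delimiter strings, so a model trained on such text learns which field it is currently generating.

```python
# Hypothetical delimiter-token format for reddit-style training data.
# The <|...|> marker names are made up for this sketch.
def format_example(title, selftext, replies):
    parts = ["<|title|>" + title, "<|post|>" + selftext]
    for r in replies:
        parts.append("<|reply|>" + r)
    return "".join(parts) + "<|end|>"

ex = format_example(
    "What's a pirate's favorite letter?",
    "Asking for a friend.",
    ["The M", "You'd think it'd be R..."],
)
print(ex)
```

A generator conditioned on text ending in `<|reply|>` then produces reply-shaped text, which is how the bots can "know" they are replying.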


On the other hand, there are plenty of Reddit comment threads written by (presumably) people which are just the same thing over and over again, or small variations thereof. Usually some sort of reference to a joke or meme but it does happen.

I legitimately haven't laughed this hard in years. Thank you for introducing me to this sub.

So all of these comments are made by GPT2?

I’ve definitely had the experience of reading a human written paper, and just skimming it because it didn’t really seem to have a point. Then I sighed and decided to really give it my attention and quickly realized that the reason it was hard to read was because it was chock full of lazy thinking, bad analogies, unexamined assumptions, and non sequiturs to begin with.

Now, make a hybrid system merging this text generator with the classic symbolic AI, Cyc, from Doug Lenat.

It might be able to generate all our news articles in whatever style we prefer.

Bye bye, freelance writers.



