Also a copy-paste of https://srconstantin.wordpress.com/2019/02/25/humans-who-are...
The link should be changed.
It doesn't matter what I say
So long as I sing with inflection
That makes you feel I'll convey
Some inner truth or vast reflection
But I've said nothing so far
And I can keep it up for as long as it takes
And it don't matter who you are
If I'm doing my job then it's your resolve that breaks
Is it just me or does this come off as incredibly rude? It’s one thing to say people are bad at math but the phrasing of this seems insulting without reason, in my opinion.
For an intelligence to be called general, it would need to be effective in all situations. Humans perceive the world only through our five limited senses, and then filter it through the coloured glass of our concepts. We can't escape the limitations of our senses and mental models taken together.
Humans can't even hold more than 7 objects in working memory at once; some people can handle a few more, but not on the order of hundreds or thousands. What if the real ultimate theory of physics required a working memory of 1000 objects? We'd be forever blocked from grasping it, like an ant trying to comprehend the stock market. Programmers live at the edge of this grasping power and know the horror of not being able to take it all in at once, and it's all too easy to get into such a situation.
It is possible that there is no general intelligence anywhere. Intelligence is always intelligence of a specific environment, solving specific types of problems. A general intelligence would need a much more varied and challenging environment in order to develop.
The more complex the environment, the higher the intelligence of its agents. So there will always be an upper limit to intelligence, and the environment has a lot to do with where it sits. No intelligence is truly general.
So, basically, the author made a contrived definition of their own, and hand-waved about how humans don't meet it...
This seems like a weak argument unless you are restricting humans to have no tools (in particular some kind of pen/paper or other storage mechanism).
Even a 3-register machine is Turing-complete. Any reasonable formalization of the human brain's computation with 7 objects in working memory can simulate a brain with N objects in working memory, given access to external storage.
How to do this, and whether it's effective and worth doing, is a separate question.
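To make the point concrete, here's a toy sketch of my own (not from any comment above, and the 7-item limit is just the folk figure): an agent whose "working memory" can hold at most 7 items at a time can still compute over 1000 "objects" by paging through external storage, the way a person would with pen and paper.

```python
# Toy illustration: a fixed working-memory limit plus unbounded external
# storage is enough to process arbitrarily many items.

WORKING_MEMORY_LIMIT = 7  # the proverbial "7 objects at once"

def max_with_limited_memory(external_storage):
    """Find the max of arbitrarily many numbers while never holding
    more than WORKING_MEMORY_LIMIT items "in mind" at once."""
    best = None
    # Load chunks of 6 so that the running best fits as the 7th item.
    step = WORKING_MEMORY_LIMIT - 1
    for i in range(0, len(external_storage), step):
        chunk = external_storage[i:i + step]
        candidates = chunk + ([best] if best is not None else [])
        assert len(candidates) <= WORKING_MEMORY_LIMIT
        best = max(candidates)
    return best

numbers = [(37 * k) % 1009 for k in range(1000)]  # 1000 "objects"
print(max_with_limited_memory(numbers) == max(numbers))  # True
```

Finding a maximum is of course trivial; the point is only that the constant memory bound costs generality nothing, merely time and external scratch space.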
It may certainly be the case that we will never truly "grok" the mysteries of the universe in the way that one may grok an elegant algorithm. But we can discover the mysteries and convince ourselves they are true, even if perhaps in a slow and roundabout way. For an example of what this may look like, take a gander at how we proved the four-color theorem using computers https://en.wikipedia.org/wiki/Four_color_theorem#Proof_by_co... :
> Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,936 reducible configurations (later reduced to 1,476) which had to be checked one by one by computer and took over a thousand hours
It's not as beautiful as other mathematical theorems, but any human with enough pen, paper, and time could write or verify this proof themselves. After all, a computer did so, and since we can simulate a Turing machine step by step, that means we can too.
It's not just for fun, you can get a good sense of the algorithm. One of the things it is somewhat prone to is some weird looping, like this: https://www.reddit.com/r/SubSimulatorGPT2/comments/d1nwdg/if... in which the algorithm generates the sentence "Toss some leeches around and wait 'til we get there." (no, it does not make any more sense in context), and then repeats that sentence nearly (but not quite!) exactly 23 more times. (I expect this is a consequence of the way it is tracking some internal state; I assume these sentences are strange attractors in some sort of state that is getting iteratively modified.)
You can also see that while it picks up some deep structure, a check of anything trained on /r/jokes (https://www.reddit.com/r/SubSimulatorGPT2/comments/d055mt/a_... ) or /r/math (https://www.reddit.com/r/SubSimulatorGPT2/comments/d1yz1e/ho... ) shows the algorithm is definitely unable to deal with deeper structure right now. The /r/jokes bot is humorous in its complete lack of humor, I mean, well beyond any sarcastic snark about how unfunny /r/jokes may be. It has the structure of jokes. There was one recent one that even asked "What's a pirate's favorite letter?", and the bot had noticed the answer was being given in the form of letters, but I don't think a single instance of the bot proposed "r". But it does not understand humor in the slightest. Of the several dozen attempts at jokes I've at least skimmed, I believe it only achieved something that was at least recognizable as an attempt at humor once, and it still wasn't that funny. Likewise with math. It's got a good idea there's these "prime number" things and they're pretty important, but I've seen at least half-a-dozen wrong definitions of what one is.
It's a very interesting algorithm. It's a great babbler. But on its own, it's not a great solution to generating text. Although it may very well be able to generate text that can pass a casual skim, as the article suggests. Still, it takes human curation to get that far. Any human who can read is going to guess something's inhuman about repeating "Toss some leeches around and wait 'til we get there." 24 times in a row.
Dude, my malady got worse, not better. The comments there are up to a better standard than most Youtube or Facebook comments.
To me this actually sounds hilarious, if it is as you describe it. It's the kind of joke you'd find on The Office:
Dwight: "What's a pirate's favourite letter?"
> "Toss some leeches around and wait 'til we get there."
Like the above poster said, I've seen this style of spam-y context-free meme on reddit before too.
Well, here's the original I was referring to: https://www.reddit.com/r/SubSimulatorGPT2/comments/c8klvj/wh...
I was wrong. R was suggested in some of the replies. But the original answer is given as "The M", and contains gems like '"r" is a misspelled letter, and "m" is a misspelled letter. "M" is a misspellable word you can't use as a word in the English language.'
"Like the above poster said, I've seen this style of spam-y context-free meme on reddit before too. "
That would make sense. I guess I just don't frequent those corners. GPT-2 is clearly capable of picking up on structure, so if it sees something repeated it doesn't just notice "this particular thing is repeated a lot"; it picks up some concept of repetition itself. A number of the bots have picked up the concept of quoting the message they're replying to. (In the meta subreddit for this, the creator has said the posts and the replies are trained as separate corpora, so the replies "know" they are replies. I gather there are also enough markers that the bots can distinguish between title, post text, and subsequent replies.)
It could generate all our news articles in whatever style we prefer.
Bye bye, freelance writers.