I always find it fascinating that things like subroutines, which are obvious to us now, had to be invented by someone. You can easily imagine an alternate history where computing would look totally different.
Lots of things are invented multiple times. It's more common than one might presume. I'm generally wary of claims that someone was first to invent fairly obvious things.
In physics people will eventually find things because they are laws of nature. There is no law of nature that subroutines should exist, so it's very possible they would never have been invented and people would do things in a different way.
> I always find it fascinating that things like subroutines, which are obvious to us now, had to be invented by someone.
This is just the anthropic principle. The fact that somebody was first doesn't mean something wasn't obvious.
I like to complain about De Morgan's Laws, which state that:
(1) If neither of two things is true, then they're both false. (And vice versa.)
(2) If two things aren't both true, then one or both of them are false. (And vice versa.)
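A trivial Python sanity check, just to make the two statements concrete (purely illustrative):

    # De Morgan's laws, checked over every combination of truth values.
    for a in (True, False):
        for b in (True, False):
            assert (not (a or b)) == ((not a) and (not b))   # law (1)
            assert (not (a and b)) == ((not a) or (not b))   # law (2)
    print("both laws hold for every combination")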
Augustus De Morgan lived in the 1800s, but obviously these laws were well known for many millennia before him. Having these laws named after you is the honorable-mention-in-the-Special-Olympics of making discoveries. It's an insult to the concept of naming things after people.
I once asked a mathematics professor why so many obvious or natural results ("Cauchy-Schwarz" being the most contentious) are named after people, when something more descriptive would do. His response: "It makes it easier to refer to those theorems by some memorable name."
That's a brilliant way to put it, makes total sense. What would you say was invented by total happenstance, such that the world would be totally different if that person hadn't invented it?
Arguably, duodecimal would have been better. 12 is divisible by a larger number of factors, and it has some more interesting aspects outlined here: https://en.wikipedia.org/wiki/Duodecimal
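For what it's worth, the factor argument in a couple of throwaway lines of Python (just illustration):

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    print(divisors(10))   # [1, 2, 5, 10]
    print(divisors(12))   # [1, 2, 3, 4, 6, 12]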
And when was the last time you counted something on your fingers that was more than 3?
Also, quoting parent's link:
> It is possible to count to 12 with the thumb acting as a pointer, touching each finger bone in turn. A traditional finger counting system still in use in many regions of Asia works in this way, and could help to explain the occurrence of numeral systems based on 12 and 60 besides those based on 10, 20 and 5. In this system, the one (usually right) hand counts repeatedly to 12, displaying the number of iterations on the other (usually left), until five dozens, i. e. the 60, are full.
> We got 10 instead because we have 10 fingers to count on.
People have counted in 12s by reference to the knuckles on each finger.
Even more commonly, people have counted in 20s by reference to fingers and toes.
You can point to fingers as the reason for a base 10 system instead of a base 11 system, but you can't really point to them as the reason for a base 10 system uniquely.
> Is there an advantage in going base 10 vs base 11?
Yes, you can divide by powers of two and five and exactly represent the result. When did you last feel an urge to divide something into 11 parts?
> Now that I think about it base 8 may be better.
Well, that would have given us nicer shutter times and screw sizes, "Megabytes", max_ints, and better floats for money calculations, as well as slightly better numerical properties. But personally I think 6 or 30 would have been the way to go. Being able to divide nicely by the first two primes seems pretty practically useful to me.
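A rough way to compare the candidate bases (a quick Python sketch, not a rigorous argument): 1/n has a terminating expansion in base b exactly when every prime factor of n divides b, so we can just list which small denominators each base handles cleanly.

    from math import gcd

    def terminates(n, base):
        # 1/n has a finite expansion in `base` iff every prime factor of n divides base.
        while (g := gcd(n, base)) > 1:
            while n % g == 0:
                n //= g
        return n == 1

    for base in (6, 8, 10, 11, 12, 30):
        print(base, [n for n in range(2, 13) if terminates(n, base)])

Base 8 only handles the powers of two, base 11 handles almost nothing, 6 and 12 pick up thirds and ninths as well, and 30 additionally gets fifths and tenths.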
“ Progress in hardware has followed an amazingly steady curve in the last few decades. Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)”
At around that time, there was a popular press piece from some MIT people extrapolating our brain's computing capacity from some studies about our retinas. They compared it with the Moore's law progression to conclude that by ~2030 supercomputers would be as capable as our brains.
Now, of course, it was known at the time that those studies underestimated our brains, and that it was very unlikely Moore's law would continue for all that time. Yet that didn't stop some people from convincing themselves that everything they concluded was true.
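For what it's worth, the shape of that kind of extrapolation is just compound doubling; here's a sketch with made-up round numbers (none of these figures are from the study in question):

    import math

    # Illustrative placeholders only, not the study's actual numbers.
    brain_ops_per_sec = 1e16      # assumed estimate of the brain's capacity
    machine_ops_per_sec = 1e12    # assumed machine capacity in the start year
    start_year = 2000             # assumed start year
    doubling_years = 1.5          # assumed "Moore's law" doubling time

    doublings = math.log2(brain_ops_per_sec / machine_ops_per_sec)
    print(round(start_year + doublings * doubling_years))   # ~2020 with these inputs

Change any of the inputs a little and the predicted year shifts by years, which is exactly why such forecasts keep sliding.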
Imo, the hard part is not the processing power but the programming. It's not like we know how to create superhuman intelligence that runs at 1/100000 of normal speed and are just waiting for faster hardware.
We still have 10 more years to go then. But honestly I'd argue that we have already achieved at least some level of "greater than human" intelligence. Not general intelligence, mind you, but none the less.
I agree that definition matters. I don't think we are really significantly closer to AGI than we were 20 years ago, but we've definitely demonstrated greater-than-human "chess intelligence", etc., in particular sorts of constrained domains.
In that very narrow sense that computers are more intelligent than humans (like Chess-playing), abacuses were also more intelligent than humans thousands of years ago (mental math).
I agree with you. The point was not that this proves we reached human-level intelligence. However, we should keep in mind that the goalposts have been moved numerous times already. Once a problem that was initially considered an AI problem was solved, it was no longer called AI. It is likely that any realistic goal we set ourselves, once it is eventually solved, will suffer the same fate. Perhaps that's characteristic of AI as a research discipline.
The problem is "intelligence" is not a very well defined concept. Usually, we define it (consciously or not) as "something the human is good at, and animals / artificial tools cannot". So, of course, as soon as a machine is able to do something, it does not fit the definition anymore.
That's why things that were considered a sign of intelligence a century ago (things like being good at basic calculation, having a good memory for facts or events, being able to retrieve some piece of information from a big pile of documents) stopped being considered intelligence as soon as computers appeared. Chess playing was still the typical intelligent activity, but it stopped being so as soon as Deep Blue won a game. Being able to go from point A to point B with nothing but a map, and being able to say "hmm, let's avoid this road, there's usually a lot of traffic at this time of day", was too, before the advent of connected GPS. And so on.
Now that we're getting closer and closer to passing Turing's test, there are more and more people claiming the test is pretty bad after all, and isn't a good way to assess intelligence.
In a way, "artificial intelligence" is an oxymoron. It's a battle that cannot be won.
On topic for the article: I don't think the Turing test is a very good test at all. It's trivially gamed by having the bot pretend to be stupid, a child, or a non-native speaker. When you hear it described, it was clearly intended to involve a highly intelligent bot, capable of discussing art and other "human" interests. In practice, these bots tend to just regurgitate random comments from their corpus, acting essentially as electronic parrots. Even so, they regularly fool many of the "judges" in these Turing tests, who themselves probably couldn't have an intelligent conversation about art or poetry to begin with.
Humans are, on the whole, terrible at detecting deception. The test should be left in the history books. We need a real intelligence test.
> I don't think the Turing test is a very good test at all. It's trivially gamed by having the bot pretend to be stupid, a child, or a non-native speaker. When you hear it described, it was clearly intended to involve a highly intelligent bot, capable of discussing art and other "human" interests.
As you note here, the game described by Turing sets a very high bar. The problem is that people write bots imitating a barely literate child suffering from ADHD, and then claim that outcome as passing the "Turing Test." Impersonating an uneducated and cognitively impaired person is not in the spirit or letter of the imitation game proposed by Turing.
The example dialogs that Turing wrote exhibit a high level of verbal fluency, specific factual knowledge, and context awareness:
I think that the spirit of the Turing test, determining intelligence through behaviour, is basically right. The only problem is the _choice_ of behaviour. "Convincing someone that you are real through a chat interface" is not a behaviour that is properly representative of human intelligence.
I think a better take on the Turing test would be performing a job. How about an SRE who works remotely? If there were a bot that could perform a remote SRE job to the same level as its colleagues (writing and deploying code, having conversations on Slack with coworkers, giving code reviews, conducting interviews with candidates), I'd call it intelligent. Maybe we need to add a couple of stipulations; e.g., the bot is not allowed to outsource these tasks to a human. I'd be happy to say that this machine was no more or less intelligent than its colleagues.
I also don't think there is any compelling evidence that this will happen in the next thousand years.
"Maybe it is a human who is typing gibberish for X reason" is not a good objection to the idea of a turing test.
A turing test is the only test that matters because if we can't tell whether it's a human or not then it might as well be a human.
Anyway, the idea that we have built human-level intelligence is silly. We have algorithms that can beat humans at tasks humans consider hard. That's quite an achievement but it's not what Vinge was talking about.
Good luck doing better. Turns out Turing was pretty smart when he thought this up.
> When you hear it described, it was clearly intended to involve a highly intelligent bot, capable of discussing art and other "human" interests. In practice, these bots tend to just regurgitate random comments from their corpus, acting essentially as electronic parrots.
Whilst this is true, Google's Meena bot is getting much closer to the original intent:
Meena Conversation 75
Human: Hi!
Meena: Hey! How are you today?
Human: Good! Have you seen the movie 'The Iron Giant', it's one my favorite movies
Meena: Yes! I love that movie!
Human: it's the best. Which part do you like the most?
Meena: Honestly, I love the whole movie.
Human: Brad Bird is an amazing director. I worked with him on "Incredibles 2"
Meena: Really? That's awesome! What do you do?
Human: I"m a <REDACTED_TERM> and <REDACTED_TERM>. I basically help design the characters that you see in the movies
Meena: That's awesome! That's pretty cool.
Human: yup! Maybe one day you can be one of those cool characters yourself!
Meena: That would be the coolest.
Human: What do you want to be later on in your life?
Meena: I want to own my own business.
Human: cool! what kind?
Meena: An art gallery or bookstore.
Human: My mom had a similar dream. Of owning a coffee shop somewhere in Mexico
Meena: That would be nice. I like coffee.
Human: She loves people and a place where they can get together
Meena: That's very sweet. What does she do?
Human: Currently she's a real-estate agent
Meena: That's very interesting. Does she enjoy being a real estate agent?
Human: I think so!! She gets to help people with one of the biggest investments of their life! Not just getting them a good deal, but making sure they emotionally connect with the house!
Meena: That's very nice of her
and
Meena Conversation 88
Human: Hi!
Meena: Why hello there! How's it going?
Human: It's going okay. Another typical Wednesday. How are you?
Meena: I'm good thanks! What do you do on Wednesdays?
Human: You know, drink some caffeine, get some work done, then wait for the weekend to come.
Meena: Sounds like a typical Wednesday to me, too!
Human: What is your choice of caffeine?
Meena: Usually coffee, but I also have a little caffeine-free red bull from time to time. What about you?
Human: I'm more of a tea drinker. Green tea is my first choice. Although I also really like Chai. Do you ever drink tea?
Meena: Sometimes, but it's not as common for me. I like to keep the stimulants separate. Do you make tea at home, or do you buy it somewhere?
Human: Sometimes I make tea at home. Mostly I drink the tea provided at work.
Meena: Nice! Do you have a favorite kind?
Human: Yeah, I already told you. Remember?
Meena: I meant an actual brand, like Twinning's, PG Tips, Tetley's, Dilmah, etc. Not a type of tea (i.e. Earl Grey)
Human: Ah, now I understand. Yes, I really like Numi. Have you tried that before?
Meena: Nope, but I'll definitely check it out!
Human: In particular, try the Mate Lemon.
Meena: I'll make sure to do that.
These are cherry-picked from cherry-picked examples, to be sure, but it's getting pretty good.
If you're suggesting that something like a chess engine is intelligent then we might as well call a forklift intelligent because it can lift boxes I can't lift.
Pretty much the opposite is true. Virtually all progress in 'AI' has been in pattern matching and purely numeric or symbolic manipulation of simple objects or functions. We aren't even remotely close to 'breathing life into a machine', which is what people like Vinge had in mind, and what people like Minsky had in mind when the term artificial intelligence was coined.
To be a little cynical, I remain unconvinced that humans are not just pattern matching too.
Go players thought there was something uniquely human about mastering Go... until an AI proved them wrong.
We are just beginning to see that machines are better at diagnosing some diseases earlier and with higher precision than human experts...
Can you even think of a behavior that isn't pattern matching, at its core? I'd argue it doesn't exist. Humans are awful at random number generation. If you really think about it, there is no evidence that humans are doing anything other than crunching numbers.
> Can you even think of a behavior that isn't pattern matching, at its core? I'd argue it doesn't exist.
Yes, quite trivial ones, for example Winograd schemas:
(1) John took the water bottle out of the backpack so that it would be lighter.
(2) John took the water bottle out of the backpack so that it would be handy.
What does "it" refer to in each sentence? ML systems absolutely suck at this because the structure of the two sentences is identical. The answer is derived from understanding the meaning of the sentence.
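To make that concrete, here's the pair written out as data (a minimal sketch; the field names are mine, not from any benchmark). The surface strings differ by a single word, yet the correct referent flips:

    # Winograd-style pair: near-identical sentences, but the referent of "it" flips.
    schema = [
        {
            "sentence": "John took the water bottle out of the backpack so that it would be lighter.",
            "candidates": ["the water bottle", "the backpack"],
            "answer": "the backpack",      # removing the bottle makes the backpack lighter
        },
        {
            "sentence": "John took the water bottle out of the backpack so that it would be handy.",
            "candidates": ["the water bottle", "the backpack"],
            "answer": "the water bottle",  # the bottle is what becomes handy once it's out
        },
    ]

Nothing in the strings themselves tells you which candidate is right; you need to know what backpacks and bottles are.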
ML systems in NLP right now essentially correlate strings without having any idea of the underlying world. To solve a problem like the one above you need to go from syntax -> model of the world -> back to syntax. That would constitute actual understanding.
Similar problems are of course questions of the kind "X was alive in 2000. X was alive in 2010. Was X alive in 2005?" Pattern matching doesn't help you here; you need a deep model of the world to actually answer the question (unless you're guessing).
The reason pattern matching can't solve every problem is that real-world problem spaces are open-ended. There are infinitely many sentences and infinitely many problems, some entirely new, with no solution in the historical data.
The problem is that no matter how far back you go, it's always "in the next 30 years". I remember finding articles from the 1930s, 40s, and 50s all arguing robots and AI were on the verge of becoming common. 80+ years later, computers are still glorified abacuses.
Off topic, but I found the advertisement all the way down at the bottom of the article quite amusing. "Ready to run software" :) We get very similar ads today from cloud providers like Amazon.
https://en.wikipedia.org/wiki/Maurice_Wilkes