IBM is not doing "cognitive computing" with Watson (2016) (rogerschank.com)
904 points by dirtyaura 5 months ago | hide | past | web | favorite | 435 comments



The author's particular gripe is with the Watson advertisements showing someone sitting down and talking to "Watson." They bother me as well (and did so when I was working at IBM in the Watson group) because they portray a capability that nothing in IBM can provide. Nobody can provide it (again, to the author's point) because dialog systems (those which interact with a user through conversational speech) don't exist outside specific, tightly constrained decision trees (like voice mail or customer support prompts).

If SpaceX were to advertise like that, they would have famous people sitting in their living room, on Mars, talking about what they liked about the Martian way of life. In that case I believe that most people would understand that SpaceX wasn't already hosting people on Mars.

Unfortunately many, many people think that talking to your computer is actually already possible, they just haven't experienced it yet. Not sure how we fix that.


It all goes back to how the Watson that played Jeopardy! - what the vast majority think of when they hear the word "Watson" - was a really cool research experiment and amazing advertising.

A lot of the people that pay for "Watson" probably think they're paying for something really similar to the Watson that beat Ken Jennings at Jeopardy! and cracked jokes on TV. They're paying for something that might use some of the same algorithms and software, but they're not actually getting something that seems as sentient and "clever" as what was on TV.

To me, the whole "Watson" division does seem like false advertising.


Yup. And they have sold it as a "potential solution" to everything data processing related. Hell, they even got NASA to pay for it:

https://www.space.com/35042-ibm-watson-computer-nasa-researc...


> Unfortunately many, many people think that talking to your computer is actually already possible, they just haven't experienced it yet.

Given how often another person can't correctly infer meaning when people talk, I doubt it will ever be how people imagine it.

Initially, I imagine it will be a lot of people trying to talk normally and a lot of AI responses asking how your poorly worded request should be carried out: choice A, or choice B. You'll be annoyed, because why can't the system just do the thing that's right 90% of the time? The problem is that 10% of the time you're just not explaining yourself well at all, and 10% is really quite a lot of the time - probably ranging from a few requests to dozens of requests a day - and while being asked what you meant is annoying, having the wrong thing done can be catastrophic.

As humans, we get around this to some degree by modeling the person we are conversing with - thinking "what would they want to do right now" - which is another class of problem for AIs, I assume. So I think we may not get human-level interaction until AIs can fairly closely model humans themselves.

I imagine we'll probably develop some AI pidgin language that provides more formality; as long as we follow it, AI voice interactions become less annoying (but you have to spend a bit of time beforehand to formulate requests).

Then again, maybe I'm way off base. You sound like you would know much better than me. :)


In a previous job I looked into the viability of pen computing, specifically which companies had succeeded commercially with a pen-based interface.

A lot of folks are too young to know this, but there was a time when it was accepted wisdom that much of personal computing would eventually converge to a pen-based interface, with the primary enabling technology being handwriting recognition.

What I found is that computer handwriting recognition can never ever be good enough to satisfy people's expectations. Why? Because no one is good enough to satisfy most people's expectations for handwriting recognition, including other humans, and often even including themselves!

Frustration with reading handwriting is a given in human interaction. It will be a given in computer interaction too--only, people don't feel like they need to be polite to a computer. So they freely get mad about it.

The companies that had succeeded with pen computing were those that had figured out a way to work around interpreting natural handwriting.

One category was companies that specialized in pen computing as an art interface--Wacom being the primary example at the time. No attempt at handwriting recognition at all.

The other was companies who had tricked people into learning a new kind of handwriting. The primary example is the Palm Pilot's "Graffiti" system of characters. The neat thing about Graffiti is it reversed customer expectations. Because it was a new way of writing, when recognition failed, customers often blamed themselves for not doing it right!

We all know what happened instead of pens: touchscreen keyboards. The pen interface continues to be a niche UI, focused more on art than text.

It's interesting to see the parallels with spoken word interfaces. I don't know what the answer is. I doubt many people would have predicted in 2001 that the answer to pen computing was a touchscreen keyboard without tactile keys. Heck, people laughed about it in 2007 when the iPhone came out.

But I won't be surprised if the eventual product that "solves" talking to a computer, wins because it hacks its way around customer expectations in a novel way--rather than delivering a "perfect" voice interface.


> The other was companies who had tricked people into learning a new kind of handwriting [...] It's interesting to see the parallels with spoken word interfaces. I don't know what the answer is.

Well, here's a possibility: we'll meet in the middle. Companies will train humans into learning a new kind of speech which is more easily machine-recognisable. These dialects will be very similar to English but more constrained in syntax and vocabulary; they will verge on being spoken programming languages. They'll bear a resemblance to, and be constructed in similar ways to, jargon and phrasing used in aviation, optimised for clarity and unambiguity over low-quality connections. Throw in optional macros and optimisations which increase expressive power but make it sound like gibberish to the uninitiated. And then kids who grow up with these dialects will be able to voice-activate their devices with an unreal degree of fluency, almost like musical instruments.
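For fun, here's a toy sketch of what the first step of such a dialect might look like: a tiny closed grammar (verb, then object, then free arguments), in the spirit of aviation phraseology. All the vocabulary below is made up for illustration:

```python
# Hypothetical mini-dialect: every command is VERB OBJECT [ARGS...],
# drawn from a closed vocabulary, so recognition is unambiguous.
GRAMMAR = {
    "verbs": {"set", "read", "send", "cancel"},
    "objects": {"alarm", "message", "timer", "list"},
}

def parse(utterance):
    """Return a parsed command, or None if the utterance falls outside
    the dialect (at which point the device would ask you to rephrase)."""
    words = utterance.lower().split()
    if len(words) >= 2 and words[0] in GRAMMAR["verbs"] \
            and words[1] in GRAMMAR["objects"]:
        return {"verb": words[0], "object": words[1], "args": words[2:]}
    return None
```

So `parse("set alarm seven thirty")` is accepted, while the perfectly natural "could you maybe set an alarm" is rejected outright - which is exactly the pressure that trains the speaker into the dialect.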


Of course, voice assistants are already doing this to us a bit. Once I learn the phrasing that works reliably, I stick with it, even though repeating "Add ___ to shopping list" for every item feels like the equivalent of GOTO statements.

The challenge with learning anything new from our voicebots is the same as it already is: discoverability. I don't have the patience to listen to Siri's canned response when it's done what I've asked it to, and I'm probably not going to listen to it tell me "Next time, you can say xyz and I'll work better."

The easiest way to learn a new language is to be a kid and listen to native speakers converse. Unless Amazon produces a kids' show starring a family of Echos speaking their machine code, I don't see that happening.


I miss the times before voice assistants, when the user was expected to train a voice interface. That way I could, if needed, negotiate pronunciation with the machine.


Your comment reminded me of an interesting video I saw a while back. The video is a talk [0] by a programmer explaining and demonstrating how he developed and used a voice recognition system to program in emacs, after a significant RSI injury.

The commands and syntax he uses to interact with the system might be a decent approximation of the custom English syntax you suggest.

[0]: https://youtu.be/8SkdfdXWYaI?t=537


If that happened, but went the way of Palm Pilot's "Graffiti" system, what would be the analogue of touchscreen keyboards?


I'd just like to see a voice interface that makes any attempt at natural error correction.

We do this in a hundred ways in English (emphasis, meta-words), too many to fold into a system. But with a little user prodding, I think you could cover enough of this ground to still sound almost natural.

  Asst.: Would you like me to read that back to you?
  User: Yes
  A: The click brown ox...
  U: The _quick_ brown _fox_...
  A: Ok. The the quick brown fox jumped over the lazy log. Is that correct?
  U: No.
  A: Which part [is incorrect]?
  U: The the quick
  A: The the quick. What should it be?
  U: The quick.
  A: The quick brown fox jumped over the lazy log. Is that correct?
  U: No. Lazy dog.
  A: Lazy log?
  U: No. Lazy dog.
  A: Spell "log." <Or "Spell" + recording of user's voice saying ambiguous word>
  U: D-O-G.
  A: Ah. The quick brown fox jumped over the lazy dog. Do I have it now?
  U: Yes.
  A: Say 'send' to send message.

Agreed that voice systems are an incredibly hard problem, but I suspect rule-based natural language correction systems would go a long way.
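For what it's worth, the core of that transcript can be sketched as a simple loop with a spelling fallback. `ask` here is a stand-in for a full speech round-trip (prompt out, recognized reply back), so everything below is hypothetical:

```python
def correction_loop(transcript, ask):
    """Rule-based error correction: read the transcript back, let the
    user name the bad span, and replace it with their fix.
    `ask(prompt)` is a hypothetical speech round-trip."""
    while ask(f"{transcript} -- is that correct?") != "yes":
        bad = ask("Which part?")
        good = ask(f"{bad} -- what should it be?")
        if good == bad:
            # The recognizer heard the same thing again: fall back to
            # spelling the ambiguous word letter by letter.
            good = ask(f'Spell "{bad}".').replace("-", "").lower()
        transcript = transcript.replace(bad, good, 1)
    return transcript
```

Driven by canned replies ("no", "log", "log", "D-O-G", "yes"), this turns "the quick brown fox jumped over the lazy log" into the right sentence, much like the dialogue above. Obviously real speech recognition adds fuzziness at every step; this only sketches the control flow.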


I would absolutely love a system like this. Like having a pet that you teach to sit. It’s so rewarding when they get it right. :)

If people enjoyed this kind of interaction with their computer, it would become commonplace and they'd likely get much better at it.


I'm not so sure. Maybe I'm naive but I could see imitation and reinforcement learning actually becoming the little bit of magic that is missing to bring the area forward.

From what we know, those techniques play an important role in how humans themselves go from baby gibberish to actual language fluency - and from what I understand there is already a lot of ongoing research into how reinforcement learning algorithms can be used to learn walking cycles and similar things.

So I'd say, if you find a way to read the non-/semiverbal cues humans use to communicate understanding or misunderstanding among themselves, and use those as penalties/rewards for computers, you might be on the way to learning some kind of language that humans and computers can communicate in.

The same goes in the other direction. There is a sector of software where human users not only frequently master highly intricate interfaces, they even do it out of their own motivation and pride themselves on achieving fluency: computer games. The secret here is a blazing fast feedback loop - which gives back penalty and reward cues in a similar way to reinforcement learning - and a carefully tuned learning curve which starts with simple, easy-to-learn concepts and gradually becomes more complex.

I would argue that if you combine those two techniques - using penalties and rewards to train the computer, and communicating back to the user how well the computer understood them - you might be able to move to a middle-ground representation without it seeming like much effort on the human side.
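As a rough sketch of that idea, you could treat each ambiguous phrase as a bandit problem where the user's reaction (proceeding vs. repeating/correcting) is the reward signal. Everything here - the phrases, the cues, the +1/-1 encoding - is invented for illustration:

```python
import random
from collections import defaultdict

class PhrasingBandit:
    """Toy reinforcement sketch: pick an interpretation of an ambiguous
    phrase, then update its estimate from the user's reaction
    (+1 = user proceeds, -1 = user repeats or corrects)."""

    def __init__(self, epsilon=0.1):
        self.value = defaultdict(float)  # (phrase, interpretation) -> estimate
        self.count = defaultdict(int)
        self.epsilon = epsilon           # occasional exploration

    def choose(self, phrase, interpretations):
        if random.random() < self.epsilon:
            return random.choice(interpretations)
        return max(interpretations, key=lambda i: self.value[(phrase, i)])

    def feedback(self, phrase, interpretation, reward):
        key = (phrase, interpretation)
        self.count[key] += 1
        # incremental mean of observed rewards
        self.value[key] += (reward - self.value[key]) / self.count[key]
```

After a handful of interactions where "resume" makes the user happy and "shuffle" makes them repeat themselves, the estimate for "resume" dominates - which is basically the fast penalty/reward loop from games, pointed at a dialog system.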


The first generation Newton was insanely good at recognizing really terrible cursive handwriting and terrible at recognizing tidy or printed handwriting. The second generation Newton was very good at recognizing tidy handwriting (and you could switch back to the old system if you were one of the freaks whose handwriting the old system grokked). The Newton also had a really nice correction system, e.g. gestures for changing the case of a single character or word. In my opinion the later Newton generations recognized handwriting as well as you could expect (when they failed you forgave them and could easily correct the errors) and were decently fast on the 192MHz ARM CPUs.

Speech is way, way harder.


I got a two-in-one Windows notebook and I was really, really impressed with the handwriting recognition. It can even read my cursive with like 95% accuracy, which I never imagined. That said, it's still slower than typing.


> ...That said, it's still slower than typing.

Finger-on-nose! It seems to be a question of utility and efficiency rather than primarily an emotional response to effectiveness of a given UI.

I type emails because it's faster. I handwrite class notes because I often use freeform groupings and diagrams that would make using a word processor painful.


On the other hand, learning to type took some effort and I see many people who do not know how to properly type. The case for handwriting might have looked better if you assumed people would not all want to become typists.


Back in the early 2000s, the hybrid models (the ones with a keyboard) became the most popular Windows tablet products. A keyboard is a pretty convenient way to work around the shortcomings of handwriting recognition.

Most owners eventually used them like laptops unless illustrating, or operating in a sort of "personal presentation mode," where folding the screen flat was convenient. For most people, handwriting recognition was a cool trick that didn't get used much (as I suspect it was for you, eventually).

The utility of a handwriting recognition system is ultimately constrained by the method of correcting mistakes. Even 99% accuracy means that on average you'll be correcting about one word per paragraph, which is enough to feel like a common task. If the only method of correction is via the pen, then the options are basically to select from a set of guesses the system gives you (if it does that), or rewrite the word until it gets recognized.

If a product has a keyboard, though, then you can just use the keyboard for corrections. But if you're going to use the keyboard for corrections, why not just use it for writing in the first place...


That's all true, but for a phone the writing input might be worth considering. That might be too optimistic though.


> it's still slower than typing

In alphabetic languages. What about Chinese, or Japanese?


There is software that allows you to draw the characters on a touch screen, but it's still slower than typing and most people either type the words phonetically (e.g. in Pinyin) or use something like Cangjie which is based on the shape of character components. Shape-based methods are harder to learn but can result in faster input.

For Japanese there are keyboard-based romaji and kana based input methods.

The correct Hanzi/Kanji is either inferred from context or, as a back-up, selected by the user.


Almost nobody uses kana input in Japanese. There are specialized systems for newspaper editors, stenographers, etc., that are just as fast as English-language equivalents, but learning them is too difficult for most people to bother.


Huh? The standard Japanese keyboard layout is Hiragana-based, and every touchscreen Japanese keyboard I've used defaults to 9key kana input. Do people really normally use the romanji IME instead?


Yes. Despite the fact that kana are printed on the keys, pretty much no Japanese person who is not very elderly uses kana input. They use the romaji (no N) input instead, and that's the default setting for keyboards on Japanese PCs.

You are right that the ten-key method is more common on cell phones (and older feature phones had kana assigned to number keys), although romaji input is also available.


Typing in Japanese is about half as fast as English if you're good. Still better than handwriting.


On the other hand, typing on a smartphone is faster in Japanese. (If you use the 10-key input method.)


Not sure about that. I guess it depends on how good the prediction is for your text, but in general the big problem is that the step where you select what character you actually meant takes time.


I've always thought it'd be nice to be able to code at the speed of thought - you can often envisage the code or classes that'll make up part of a solution, it's just so slow filtering that through your fingers and the keyboard; if the neural interface 'thing' could be solved that'd be a winner I think, both for coding and manipulating the 'UI'.

It seems within the bounds of possibility to me (from having no experience in the field ;), if patterns could be predictably matched in less time than it takes to whack the keyboard or move the mouse (can EEG or fMRI produce predictable patterns like that?).


I can hardly envision code or classes at all. Most of my thoughts are so nebulous it takes my full concentration just to write them down.

I doubt we're going to make much progress there anytime soon.


>I've always thought it'd be nice to be able to code at the speed of thought - you can often envisage the code or classes that'll make up part of a solution,

In my experience that 'solution' isn't a solution but rather a poorly guessed attempt at one single part of it. Whenever I've thought I had the architecture of a class all figured out, there is one corner case that destroys all the lovely logic, and you either start from scratch on yet another brittle monolith or accept that you are essentially writing spaghetti code. At that point, documentation that explains what does what and why is the only way to 'solve' the issue of coding.


Unfortunately, commercial-grade non-invasive EEG headsets can only reliably respond to things like intense emotional changes and the difference between sleep/wake states, but not much more than that. A while ago I had some moderate success classifying the difference between a subject watching an emotional short film vs. stand-up comedy. Mostly they are effective at picking up background noise and muscular movement and not much else.

This is based on my experience with the Muse [1] headset, which has 4 electrodes plus a reference electrode. I know there are similar devices with around a dozen or so electrodes, but I can't imagine they're a significant improvement.

So I think we're still a long way off from what you're describing :(

[1] http://www.choosemuse.com


This comment is way too long to justify whatever point you are making. First of all, how long ago do you mean? Are you 100 years old and people in the 1940's thought computers would be operated by pens?

But also, when my college had a demo microsoft tablet in their store in 2003, I could sign my name as I usually do (typical poor male handwriting) and it recognized every letter. That seems pretty good to me.

I also had a Palm Pilot then and Graffiti sucked, but that's just one implementation.


This is my most down-voted comment ever and I have no idea why.


In addition to what novia said about your first sentence, your comment is so weirdly off-base. You have very strong opinions about pen computing, but where in reality are they coming from?

No, we are not talking about the 1940s, why would anyone be talking about the 1940s? How does that make any sense at all?

We aren't talking about a Microsoft demo that worked well for you once, either. We are talking about the fact that, since the 2000s when it was tried repeatedly, people do not like to hand-write on computers. It is a strange thing not to be aware of when you're speaking so emphatically on the topic.


I don't have strong opinions about pen computing, but the comment I replied to was just giving a bad impression of the tech in the early 2000s. It worked quite well actually.

Since he didn't specify the timeframe, I gave him the benefit of the doubt and thought he was talking about before my time.


> It worked quite well actually.

Apart from the problems mentioned in the comment you replied to. You don't seem to have a point other than being contrarian (as a state).


The commenter said that handwriting recognition didn't work well. But, having tried it myself, I know that's not true. My point is to share that data.


Not sure why you would ask for advice on a downvoted comment, but then just argue against the response.


The first sentence is being seen as combative.


I think you have a good point. Something that was fascinating for me was that one of the things the search engine did after being acquired by IBM was to look at search queries for common questions people would ask a search engine: taking 5 years of search engine query strings (no PII[1]) and turning them into a 'question' that someone might ask.

What was interesting is that it was clear that search engines have trained a lot of people to speak 'search engine' rather than stick with English. Instead of "When is Pizza Hut open?" you would see the query "Pizza Hut hours". That works for a search engine because you have presented it with a series of non-stopword terms in priority order.

It reminded me of the constraint-based naming papers that came out of MIT in the late 80's / early 90's. It also suggested to me ways to take dictated speech and reformat it into a search engine query. There are probably a couple of good academic papers in that data somewhere.

That experience suggests to me that what you are proposing (a pidgin type language for AI) is a very likely possibility.
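To illustrate the direction (with a made-up stopword list and rewrite rule, not anything real), the transformation is roughly "drop the question framing, keep the content terms in order":

```python
import re

# Tiny illustrative stopword list -- real search engines use much larger ones.
STOPWORDS = {"when", "is", "what", "who", "the", "a", "an", "do", "you",
             "does", "how", "where", "of", "to", "in", "on", "for"}

# One hypothetical intent rewrite: "when is X open" collapses to "X hours".
REWRITES = [(r"\bwhen\b.*\bopen\b", "hours")]

def to_query(question):
    """Collapse a natural-language question into 'search engine' dialect:
    non-stopword content terms, kept in their original priority order."""
    q = question.lower().rstrip("?")
    suffix = ""
    for pattern, noun in REWRITES:
        if re.search(pattern, q):
            q = re.sub(r"\bwhen\b|\bopen\b", "", q)
            suffix = " " + noun
    terms = [w for w in q.split() if w not in STOPWORDS]
    return " ".join(terms) + suffix
```

So "When is Pizza Hut open?" becomes "pizza hut hours". Going the other way - dictated speech in, a query like this out - is presumably the harder and more interesting half.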

One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question. The most recent example was changing the question

"Do you think you will see Avengers: Infinity War"

to things like "Do you think you will see Avengers something I swear." The sounds are similar, and people have already jumped ahead to the question with enough context that they always answer as if he had said "Infinity War." How conversational systems will deal with that, I have no idea.


> One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question. The most recent example was changing the question

Yeah, that's exactly what I'm talking about. I suspect this works because the interviewees have a model of what they think Jimmy Fallon is going to ask, and when hearing a question, if they can't make out words or they sound wrong, the simplest thing is to assume you heard them wrong and think what it's likely he would have asked. To that, the answer is fairly obvious.

We've all experienced those moments when we're caught off guard when this fails. For example, you're at work and someone you're working with says something that sounds like it was wildly inappropriate, or out of character, or just doesn't follow the conversation at all, and you stop and say "excuse me? What was that?" because you think it's more likely you heard them wrong than to believe you fuzzy matched their statement "Hairy Ass?" and you were correct.


I've been using search engines since Lycos and before, so I reflexively use what you call "search engine" language. However, due to the evolution of Google, I've started experimenting with more English-like queries, because Google is getting increasingly reluctant to search for what I tell it in a mechanistic way, often just throwing away words, and from what I've read it's increasingly trying to derive information from what used to be stopwords.

One thing that is very frustrating now is if there are two or three things out there related to a query, and Google prefers one of them, it's more difficult than ever to make small tweaks to the query to get the one you want. Of course, sometimes I don't know exactly what I want, but I do know I don't want the topic that Google is finding. The more "intelligent" it becomes, the harder it is to find things that diverge from its prejudices.


There was an article in The Atlantic recently that remarked upon this.

The Atlantic: Alexa Is a Revelation for the Blind. https://www.theatlantic.com/magazine/archive/2018/05/what-al...

> Then I stop myself. Isn’t it possible that he expects Alexa to recognize a prompt that’s close enough? A person certainly would. Perhaps Dad isn’t being obstreperous. Maybe he doesn’t know how to interact with a machine pretending to be human—especially after he missed the evolution of personal computing because of his disability.

...

> Another problem: While voice-activated devices do understand natural language pretty well, the way most of us speak has been shaped by the syntax of digital searches. Dad’s speech hasn’t. He talks in an old-fashioned manner—one now dotted with the staccato march of time. “Alexa, tell us the origin and, uh, well, the significance, I suppose, of Christmas,” for example.


>One of the bits that Jimmy Fallon does on the Tonight Show is an interviewer who asks people a question but uses nonsense words that sound "right" in the question.

There was a similar thing going around the Japanese Web a few years ago, where you'd try to sneak in soundalikes into conversation without anybody noticing, but the example I remember vividly was replacing "irasshaimase!" ("welcome!") with "ikakusaimase!" ("it stinks of squid!").


Humans are really good at handling 'fuzzy' input. Something that is supposed to have meaning, and should if everything were heard perfectly, but which has some of the context garbled or wrong.

At least, assuming they have the proper context. Humans mostly fill in the blanks based on what they /expect/ to hear.

Maybe someday we'll get to a point where a device we classify as a 'computer' hosts something we would classify as an 'intelligent life form'; irrespective of origin. Something that isn't an analog organic processor inside of a 'bag of mostly salty water'. At that point persistent awareness and context(s) might provide that life form with a human-like meaning to exchanges with humans.


No, you're completely on the right track.

People usually judge AI systems based on superhuman performance criteria with almost no human baseline.

For example, both Google Translate and Facebook's translation system could reasonably be considered superhuman in performance, because a single system can immediately translate into dozens of languages, more than any single human could cover. Unfortunately people compare these to a collection of the best translators in the world.

So you're exactly on track: humans are heavily prejudiced against even simple mistakes that computers make, yet let consistent, continuous mistakes slide for other humans.


>Unfortunately people compare these to a collection of the best translators in the world.

I don't think that's really true. They're comparing them to the performance of a person who is a native speaker of both of the languages in question. That seems like a pretty reasonable comparison point, since it's basically the ceiling for performance on a translation task (leaving aside literary aspects of translation). If you know what the sentence in the source language means, and if you know the target language, then you can successfully translate. Translation isn't some kind of super difficult task that only specially trained translators can do well. Any human who speaks two languages fluently (which is not rare, looking at the world as a whole) can translate between them well.


> Any human who speaks two languages fluently (which is not rare, looking at the world as a whole) can translate between them well.

No, sadly, that's not the case. A bilingual person may create a passable translation, in that they keep the main gist and all the important words in the original text appear in the translated text in some form, but that does not automatically make it a "well-translated" text. They are frequently contorted, using syntaxes that hardly appear in any natural sentences, and riddled with useless or wrong pronouns.

A good translation requires more skill than just knowing the two languages.


Right, but I don't think many people expect machine translation to do better than that. And I don't think the ability to do good translations is quite as rare among bilinguals as you're making out. Even kids tend to be pretty good at it according to some research: https://web.stanford.edu/~hakuta/www/research/publications/(...

> They are frequently contorted, using syntaxes that hardly appear in any natural sentences, and riddled with useless or wrong pronouns.

I find it hard to believe that this is the case if the translator is a native speaker of the target language. I mean, I might do a bad job of translating a Spanish text to English (because my Spanish sucks), but my translation isn't going to be bad English, it's just going to be inaccurate.


Consider yourself lucky, then. You are speaking English, with its thousands (if not millions) of competent translators translating everything remotely interesting from other languages. The bar for "good translation" is kept reasonably high for English.

As a Korean speaker, if I walk into a bookstore and pick up any book translated from English, and read a page or two, chances are that I will find at least one sentence where I can see what the original English expression must have been, because the translator chose a wrong Korean word which sticks out like a sore thumb. Like, using a word for "(economic/technological) development" to describe "advanced cancer". Or translating "it may seem excessive [to the reader] ..." into "we can consider it as excessive ..."

And yes, these translators don't think twice about making sentences that no native speaker would be caught speaking. Some even defend the practice by saying they are faithful to the European syntax of the original text! Gah.


> Like, using a word for "(economic/technological) development" to describe "advanced cancer"

That sounds like a mistake caused by the translator having a (relatively) poor knowledge of English. A bilingual English/Korean speaker wouldn't make that mistake. I mean, I don't know your linguistic background, but you clearly know enough English and Korean to know that that's a bad translation, and you presumably wouldn't have made the same mistake if you'd been translating the book.

>Some even defend the practice by saying they are faithful to the European syntax of the original text!

I think there's always a tension between making the translation faithful to the original text and making it idiomatic. That's partly a matter of taste, especially in literature.


> A bilingual English/Korean speaker wouldn't make that mistake.

Well, "bilingual" is not black and white. I think you have a point here, but considering that people who are paid to translate can't get these stuff right, the argument veers into the territory of "no true bilingual person".

Anyway, my pet theory is that it is surprisingly hard to translate from language A to B, even when you are reasonably good at both A and B. Our brain is wired to spontaneously generate sentences: given a situation, it effortlessly generates a sentence that perfectly matches it. Unfortunately, it is not trained at all for "Given this sentence in language A, re-create the same situation in your mind and generate a sentence in language B that conveys the same meaning." In a sense, it is like acting. Everybody can laugh on their own: to convincingly portray someone else laughing is quite another matter.


Perhaps they're already using mechanical translation and then only correcting sentences that are basically ungrammatical, not just weird.


People paid to do something are rarely the best at it.

They are however consistent since their pay check depends on it.


>Well, "bilingual" is not black and white.

Not entirely, but it is definitely possible for someone to be a native speaker of two languages, and they wouldn't make those kinds of mistakes if they were.


>They're comparing them to the performance of a person who is a native speaker of both of the languages in question.

Which is synonymous with the best translators in the world. Those people are relatively few and far between, honestly - I've traveled a lot and I'd argue that natively bilingual people are quite rare.


Depends on which part of the world you're in. Have you been to the USA? English/Spanish bilingualism is pretty common there. And there are lots of places where it's completely unremarkable for children to grow up speaking two languages.

https://www.researchgate.net/post/What_is_the_percentage_of_...


This is well said, but one reason this double standard is rational is that current AI systems are far worse at recovery-from-error than humans are. A great example of this is Alexa: if a workflow requires more than one statement to complete, and Alexa fails to understand what you say, you are at the mercy of brittle conversation logic (not AI) to recover. More often than not you have to start over, or worse yet execute an incorrect action and then do something new to cancel. In contrast, even humans who can barely understand each other can build understanding gradually because of the context and knowledge they share.

Our best AIs are superhuman only at tightly scoped tasks, and our prejudice encodes the utility we get from the flexibility and resilience of general intelligence.


> our prejudice encodes the utility we get from the flexibility and resilience of general intelligence

I don't think any particular human is so general in intelligence. We can do stuff related to day to day survival (walk, talk, eat, etc) and then we have one or a few skills to earn a living. Someone is good at programming, another at sales, and so on - nobody is best at all tasks.

We're lousy at general tasks. General intelligence includes tasks we can't even imagine yet.

For thousands of years the whole of humanity survived with a mythical / naive understanding of the world. We can't even understand the world in one lifetime. Similarly, we don't even understand our bodies well enough, even with today's technology. I think human intelligence remained the same, what evolved was the culture, which is a different beast. During the evolution of computers, CPUs remain basically the same (just faster, they are all Turing machines) - what evolved was the software, culminating in current day AI/ML.

What you're talking about is better explained by prejudice against computers based on past experience, but we're bad at predicting the evolution of computing and our prejudices are lagging.


I might use this in future critical discussions of AI. “It’s not really intelligent.” Yeah, well, neither am I. On a more serious note, it seems obvious to me that technology is incremental, and we are where we are. Given 20 more years of peacetime, we’ll be further along. When VGA 320x200x256 arrived it was dubbed photorealistic. I wonder what HN would have had to say about that.


Yes, that's almost exactly how I start most of these conversations: "Do you know how BAD we are at most things?"

Venkatesh Rao had a great piece on how mediocre we really are recently:

https://www.ribbonfarm.com/2018/04/24/survival-of-the-medioc...


Being able to do many things at a level below what trained humans can isn't what any reasonable person would call superhuman performance. If machine translation could perform at the level of human translators in even one pair of languages (like English-Mandarin), that would be impressive. That would be the standard people apply. But they very clearly can't.


It's a problem with who you're comparing against.

Generally people think superhuman = better than the best humans. I understand this and it's an obvious choice, but it assumes that humans are measured on an objective scale of quality for a task, which is rarely the case. Being on the front line of deploying ML systems, I think it's the wrong way to measure it.

I think superhuman should be considered relative to the competence level of an average person who has an average amount of training on the task. This is because, from the "business decision" level, if I am evaluating between hiring a human with a few months or a year of training and a TensorFlow docker container that is reliably good/bad, then I am going to pick the container every time.

That's what is relevant today - and the container will get better.


> Generally people think superhuman = better than the best humans.

Because that's what it means.

Also, I shudder to think of the prose coming out of a company that decides to "hire" an AI instead of a translator...


Well not explicitly or in any measurable terms [1]. The term 'Superhuman' lacks technical depth in the sense of measurement. So for the purposes of measuring systems we build vs human capability, it's a pretty terrible measure.

[1] http://www.dictionary.com/browse/superhuman


> I imagine we'll probably develop some AI pidgin language that provides more formality that as long as we follow it AI voice interactions become less annoying (but you have to spend a bit of time before to formulate it).

There's been some tentative discussion of using Lojban as a basis for just that!

https://en.wikipedia.org/wiki/Lojban#Lojban_as_a_potential_m...

http://www.goertzel.org/new_research/Loglish.htm


>> I imagine we'll probably develop some AI pidgin language that provides more formality that as long as we follow it AI voice interactions become less annoying (but you have to spend a bit of time before to formulate it).

That's what Roger Schank is advocating against in his article: artificial intelligence that makes humans think like computers, rather than computers that think like humans.

Personally, I think it's a terrible idea and the path to becoming like the Borg: a species of people who "enhanced" themselves until they lost everything that made them people.

Also: people may get stuff wrong a lot of the time (way more than 10%). But, you can always make a human understand what you mean with enough effort. Not so with a "computer" (i.e. an NLP system). Currently, if a computer can't process an utterance, trying to explain it will confuse it even further.


What's bad about it is that it leads to a "market for lemons" for anyone working in the A.I. field in either engineering or sales. People see so much bullshit that they come to the conclusion that it is all bullshit.


I think this is good.

In a world that moves too quickly to have review/ratings companies for everything, you should come in skeptical and be proven wrong.

It's up to the merchant to make a great product; it's up to you to compare and make good decisions.


Sure, but Watson isn’t really a product and IBM isn’t really a merchant here. There’s never going to be a Watson review anywhere because (AFAICT) Watson is just a marketing name for a wide range of technologies, none of which do any of the stuff shown in the TV commercials. That’s the problem.

I don’t even know who those ads are targeting. Anyone who knows anything at all about this stuff will be dismayed at the sheer BS of it all. Everyone else seeing the ads isn’t in a position to steer customers towards IBM for all their AI needs. They really need to kill the whole ad campaign, surely it’s doing IBM more harm than good.


Remember these ads?

https://youtu.be/x7ozaFbqg00


No, it's bad because it drives down the potential market value for good products that could become available in the market because the market expects that all AI is inferior to expectations.


Temporarily. Once past (and possibly during) the "Trough of Disillusionment"[1], products that really do solve problems that people are willing to pay to have solved will do very well.

[1] https://en.wikipedia.org/wiki/Hype_cycle


IBM can bullshit its way through a coming "AI Winter"; startups might not be able to.


How do you "compare" multimillion dollar custom IT installations, before you buy?


By having people who aren't working for the vendor who know things about such installations assess them before they are bought.


The "market for lemons" theory proved that used car dealerships can't exist. And yet they do. People understand that ads are just a hint and a tease, not a precise value proposition.


That isn't the conclusion of the market for lemons. It predicted that the price of used cars would be substantially depressed due to unknown quality, especially when the seller had little reputation. Considering the huge depreciation a new car experiences as soon as it is driven off the lot and that prices are lower when buying from individual sellers rather than used car dealerships, there is plenty of evidence that the theory was correct.


What I find interesting is that free Carfax reports are becoming pretty common, which seems like a mechanism to deal with the problem of a market with only lemons. But I was browsing cars online and started looking for "good" ones, defined as no accidents and regular oil changes at the dealer, and interestingly they were almost non-existent. So in fact, the proposition that cars for sale will be lemons seems to be true even though the compensatory mechanisms exist. It would appear maybe buyers don't utilize information to their advantage, which reminds me of a recent article about how investors don't seem to digest all the information in SEC filings, even though orthodox market theory assumes prices incorporate all public data.


This is not just Watson and IBM. Many, many people in AI make grandiose claims and throw around big words like "Natural Language Understanding" "scene understanding" or "object recognition" etc.

And it is a very old problem, at least from the time of Drew McDermott and "Artificial Intelligence meets Natural Stupidity":

https://homepage.univie.ac.at/nicole.rossmanith/concepts/pap...

From which I quote:

However, in AI, our programs to a great degree are problems rather than solutions. If a researcher tries to write an "understanding" program, it isn't because he has thought of a better way of implementing this well-understood task, but because he thinks he can come closer to writing the _first_ implementation. If he calls the main loop of his program UNDERSTAND, he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.


This is Marketing 101 though. It's easier to sell someone a dream or some emotional state than it is to sell an actual product/thing that you have. I'd give people a little more credit. Adults know what advertisements are and that they're all phony.


I don't disagree, although I'd be a bit more nuanced than that. It is like herbal supplements, you can say how great they make you feel but you have to avoid making medical claims that you can't back up.

I don't think anyone would have flinched at setting the stage like "At IBM we're working on technology so that some day your interaction with a computer can be as simple as sitting down and having a conversation."

Setting expectations you cannot meet is something you try to avoid in Marketing. Because you get angry op eds like the one that kicked off this conversation.


>I don't think anyone would have flinched at setting the stage like "At IBM we're working on technology so that some day your interaction with a computer can be as simple as sitting down and having a conversation."

Like those old AT&T commercials (available on YouTube) where AT&T made all kinds of predictions about the future that seemed crazy in the 1980's, but are reality 35 years later. Video calls, instant electronic toll booth payment, etc...

/ R.I.P, AT&T.


E.g., unlimited data plans where "unlimited" is defined as de-prioritized after X GB per billing month.


I hate T-Mobile for this reason.

Totally crap, especially since T-Mobile runs ads that have a pink background and one word on the screen:

UNLIMITED

No. It's not unlimited. I know because I paid 25 extra dollars for a month for their international plan. Truly unlimited.


A lot of non-technical decision makers don't think this is phony, and they are making life very difficult for engineers in their companies. We will all remember this, though. For the future.


There was a time when I'd introduce people to 'Eliza'-type programs. Those programs stashed phrases typed in by users. When new users typed in a phrase, the programs would parrot back the stashed phrases ... based on really crude algo's, or even random selection.

Nonetheless, I watched people get really worked up about what the computer was 'saying'. Partly because of the sarcastic stuff people would say, partly because of their expectations about 'cybernetic brains'.

Now the 'Cyc'le is back. And people actually working on this really hard problem are not helped by dumb marketing.
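The crude mechanism described above - stash what users type, then parrot stashed phrases back at later users via random selection - can be sketched in a few lines of Python. This is an illustrative toy (class and method names are made up, not from any actual Eliza implementation):

```python
import random

# A crude ELIZA-style responder of the kind described above:
# it remembers phrases users type, then parrots stashed phrases
# back at later users, chosen at random.
class PhraseParrot:
    def __init__(self):
        # Seed the stash with a couple of canned prompts so the
        # very first user still gets a reply.
        self.stash = ["Tell me more.", "Why do you say that?"]

    def respond(self, user_input: str) -> str:
        # Pick a previously stashed phrase (or canned prompt) at random.
        reply = random.choice(self.stash)
        # Remember the new phrase for future "conversations".
        self.stash.append(user_input)
        return reply

bot = PhraseParrot()
print(bot.respond("Computers will never understand us."))
print(bot.respond("That's exactly what I was thinking!"))
```

There is obviously no understanding anywhere in this loop, which is the point: people still got worked up about what the computer was "saying".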


> Not sure how we fix that

How about prosecuting them for false advertising? Seems like a classic case, albeit in a new domain.


We could also classify fake ads using machine learning...


Red Bull doesn't actually give you wings. Ads are metaphors.


No, the Red Bull ad _uses_ a metaphor.

False advertising is claiming to be something you are not, raising false expectations.


You guys couldn't have picked a more perfect example: Red Bull settled a class action lawsuit for false advertising, not because of "gives you wings", but because their advertising falsely suggested scientific support for their product being better than caffeine-only products.

https://www.truthinadvertising.org/red-bull/


One way to fix it is not to make outright false advertisements.


Or at least require cigarette style warning labels:

Warning, this advertisement is a lie. Believing this advertisement may harm your cognitive health.


Warning, this statement is false.


> I believe that most people would understand that SpaceX wasn't already hosting people on Mars.

You overestimate people's understanding of space. Many people thought "The Martian" was a true story like "Apollo 13".


How many is “many” though? I’m assuming slightly more than thought Star Wars was real, and slightly fewer than think Iron Man was real. That’s not ignorance, that’s mental illness.


> Not sure how we fix that.

Easy: just say that all Watson systems have the hardware needed for cognitive computing and full dialog support.


It’s a pretty serious problem if public company executives who choose how to spend investors’ dollars actually believe SpaceX’s famous people are on Mars. Then they pay SpaceX to start carrying goods to Mars to sell to the Martians. SpaceX obliges with multi-million dollar contracts, knowing they don’t have a way to get to Mars but have some rockets that can put satellites into orbit (not the same).

We’d all see the problem with such a situation. People with a 6th grade knowledge on space travel understand that the commercials were BS marketing and we should fire the executives for not knowing the same or doing their due diligence. We’d be mad at SpaceX for taking advantage of companies and failing to reach Mars as promised.


In my limited understanding, isn't this just the state of A.I. in general? I don't know of anyone trying to solve the general intelligence problem. Everyone's just finding formula for specific applications and using M.L. to curve fit.


Creating an impression of a capability without legally promising it is the essence of marketing, especially in the corporate IT consulting world, which is 90% garbage and hugely expensive.


To be fair, you can kinda talk to Google assistant in a "question response" type capacity and it does pretty well.


You can also kinda talk to ELIZA.

Google Assistant, Apple Siri and Amazon Echo are all quite fancy natural language parsers. After parsing, it's just a group of "dumb" (wo)man-made widgets.


That's not entirely true. Google Search will answer a question with an automatically extracted excerpt from a web page. Watson did something similar with its Jeopardy! system.


Is that not parsing and keyword searching? Once again there's no "cognitive computing" going on.

Which is a really weird and nebulous thing to define, by the way. I think we aren't going to have super convincing cognitive computing until we have something approaching AGI. Which of course is waaay different from the AI and machine learning that is popular today, despite most laypeople conflating any mention of AI with AGI. Of course, when IBM is making an ad, they are largely aiming at laypeople.


Only in some cases, supposedly as a fallback.


Search and command interfaces are doable (within reasonable expectations); dialogs just aren't, without eye-popping engineering challenges (like 100k branches hand-engineered onto trees, plus lots of deep learning).

Also, we (science) know shockingly little about real dialogs and their challenges, as far as I can find out - but I am not a linguist and am angling here for some killer references!
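The "tightly constrained decision tree" kind of dialog system mentioned at the top of the thread (voice mail, customer support prompts) can be sketched as a simple state table. This is a toy illustration, not any real product's design; the states and keywords are invented:

```python
# A hand-engineered decision-tree "dialog" of the kind discussed
# above (voice-mail / support prompts). Anything off-script just
# leaves you stuck at the same node - there is no recovery, which
# is exactly the brittleness being criticized.
TREE = {
    "root":    {"billing": "billing", "support": "support"},
    "billing": {"balance": "done", "agent": "done"},
    "support": {"reset": "done", "agent": "done"},
}

def step(state: str, utterance: str) -> str:
    # Exact keyword match only; any other utterance re-prompts
    # by returning the unchanged state.
    return TREE.get(state, {}).get(utterance.strip().lower(), state)

state = step("root", "billing")                  # on-script: advances
state = step(state, "umm, how much do I owe?")   # off-script: stuck at "billing"
```

Every transition here was placed by hand; scaling this to open conversation is where the engineering effort explodes.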


False advertising laws?


Kinda how Tesla advertises autopilot as a self driving car that's safer than human drivers?

> Full Self-Driving Hardware on All Cars

> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.

https://www.tesla.com/autopilot/


Am I extreme in feeling someone should go to jail for that? It was bad enough when they were originally advertising it, but now that they're defending it even after people died...ugh.

https://www.theverge.com/2018/3/28/17172178/tesla-model-x-cr...


Musk's statements, and Tesla in general, have gone toward cultivating an impression that Tesla almost has self-driving. An impression that, okay, it's not perfect, but it's good enough that you can act like it is most of the time. This impression is far enough from the truth that it's fundamentally dishonest, particularly when the response from Tesla after incidents amounts to "the driver is completely at fault here, we don't need to change anything" (in other words, their marketing says it's basically self-driving, their legal defense says self-driving is only at your own risk).

In the moralistic sense, yes, Tesla needs to be reprimanded for its actions. However, lying in this manner is often well-protected by law, if you manage to include enough legalese disclaimers somewhere, and I suspect Tesla has enough lawyers to make sure that legalese is sufficient to disclaim all liability.


While I don't necessarily agree with how they've advertised it I think they are legally safe. All Model 3's DO have the HARDWARE to support full-self driving, even if the software is not there. And regarding the software, it has been shown to be about 40% safer than humans where it is used, which is what they've claimed.


How do we know the hardware will adequately support full self-driving if it doesn't actually exist yet?

That seems overly optimistic, if not an outright fabrication.


Because humans can do driving with comparable (worse, actually) hardware. Superhuman reflexes mean that software with superhuman driving abilities can exist.


Humans also have human brains, which are much more important to driving than eyes.


This is the part that can be emulated in software. All you have to prove is that what ever platform/language you're using for the programming is Turing Complete, which is the case for almost all of the most popular languages.


So can I run Crysis on my TI calculator? I'm sure whatever platform/language is running on it is Turing Complete. I think you missed the point that the brain is also hardware.


Good point, I'm assuming their Nvidia GPUs have enough power/memory to do the processing they want to do.


> All Model 3's DO have the HARDWARE to support full-self driving, even if the software is not there.

Please provide evidence.

Given that the only full-self driving system out there is Waymo's, which uses completely different hardware than Tesla's, it is impossible to back your claim, unless you develop a fully-self-driving system on top of Tesla's hardware.

So until that is done, your claim is false, no matter how many caps you use.


Sure, they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving. (You could also argue humans use their hearing; well, the Teslas have mics if they really wanted to use that.)


> they have cameras to see and a computer to do rigorous processing. We can compare this to a human, who uses their vision to perceive the outside world while driving

My phone also has cameras and a processor. Are you implying that my phone is as equipped to drive a car as a human?

A monkey has eyes, a brain, arms and legs. Are you implying that a monkey is as equipped to drive a car as a human?


I'd say the monkey has most of the hardware but not the software, analogously. As for the phone, right I was assuming the cars have enough processing power and memory to do the necessary processing. But granted they haven't exemplified full self-driving on their current hardware, I will have to concede that we do not know how much processing and power are needed to be truly self-driving. For all we know, that last 10% of self-driving capability may require an exponential increase in the amount of processing required.


If we ask what the reasonable expectation is after an advertisement tells a customer that the capability "exceeds human safety", I would say that the average customer thinks of a fully automated system.

This couldn't be further from the truth as automated vehicles still suffer from edge cases (remember the lethal accident involving a concrete pillar) where humans can easily make better decisions.

A system that is advertised as superior to human judgement ought to strictly improve on human performance. Nobody expects that to mean that the car drives perfectly 95% of the time but accidentally kills you in a weird situation. This "idiot savant" characteristic of ML algorithms is what makes them still dangerous in day-to-day situations.


Yes I totally agree, I think there should be some regulation regarding this area. At least in terms of being clear when advertising. I think it's ok to deploy such a system where in some/most cases the AI will help, but it needs to be made apparent that it can and will fail in some seemingly simple cases.


That’s probably a gambit to avoid being easily sued, but it’s a really bad-faith attempt to mislead. Most people are going to read that and assume that a Tesla is self-driving, as evidenced by the multiple accidents caused by someone trusting the car too much. Until that’s real they shouldn’t be allowed to advertise something which hasn’t shipped.


So...one person died?

Do you know how many people die in normal, non-self-driving cars? It's never possible to get 100% accuracy.


I hand you a gun and say "you can point this gun at anyone, pull the trigger, and it won't harm them." You point the gun at a crowd, pull the trigger, it goes off and kills someone.

Who do you blame?

No one is saying guns, or cars, aren't dangerous. However this kind of false advertising leads people to use cars in dangerous ways while believing they are safe because they've been lied to.

(Side note, there is some fine print you never read that says the safe-gun technology only works when you point the gun at a single person and doesn't work on crowds.)


Safer than human hardware does not imply safe.

It means this will kill people but fewer people than humans would and they have actual data that backs this assertion up.

The benchmark is not an alert, cautious, and sober driver, because that's often not how actual people drive. So right now it's sometimes safer to drive yourself, other times it really is safer to use Autopilot; net result, fewer people die.


But if you do happen to be an alert, cautious, and sober driver, it'd be unfortunate if Tesla's marketing led you to overly rely on Autopilot.

Ideally Tesla's marketing would say it's safer than drunk, sleepy, and reckless drivers, though it might not sell as many Autopilots.


Autopilot should not be accessible to drivers until their driving habits have been assessed and baselined. If autopilot is known to drive more safely than the individual human driver in environments X, Y, and Z then it should be made available in those environments, if not encouraged. That might not make an easy sell since a major value prop is inaccessible to many of the people who really want to use it, but it's the most reasonable, safest path.

I also imagine that cars will learn from both human and each others' driving patterns over time, which (under effective guidance) should enable an enormous improvement in a relatively short period of time.


Drunk, sleepy and reckless drivers come from an age bracket not normally buying premium sedans.


I can grab the data, but the surprising thing about fatal crashes is the typical circumstance. It's an experienced driver driving during the day, in good weather, on a highway, in a sedan-style car, sober. There are two ways to interpret this data. The first is to assume that crashes are semi-randomly distributed, so since this is probably the most typical driving condition, it naturally follows that that's where we'd expect to see the most fatalities.

However, I take the other interpretation. I don't think crashes are randomly distributed. The fact that the scenario is absolutely perfect is really the problem, because of humans. All a crash takes is a second of attention lapse at a really bad time. And in such perfect circumstances we get bored and take road safety for granted, as opposed to driving at night or navigating around some tricky curves. And that's a perfect scenario to end up getting yourself killed in. And more importantly here, that's also the absolutely perfect scenario for self-driving vehicles, which will drive under such conditions far better than humans simply because it's extremely trivial, and they will never suffer from boredom or lack of attention.


Their claim is about hardware, not software.


Just about every expert agrees though that their hardware is NOT capable without LiDAR (which is probably why they churn through people running their Autopilot program), although proving that in court is a whole thing.


That is not a distinction most people understand.


Or maybe there is no basis for making such a claim.

Or well, I just put four cameras on my car and inserted their USB wires into an orange. I declare that this is enough for FSD now. I don't have to prove anything for such an absurd statement?


And how do you know that the present hardware is enough to deliver full self-driving capability? I don't think anyone knows what precise hardware is required at this point as it wasn't achieved yet.

So, do you think it may be possible that people do understand the distinction and are STILL not convinced?


Well, humans have 2 cameras that they are able to rotate & a brain and that seems to be sufficient for them to drive in terms of hardware... :-)


I, and the law would blame the person. They should have no particular reason to believe your claim and absolutely no reason to perform a potentially lethal experiment on people based on your claim.

You may be breaking the law as well, depending on the gun laws in your area, but as I understand it (IANAL) the manslaughter charge falls entirely on the shooter.


I would blame the person, because the best case scenario of shooting an allegedly harmless gun into a crowd is equivalent in effect to the worst case scenario of doing nothing in the first place.


I noticed this in the news... I can't believe someone actually tried this.

Tesla owner faces 18-month ban for leaving the driver's seat https://www.engadget.com/2018/04/29/tesla-owner-faces-road-b...


One person died from that accident. There are now at least 4 deaths where Tesla's autopilot was involved (most of the deaths in China don't get much publicity, and I wouldn't be surprised if there are more). And the statistics do not back up your claim that Tesla is safer (despite their attempts to spin it that way).


This seems to contradict your assertion: http://www.businessinsider.com/tesla-autopilot-cuts-crash-ra...


No, the NHTSA report says nothing about how Tesla's autopilot compares to human driving. Here are two comments I made last week about this:

In that study there are two buckets, one which is total Tesla miles in TACC-enabled cars and then after the update total Tesla miles in cars with TACC + Autosteer and they calculated on airbag deployments. Human driven miles are going to dominate both of those buckets and there's a reason the NHTSA report makes zero claims about Tesla's safety relative to human drivers. It's totally outside the scope of the study. Then add in that some researchers who are skeptical of the methodology and have been asking for the raw data from NHTSA/Tesla have yet to receive it.

https://news.ycombinator.com/item?id=16932350

Autosteer, however, is relatively unique to Tesla. That’s what makes singling out Autosteer as the source of a 40 percent drop so curious. Forward collision warning and automatic emergency braking were introduced just months before the introduction of Autosteer in October 2015. A previous IIHS study shows that both the collision warning and auto emergency braking can deliver a similar reduction in crashes.

https://news.ycombinator.com/item?id=16932406


I'm not sure what "safer" means. Not sure what posting your insistence in other threads has to do with this. 10x or 100x the number of deaths would be small price to pay to get everyone in autonomous vehicles. It's air travel, all over again and there's a price to pay.


>I'm not sure what "safer" means.

You can define it however you want. By any common sense definition of safety Tesla has not proven that their autopilot is 'safer' than a human driver.

>10x or 100x the number of deaths would be small price to pay to get everyone in autonomous vehicles.

This supposes a couple things. Mainly that autonomous vehicles will become safer than human drivers, that you know roughly how many humans will have to die to achieve that and that those humans have to die to achieve it. Those are all unknown at this point and even if you disagree about the first one (which I expect you might) you still have to grant me two and three.


Ignoring Tesla, self driving cars will almost definitely be safer. People die in cars constantly. People don't have to die if companies don't rush the tech to market like Tesla plans to. To be fair, I blame the drivers, but I still think the aggressive marketing pretty much guaranteed someone would be stupid like that, so Tesla should share some of the blame as well.


I don't think we are opposed to autonomous vehicles.

Can Autopilot not run passively and prevent human errors? Why do the two options presented seem to be only "only a human behind the wheel, not assisted by even AEB" and "full Autopilot with no human in the car at all"?


Many people also die when they fall from the stairs.

But if you build faulty staircases that make people slip, they can still put you in jail or sue you.

Same if you push them down the stairs.


Self-driving cars or Tesla autopilot?


> Do you know how many people die in normal, non-self-driving cars

False equivalency. You need to at least compare per mile, or better yet by driver demographic, since most Tesla drivers are in the high-income and maybe the safety-conscious bracket.


If those people die because the car fails catastrophically, and predictably, rather than the usual reasons it would be news. This is not about 100% or “the perfect is the enemy of the good” or any bullshit Utopianism. This is about a company marketing a flawed product deceptively for money, while letting useful idiots cover their asses with dreams of level 5 automation that aren’t even close to the horizon.


US advertising laws are very lax, just like data collection laws. You can getaway with saying a lot of shit in the name of advertising.

Deaths caused by Uber and Tesla carry few repercussions for them other than bad PR.

The US govt celebrates removing regulations which benefit its corporations at the cost of its citizens


I think if anything you’re overly conservative for thinking someone, rather than many people, needs jail time for it. I would look up the line of people who made and supported the decision for that kind of fraudulent marketing and drag them all into court.


Jail? No.

Financial ruin for the company that was willing to put that BS below its letterhead? Yeah, sure, in proportion to the harm caused. That said, human life isn't sacred. It and everything else gets traded for dollars all the time. A death or three shouldn't cripple a big company like Tesla unless they were playing so fast and loose that there are significant punitive damages.

In a large company like Tesla it shouldn't be marketing's job to restrain itself. That isn't safe at that scale. There should be someone, or a group, whose job is to prevent marketing from getting ahead of reality, just as it's security's job to spend all day playing whack-a-mole with the stupid ideas devs (especially the web ones) dream up. Efficiently mediating conflicting interests like that is what the corporate control structure is for.

People using your product in accordance with your marketing should be treated almost the same as using it in accordance with your documentation. While "people dying while using the product in accordance with TFM" is not treated as strict liability, it's pushing awfully close.

I see it as a simple case of the company owning up to its actions. It failed to manage itself properly, marketing got ahead of reality. You screw up and someone gets hurt then you pay up.


Are you saying that you see human life as something that can be bought and sold?

It's one thing to talk about punitive damages and liability; these are factual mechanisms of our legal system. But the fact that damages can be, and regularly are, paid does not imply that there is some socially acceptable, let alone codified, price for a human life. And we should hope, for our own sake, that there never is.

I agree that marketing should not be allowed to let their imagination run wild to the detriment of the company.

On the liability bit, IANAL, but that's likely to differ between industries. Some sectors like aviation are highly regulated and require certification of the aircraft; the airplane flight manual is tied to that serial number and is expected to be correct and free of gross errors for normal operation. So liability can vary. Are you suggesting from experience that there is no liability in Tesla's case, taking their industry's context into account? I don't know enough about their industry to judge; just looking for clarification.


"That said, human life isn't sacred. It and everything else gets trades for dollars all the time."

Wow. Uhm, slavery is illegal if you haven't heard. We made it illegal because life is sacred.


OP is probably referring to the fact that in wrongful death suits, society has put a very tangible financial number on the value of human life. This makes it possible for corporations to trade off profit against liability, with the potential that someone could profit enough to justify risking others' lives.

Punitive damages go part way to help prevent this, but not far enough to guarantee that it never happens.

Had society truly believed life to be sacred, I suspect we'd have very different business practices, and penalties not limited to financial ruin.


Well, unfortunately we also believe that corporations are sacred, so when bad things happen we shake our fists and collect a couple dollars. But the guilty corporation is never put to death. (Well, rarely ever..)


It's not that clear-cut. Yes, killing and maiming are bad, but those things happen at large scale (and will for the foreseeable future), and you have to be able to have a reasonable discussion about the trade-offs. "Well, we can't guarantee we won't kill anyone in an edge case, so let's all just go home" isn't an option.

You can build a highway overpass with a center support on the median for X, and there will be a small chance of someone crashing into it and getting killed. You could design one without a support on the median, but it will cost Y (substantially more than X). Now scale that decision to all overpasses and you've got a substantial body count. At the end of the day it's a trade-off between lives/injuries and dollars.


Agreed that there are trade-offs made, of course. But this society spends a ton of time trying to prevent all kinds of death. That's because life is sacred.

It seems odd to argue otherwise. Of course we can't stop doing things, but that doesn't mean we don't try really hard to avoid killing people.


> try really hard to avoid killing people

Yes, and both Elon Musk as an individual and Tesla Motors as an organization agree. How they approach that idea is somewhat different from what we're used to though.

Their basic assertions are (in my words):

1. Vision- and radar-based technology, along with current-generation GPUs and related hardware and sufficiently developed software, will be able to make cars 10x safer.

2. How quickly total deaths are reduced is tied directly to how quickly and widely such technology is rolled out and used.

3. Running in 'shadow' mode is a good source of data collection to inform improvements in the software.

4. Having the software/hardware actually control cars is an even better source of data collection to accelerate development.

5. There is additional, incremental risk created when the software/hardware is used in an early state.

6. This is key: the total risk over time is lessened with fast, aggressive rollouts of incomplete software and hardware, because it will allow a larger group of people to have access to more robust, safer software sooner than would otherwise be possible.

That last point is the balance: is the small additional risk Tesla is subjecting early participants to outweighed by how much more quickly the collected data will allow Tesla to produce a more complete safety solution?

We don't know for sure yet, but I think the odds are pretty good that pushing hard now will produce more total safety over time.

> life is sacred

This is my background as well, and it's an opinion I personally hold.

At the same time, larger decisions, made by society, by individuals, and by companies, must put some sort of value on life. And different values on different lives. Talking about how much a life is worth is a taboo topic, but it's something that is considered, consciously or otherwise, all day, every day, by many people, myself included.

Most every big company, Tesla Motors included, makes decisions based on these calculations all the time. Being a 'different kind of company' in many ways, Tesla makes these calculations somewhat differently.


That's a pretty cynical calculation to make. And no, we don't typically accept untested additional risk in the name of saving untold numbers later. We test first. There's a reason drugs are tested on animals first, then in trials, then made broadly available, and still with scrutiny and standards. This is well-trod philosophical ground, but we seem to have accepted that we don't kill a few to save others. We don't fly with untested jet engines. We don't even sell cars without crashing a few to test them. The other companies working on self-driving technology have been in testing mode. They have not skipped a step and headed straight for broad availability.

Why then does Tesla get a pass? There's no evidence it's actually safer, and no evidence that the company is truthful. We don't accept it when a pharmaceutical company says, "No, it's good. Trust us." That would be crazy. We should not accept Tesla's assurances on blind faith simply because they have better marketing and a questionable ethical standard.

http://driving.ca/tesla/model-s/auto-news/news/iihs-study-sh...


Are you for real? Statistics prove Tesla is right on this. I'm not saying human life has no value, and I do empathise with the people who lost someone in a Tesla crash, but we can look at the percentages. The rate IS lower than that of human error, and yes, that includes DUI and every other kind of human error.

Tesla is safer for the community based on the statistical data, and safer than a human driving. I'm not saying it will never crash, just that it will crash less often than a human.

Maybe a better comparison is the autopilot in airplanes. Sometimes it fails, so we might as well be angry at the airlines for advertising it. But a pilot would make more mistakes than the autopilot under the current workload.

Side note: if you are going to downvote you can comment on why. Negative feedback is very useful and helps build better argumentative skills.


> It IS lower compared to human error,

What is lower, and compared to what? I am tired of these vague, misleading statements from Elon Musk, Tesla, and their supporters.

They claimed Autopilot + driver is 40% safer than a driver alone, and that includes automatic emergency braking. Is it a surprise that AEB reduces accidents?

Why is that used as proof that Tesla's software is FSD? FSD implies the car is capable of driving without a driver.

Is Autopilot alone safer than a driver? There is no proof of that.

What you should really compare is Autopilot alone against a driver assisted by Autopilot. So, is Autopilot better than driver + Autopilot? I can assure you it's not; the driver still adds value.


It's crazy how reasonable ycombinator is compared to reddit.

Reddit is dominated by tesla's marketing team. Upvoting, downvoting, and commenting.

Here you get a real discussion regarding Elon's outlandish promises.


>It IS lower compared to human error

One of the big issues here is the concept of "human error," as though a statistical average were some monolith that applies to all humans. The truth is that many people lack the basic skills and attributes needed to drive safely (concentration, attention to detail, the ability to multitask effectively). That's before even counting the millions of elderly people, unable to tie their own shoes or operate a self-checkout kiosk, who are out on the road making things more dangerous for everyone (including themselves). Performing virtually any task better than "the average human" is a very, very low bar to clear.

The real question is: what standard should we use to determine who (or what, in the case of self-driving cars) is allowed to pilot a multi-ton vehicle on public streets? The truth is that we favor convenience over safety in many facets of society (and I'm not arguing that this is a bad thing, just that it is an often-ignored fact), and to have an intelligent debate about self-driving cars we ought to recognize and acknowledge this.


Claiming that the hardware is ready and we're just waiting on the software is at best a vacuous claim and at worst a fraudulent one. I can hammer two webcams onto a rotating plank and claim that's "full self-driving capability." Just look at the eyes in my head! Now I just need to iron out some kinks in the software...


That is a much better example. With the twist that believing it can get you killed.


"Self-driving" is not the same as "Never going to crash."

People just assume that if a computer does something, it's going to be perfect. It isn't.

Most humans can't even "self-walk" without occasionally stumbling and falling. Does that make human locomotion dangerous and false?


It's not just them. A lot of people think that because NNs can do object identification that self driving cars are almost here. It's similar to thinking that word recognition and pattern matching are going to give us conversational AI. Things like that are prerequisites for AI, but they are hardly the whole thing. Companies are spending billions on this stuff and I hope they're just hedging in case it pans out rather than planning their future around it.


Many people seem to think the arrival of fully autonomous cars is right around the corner; when I say it will likely take at least 10 years, and probably more like 20, before there are significant numbers of them on the road, they are very surprised.

I admit I recently upped that number from "5-10 years minimum" after seeing the media reaction to recent driver-assist systems related accidents — it became clear to me that the old adage "the autonomous car does not need to be perfect, it only needs to be better than an {average, good, skilled} human driver" is simply wrong, because autonomous car brand X will be regarded - and judged - as if all X cars are one driver. Thus, these cars actually need to be much better than humans in essentially any traffic situation, because it simply won't be acceptable for "autonomous cars of brand X" to kill a few thousand people each year in every country. People would equate this to a single human driver mowing down thousands.

This effect is not necessarily bad. I don't have inside information, but it's likely that manufacturers invested in this technology have observed the same thing and drawn a similar conclusion.

The question now is whether the car makers will make autonomous cars that good, or instead invest in massive PR campaigns to shift public perception toward "each autonomous car is like an individual driver and doesn't need to do much better than that".

We've seen the latter before. Jaywalking and avenues are just two examples.


Gill Pratt, the head of auto-car R&D at Toyota, has been agreeing with you since he joined in 2015, essentially saying that auto-cars must be better than very good human drivers in all parts of driving, even the parts that occur rarely. But mastery of those rare events will be hardest to acquire, since few useful examples are available to train from. Those last few yards to the goal will thus add years to the development time in subtle ways that will be maddeningly invisible to the public -- until seemingly clueless crashes arise on public roads, perhaps like the one last month that terminated all of Toyota's auto-car testing on public roads until further notice.


Well, jaywalking is only illegal in 10 countries, so PR campaigns won't be enough for that. The benefits of self-driving cars are so massive that they will trump anything like that - but we are still a long way away from practical self-driving cars.


Self-driving cars are not just almost there; they are already a reality. Waymo has been testing self-driving cars hailing passengers without a safety driver since last October, and will soon start a commercial service.


Unfortunately people take that marketing literally https://www.independent.co.uk/news/uk/crime/tesla-autopilot-...


"...have the hardware needed for...". Nothing about the full capability. So no actual claim about the capability, just that the hardware is in place.

Still a bit deceptive, to me. Reminds me of "...electroplated in 16 carat gold...".


But it’s technically correct: the hardware is present. The software isn’t there yet, that’s true. So is this a “lie by omission”? Reading the whole page, there’s this quote: “It is not possible to know exactly when each element of the functionality described above will be available.” What else needs to be said?

At some point, critical thinking is required to navigate the world. The customer should ask “and when will the software catch up?” And I’ll concede that the overwhelming majority of the population doesn’t think like that. Should Tesla update the site with statements to confirm the negative cases? e.g. “Our software can’t yet do X, Y, Z, etc”


Since we have yet to build a system that is capable of "full self-driving ... at a safety level substantially greater than that of a human driver" I'm not even sure they can make the claim that their existing hardware supports this. They might have high hopes, but there's no way to know what combination of hardware and software will be necessary until the problem is actually solved.


Until Tesla can provide a real-world demonstration of an existing vehicle running at L5 automation after only a software upgrade, it's unproven whether the required hardware is present.

At some point, they will have to provide said demo or they will be facing a class action lawsuit that will bankrupt the company.


I don't even think a lot of people would even consider the software side when reading that sentence. To the layperson, "Full self-driving hardware on all cars" sounds exactly like "These cars are self-driving".


> hardware is present

that's false too. how would one prove that the currently shipped hardware is sufficient for autopilot? some hardware is present for sure, but is it all that's needed, as they claim? they cannot know, since they don't have an autopilot using such hardware that achieves the stated goal. they believe it is sufficient, which is not quite the same as the claim, and it is also being vocally challenged by experts who criticize the decision to omit full lidar coverage


I would argue that even if something is technically correct, companies should get in trouble when it is deliberately misleading. Nobody has the time or energy to learn about every single thing they might buy.

Also, how can we even know the hardware is sufficient unless said software exists and shows that what we think is possible is indeed possible?


> But it’s technically correct: the hardware is present. The software isn’t there yet, that’s true. So is this a “lie by omission”?

Some hardware is present, but that it is sufficient for full self-driving is simple speculation for which Tesla has no reasonable basis. The only way for them to know it is sufficient is to have self-driving implemented on it, which they obviously don't.

It's at best empty puffery and at worst outright fraud, but in either case it's a knowingly unjustified claim, even if not knowingly false.


It's not a lie by omission, it's a lie. If you don't have the software then you don't know whether or not you have the hardware. It's only proven when it works.


Hey, I plugged 2 cameras into an orange. That's enough hardware for self-driving; I can make that claim, right?


Hardware. That’s the keyword, and it might very well be true. Tesla doesn’t claim their software is capable of full self-driving yet and in fact the system will scream at you and stop the car if you take your hands off the steering wheel for too long.


It's deliberately deceptive, and I'm real tired of tech people tripping over each other to defend it. Technically correct? Sure. But deliberately phrased to obscure the fact that this is a hypothetical future capability that the product does not possess today. And please spare me your "well actually, the showroom attendants will clarify exactly what it means" -- they are framing this in a misleading way to generate public interest.


So how does anyone actually know what the hardware is capable of if the software is not ready? There is no way to verify that hardware is ready for anything on its own.


Who is being deceived? Non-Tesla owners?

Or are you suggesting that there are Tesla owners who are (a) sitting in their driveway wondering why their car isn't driving them to work or (b) returning their cars upon finding out many of the features on the Autopilot page require regulatory approval?


https://www.independent.co.uk/news/uk/crime/tesla-autopilot-...

Tesla driver caught turning on autopilot and leaving driver’s seat on motorway Culprit admits he knew what he had done was ‘silly’ but that the car was capable of something ‘amazing’


It's exceedingly clear to me that they're talking about hardware that can support future capabilities -- additional features via software upgrades is something they talk about a lot. And they're very explicit about what their Autopilot system does and doesn't do on their site. There's basically no situation in which someone is buying a $70,000+ car without knowing how the Autopilot works.

It's really not that deceptive. People would have to be a lot dumber than they actually are to be confused about it.


> It's really not that deceptive.

I'd say that if they're claiming to be able to deliver full self-driving autonomy to existing cars via purely a SOFTWARE upgrade, then that's pretty deceptive, since I am pretty sure not even Tesla knows exactly what hardware might be required for such capability. They'll know when they build it, but not before.


That ad is just deliberately deceptive. Not outright dishonest, but misleading, and quite some work has clearly gone into the wording, because it's nothing anyone would ever say in a serious manner. Hardware alone has nothing to do with it, and anyone who's done any relevant bit of engineering understands that. If I shove four DSP boards and a laptop in my trunk, then hook them to the CAN bus and to four LIDAR sensors, my car would also have hardware that's fully capable of safe self-driving.

That's one level past dishonest. Honest advertising would just be "none of our cars have full self-driving capability at a safety level substantially greater than that of a human driver".

But this is on another level. Have they ever actually run software that does, indeed, have this capability? Or do they have a detailed enough specification of that software to verify that it will, indeed, run on that hardware once it's written?

It's probably a safe bet that they haven't, and they don't, and in fact there is no way to verify that the hardware is in fact capable of running such software. This is likely a good engineering bet (i.e. the hardware is probably enough to support their current software approach, and they'll hit the software's limitations before they hit the hardware's), but advertising engineering bets as safety-related selling points is really irresponsible. Bet with your own money all you want, but not with people's lives.


I know plenty of people who couldn't tell you the difference between hardware and software.


Watson might not be able to see what Tesla is trying to insinuate here...


And when the software can self-drive, will it then be safe to use the car the way we all want?

Who's to know if there isn't a third key component necessary for that, behind yet another technical term?


That is a laughably dumb excuse.


What about "safety level substantially greater than that of a human driver" being the keyword?

Isn't Tesla's Autopilot doing better than humans over the same number of miles? We just hear about it every time a Tesla crashes. If we heard about every human-driven car accident, it would be overwhelming.

edit:

from this article: https://electrek.co/2018/03/30/tesla-autopilot-fatal-crash-d...

In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.
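For what it's worth, the 3.7x figure is just the ratio of the two miles-per-fatality numbers quoted above. A quick sanity check (note that the two denominators cover different vehicle fleets, so this is the article's arithmetic, not a controlled comparison):

```python
# Sanity-check the quoted 3.7x figure from the article's own numbers:
# miles driven per fatality in each population.
miles_per_fatality_us_fleet = 86_000_000        # all US vehicles
miles_per_fatality_autopilot_hw = 320_000_000   # Teslas with Autopilot hardware

ratio = miles_per_fatality_autopilot_hw / miles_per_fatality_us_fleet
print(f"{ratio:.1f}x fewer fatalities per mile")  # -> 3.7x fewer fatalities per mile
```

The arithmetic checks out; whether the two denominators are comparable is the real question.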


The numbers are wrong, since they compare expensive new cars against everything else (old cars with no safety features, bikes, buses). If you compare against similar cars, you get numbers that are not in Tesla's favor. So try not to spread these false numbers; also, the Tesla numbers are not open to examination by third parties.


Yep, as shown by the stats in the latter half of this article: https://www.thedailybeast.com/how-tesla-and-elon-musk-exagge...

Disclosure: I am one of the co-authors


Why is "applied physicist" in quotes? Elon has an Ivy League physics degree, was accepted at Stanford in applied physics, and has clearly applied a knowledge of physics at numerous companies.


Not to mention that

every 320 million miles in vehicles equipped with Autopilot hardware.

does not mean 320 million miles with autopilot controlling the vehicle. Who knows how many of those miles are human driven?


This is again deceptive. They are comparing against total miles driven by Tesla vehicles with Autopilot hardware, not necessarily in Autopilot mode.

Since Teslas are often owned by people of better economic means, their statistics are not representative of all vehicles. The average vehicular fatality rate for this demographic and vehicle class could well be below that of Teslas with Autopilot.
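A toy illustration of that confounding effect (the numbers here are entirely made up for the sake of argument, except the two miles-per-fatality figures quoted upthread): suppose drivers of expensive new cars already crash at half the fleet-wide rate, with no driver assistance at all. The apparent advantage shrinks a lot once you pick the right baseline.

```python
# Hypothetical illustration of base-rate confounding in the per-mile
# comparison. Only the 86M and 320M figures come from the thread; the
# "half the baseline" assumption for luxury-car drivers is invented.
baseline_rate = 1 / 86e6           # fatalities per mile, whole US fleet
luxury_rate   = baseline_rate / 2  # assumed: similar drivers/cars, no Autopilot

tesla_quoted  = 1 / 320e6          # quoted rate for Autopilot-hardware Teslas

print(f"vs whole fleet:  {baseline_rate / tesla_quoted:.1f}x better")   # -> 3.7x better
print(f"vs luxury peers: {luxury_rate / tesla_quoted:.1f}x better")    # -> 1.9x better
```

Same Tesla numbers, very different headline, which is exactly the objection being made here.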


When the Autopilot software is disabled, what about the Autopilot hardware makes the car any safer?

What is the accident statistic for Teslas not equipped with Autopilot hardware?

If Autopilot-enabled cars are so much safer than human drivers, could they pass a written and road driver's license test?


We fix that by building conversational AI ;)


The most ingenious trick the IBM marketing department pulled was getting non-technical (and, judging by this thread, probably even technical) people to think that Watson is some kind of singular thing: a single big neural network with different APIs on it, or something. I honestly think that's what most people think Watson refers to.

Watson is like Google Cloud Platform. It’s just a name for a platform with a bunch of technologies.

E.g. Watson Natural Language Understanding was previously AlchemyLanguage. It was just rebranded.

It’s very clever though, I’ll give them that. Use a human name so it has all the anthropomorphic connotations and let people think it’s some kind of AI learning things.


I'm not even convinced Watson is a platform. My impression is that it's just a consulting division of the company that deploys teams to build solutions that are in some way related to AI, with each solution or implementation potentially being completely unique from the ground up. Perhaps someone from IBM can correct me though.


I'm currently sitting in a meeting about implementing the Watson Enterprise Search product in my company and that is more or less the impression I've gotten. They sell it as a platform that is easy to customize and then once you're in they bill you tons of hours to help you because the system is indecipherable and poorly documented.


And that, folks, is enterprise software services/consulting in a nutshell!!


They sell it as a platform that is easy to customize and then once you're in they bill you tons of hours to help you because the system is indecipherable and poorly documented.

So pretty much like any major enterprise system from the likes of IBM, SAP, Oracle, ...


James we can all see you with your phone under the table


Sounds like every IBM product: WebSphere, Tivoli, WSSR, RAD, fuck even AIX. Many of those can be replaced with open source tools at a fraction of the cost and at a huge increase in performance.


Customer collaboration at its finest. Documentation checks out too.


How did the meeting go?


Watson is a brand name; specifically, it's the machine learning brand name. Watson Developer Cloud is the product suite: a set of pre-trained classical, machine learning, and deep learning based APIs for a variety of tasks:

- NLU (UIMA) text identification
- NLC (fuzzy string matching)
- Visual Recognition
- Tone Analysis (VADER)
- Discovery (document database + NLU + knowledge graph)
- Speech (STT/TTS)
- Text Translation (literal, not semantic)
- Assistant (a conversational state engine with an embedded linguistic neural net)

We're ahead in some aspects and a bit behind in others. There is also a generic Machine Learning service that lets you train classical or deep learning algorithms and push them to a REST endpoint for production use. Ultimately, the "Watson" from Jeopardy was sliced up and its pieces stuffed into various products, and anything with a smattering of AI/ML gets the Watson brand on it. I personally hate the Watson commercials, because people who don't know the subject come away thinking Watson is a singular sentient entity. Those who do know AI/ML know we have the same general tech as everyone else. One advantage we do have is petabytes of training data and expertise in just about every line of business on the planet.


I only know about the NLP stuff specifically, e.g. Natural Language Understanding (AlchemyLanguage), Natural Language Classification (just a multi-label text classifier), and Watson Knowledge Studio, which basically lets you create your own named entity recognition classifier (NERC) and also supports relations and co-reference resolution; you manually hand-annotate examples through a web UI.

So by platform I mean: say you train a NERC model using Watson Knowledge Studio. Obviously this model has to be "deployed" somewhere so you can call it via an API. IBM hosts it for you and bills you per API call. Anyone can create their own entity type system and manually annotate a training dataset, so it's definitely a re-usable platform; you don't need to pay for any IBM consultancy to use it. But I found that the NLP offerings have many problems, and the documentation alone is not enough to resolve all of them. So eventually IBM will just tell your employer you're stupid and that's why it's not working as it should, and that you should pay IBM to come in.

But make no mistake, these are all just standard machine learning tools that have been "packaged" so end users can use them through a web front end. They are in no way getting input from any AI/neural network/database/whatever-you-want-to-call-it thing called "Watson".

I personally think it's disingenuous because when people hear Watson they think Jeopardy and they think that somehow that technology is involved when they use any of the Watson.* products.


> I personally think it's disingenuous because when people hear Watson they think Jeopardy and they think that somehow that technology is involved when they use any of the Watson.* products.

The use of the Watson name is a deliberate attempt to take advantage of the Jeopardy game. It's a name that has cachet, and I've seen just enough of the marketing perspective to know that marketing will push very hard to reuse a successful name.


I used to work for IBM, on a backend service used by various Watson (and non Watson) branded projects.

I think federation would be a better term. There was a core set of APIs and hardware that might be called "Watson proper," but each market segment was handled by a different organization. And then there was the proliferation of oddball things out of research, or little groups looking for growth/stability, that got Watson-branded.

Sometimes we'd be the first time a team realized there was already something doing what they'd been building.


At my previous firm I worked with a pre-sales engineer who was formerly at IBM Watson before working at the firm. This is essentially it. Implementations of Watson were no different than doing an ERP project.


I knew people who had their division pay for Watson as a way to get their business AI and data science needs fulfilled without hiring developers outside their price range.

Eventually they scrapped the project because it not only took a ton of employee time to talk with IBM's team to get it set up and working, it also cost a significant chunk of money and wasn't as good as what the people who already worked at the company thought they could do themselves.

With all due respect to the people who work at IBM, I just can't imagine IBM's sales and consulting cultures working well with deploying AI. I don't know firsthand, but from what I've heard (and what I would guess anyway), a lot of the people selling Watson, and those actually on the front lines working with it, probably aren't that knowledgeable about AI/ML. I just don't see how you could determine a project's feasibility or effectiveness without a sharp conceptual knowledge of the actual AI algorithms and of what kind of data is needed to make them shine.

Suppose a university admissions department offers paper surveys to prospective students at the end of on-campus tours. In an effort to improve admitted student yield (the percentage of students that actually attend the university after being accepted), the university wants to be able to scan these surveys' text digitally and then perform sentiment analysis to determine how excited the student is about attending the university, or more directly, how likely the student is to matriculate. The university doesn't have any people capable of doing this, so they get into contact with Watson.

How much will the salespeople at Watson pry into the questions on the survey, the demographics and culture of the school, or the sample size? Will they ask about statistics such as acceptance rate, yield, and which students are most likely to matriculate (based on quantifiable metrics)? Even the type or color of paper and the text field sizes on the surveys could affect the feasibility of the project regarding OCR, or bias the responses toward short answers. I would argue that a lot of knowledge about the project would be necessary before a sales quote, or even the feasibility of the project itself, could be considered - but would a salesperson know to ask these questions? Would they even be incentivized to ask? Would the consultants know that certain questions could make OCR hard or sentiment analysis a wash? Would a statistician be consulted to see if the same or better results could be obtained from simple analysis of GPA, ZIP, and test scores?
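To make the "simple analysis" point concrete, here's a toy sketch of the kind of baseline a statistician might run first: estimating yield by GPA band from past admits. Every record, the cutoff, and the function name are invented for illustration; a real analysis would use far more data and covariates.

```python
# Hypothetical baseline: matriculation (yield) rate per GPA band from past
# admits, computed before spending anything on OCR + sentiment analysis.
# All records and the 3.5 cutoff below are made-up numbers.
past_admits = [
    # (gpa, matriculated)
    (3.9, False), (3.8, False), (3.7, True), (3.5, True),
    (3.4, True),  (3.2, True),  (3.1, False), (2.9, True),
]

def yield_by_band(records, cut=3.5):
    """Matriculation rate above and below a GPA cutoff."""
    hi = [m for g, m in records if g >= cut]
    lo = [m for g, m in records if g < cut]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(hi), rate(lo)

hi, lo = yield_by_band(past_admits)
print(hi, lo)  # in this toy data, high-GPA admits yield lower (0.5 vs 0.75)
```

If a two-line grouping like this already separates likely matriculants from unlikely ones, the expensive survey-scanning pipeline may not be worth it.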

I'm sure everybody at Watson is technically competent - and for most of the consulting and sales that IBM does, I wouldn't have to make the following qualification. But to be brutally honest, I think the kind of people who are familiar enough with AI to be whom you'd want working in Watson consulting and sales are probably using those skills as developers and data scientists instead. And even then, again with all due respect to IBM employees (and I know IBM puts out a lot of great research), those people might not be at IBM either.


some history on the topic:

at the beginning there was the one true watson. watson was a way to process, correlate and provide insight on a corpus of knowledge expressed in natural language. the technology was good but had one major weakness: the knowledge-extraction part required a large bulk of manual labor to weed out the noise from the relevant parts, because to a processing engine each bit of information is equal to every other bit of information. so you needed domain experts to provide an initial tuning, and after that watson was a good solution for the problem statement.

this process however required non-technical domain experts to work closely with the watson analysts on the tuning for an unspecified and quite long amount of time, compounding the already astronomical cost of the solution itself

now, as you might imagine, like any other company ibm has a lot of divisions - cloud, services, intelligence etc. the watson division, due to the large research costs and the few clients able to afford and make use of the tech, was scoring too many quarters in the red.

ibm is also a financial company, so they did what they usually do when one division needs padding: they started moving everything remotely related to intelligence and analytics under the watson moniker, to drive up quarterly reports. this had the side effect that the watson marketing is a clusterfuck of overlapping and unrelated solutions that more often than not don't even work together natively, but are presented as a whole ecosystem.

now, of course, anyone trying to make sense of the whole thing is going to be faced with all kinds of claims about all kinds of solutions, without any idea of what does what.

but originally the only omission from marketing was what watson actually required, and how and what it could give back to a company. the problem is... it's the kind of solution you have to build to see where it goes. you cannot be sure of the results from the beginning.


Same thing Salesforce is doing with Einstein right now. Means that internally, when someone says a customer wants to talk about Einstein, people are all left wondering which one.


This is disingenuous in my opinion ... I’ve been at a company for a year that had bought into “Watson” — when I suggested alternatives to the specific apis being used, I was told in no uncertain terms that they had been consulted and they were going with “Watson” ...

Now a year later I’m finally being asked to clarify “what is Watson” so that the decision makers can better understand the techniques being used rather than the fantasies about what was being used that they were encouraged to develop through misleading marketing and consultants ...


I remember when Microsoft .NET came out and I hated it because they named it something that had no relation to what it was. The product had nothing to do with the Internet, but the Internet was a big new fad back then and marketing wanted to latch onto that.

Today .NET is a great product ecosystem and a huge success, except for the horribly awkward name.

I don't know if they'll ever regret naming it Watson, but latching onto the AI craze isn't necessarily a losing strategy as long as the products are good and successful. Even if they have nothing to do with AI.


It seems to be a common theme with projects in big companies: somebody comes up with a good project idea and a catchy name. As it gets resources and management attention, other departments re-brand their long-time toy projects with being a substantial part of the 'catchy-name' project's vision. At some point nobody knows anymore what it was all about. 'SDN', 'Cloud', 'Watson', '.NET' are all examples of this.


I attended the IBM Connections conference in Vegas shortly after the Jeopardy! thing, just after IBM started using Watson as a brand under which it lumped a bunch of analytics products. From questions and comments made during some of the sessions I attended, it became clear that a large portion of the attendees (mostly the business people) wrongly assumed that the technology that won Jeopardy! was now being used inside everything labeled with "Watson". People were very excited by this. I never heard anyone from IBM make any attempt to rectify this misconception; they just smiled, nodded and played along. I disliked IBM and their corporate marketing BS even more after this.


Totally. Especially because it was on Jeopardy, further reinforcing the idea that it is a single box. Maybe it was then, but that's definitely an impression that's stuck with me since.


I was convinced it was singular when they had a big mainframe sitting there to play. Today I strongly believe it's simply a brand.


But I bet you that when the singularity arrives, it will be named "Watson".


I briefly worked with a Watson team on a cool idea to map a person's 'knowledge space' (or probable knowledge space given their background) against Watson's knowledge space and guide them to relevant learning materials and journal articles and the like.

The idea was to save people time so they aren't rehashing stuff they know down pat or jumping ahead into material they cannot understand but, instead, find that next step into what they almost know. The idea from there would be to let them specify where they want to go and guide them, step by step, exposure by exposure, to that summit.

In a few days, it turned into Just Another News Article Recommendation Engine based on interest and similar profiles with other clients. Yawn.


In a similar vein, I find a machine beating a grand master fairly boring. Show me a machine that can teach someone to become a master (or even a very good amateur) and we can talk.

That is, don’t beat the game, beat the opponent, understand why, and then model an adaptation strategy for the opponent (teach).


> In a similar vein, I find a machine beating a grand master fairly boring.

I know a lot of people felt that way even before it happened. It feels like an accounting problem rather than intelligence. But, it's fun to remember when it was thought by some very smart people that chess required more accounting than was ever possible by a machine, so beating a grand master would have to be a demonstration of intelligence.

> don’t beat the game, beat the opponent, understand why, and then model an adaptation strategy for the opponent (teach).

Yeah that would be closer. My favorite Turing test, if you will, is whether the AI can tell you you're asking it the wrong question. If Watson got bored of beating grand masters at chess and started refusing to play, maybe then a case could be made it was reasoning.


A lot of AI "milestones" were actually caused by improvements in computing itself, IMO. That is, they weren't facilitated by novel algorithms or engineering solutions specific to the milestone itself. They kind of just happened, either because computers got faster (e.g. disk I/O, ram size, processor speeds and caching) or because someone finally decided to throw enough distributed computation at the problem.

People knew about the main algorithms that Deep Blue used (alpha-beta minimax with heuristic evaluation) for decades before Deep Blue existed. It was simply a matter of waiting until IBM found marketing value in dedicating a top-300 world supercomputer to beating Garry Kasparov.
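For readers unfamiliar with the algorithm being referred to, here's a toy alpha-beta minimax over a hand-built two-ply game tree. The tree and scores are invented; Deep Blue's real search and evaluation were vastly more elaborate, but the pruning idea is the same.

```python
# Toy alpha-beta minimax. Leaves are heuristic scores; interior nodes are
# keys in the tree dict. (Hypothetical example, not Deep Blue's code.)
def alphabeta(node, tree, maximizing, alpha=float("-inf"), beta=float("inf")):
    children = tree.get(node)
    if children is None:          # leaf: the node itself is the score
        return node
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, tree, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:     # prune: the opponent won't allow this line
                break
        return best
    else:
        best = float("inf")
        for child in children:
            best = min(best, alphabeta(child, tree, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Root has two moves; each leads to two leaf evaluations.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
print(alphabeta("root", tree, True))  # prints 3; the 9 under "b" is pruned
```

Everything here fit on a 1970s machine; what changed by 1997 was how many positions per second the hardware could score.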


> Watson got bored of beating grand masters at chess and started refusing to play

That's when you reboot it; if that doesn't fix it, good old persuasive maintenance techniques can also come into play


"I'm afraid I can't let you do that, Dave" - HAL


I'm interested in the original idea. Can you expand on how your ideal system would function?


Well, the hope was that Watson, having explored and built a connected knowledge graph from various sources, could ask probing, adaptive questions to find out where a person landed.

So, say I'm an undergrad at a good university and I tell the system "I'm interested in Computer Science. I am particularly interested in Scientific Computing and would like to get to a graduate level of knowledge."

The system might ask "Sort the following operations by their worst-case run-time...". Then, if they do well there, maybe "Which of these two examples of auto-parallelization using Matlab's parfor would fail to parallelize the code..." or something like that. Over the course of so many questions, the system would start to paint a more and more reliable picture of the contours of a person's knowledge.

This is time-consuming, of course, but over time it would get easier and faster to find contours by using the 'average' of people with similar backgrounds as a starting point.

Once a fairly good mapping is done of the person to Watson's cognitive model, Watson would need to trace back to the source(s) of nearby concepts and offer them to the user, which, ideally, are then rated by the user for relevance and perceived difficulty to further refine the person's model and rank the material offered for that particular profile.

Now imagine a Grad Student asking a similar question. Or a middle school student. What would those interactions look like? The mappings? The suggestions?

Don't get me wrong...mapping a person's knowledge space is a Very Hard Problem. Watson takes a kitchen sink approach that just isn't possible for a human being. And maybe it wouldn't be possible to tease apart the resulting cognitive model into tidy nodes enough to map anything to. These were questions I'd hoped that IBM could help answer. Instead, it was on to the easy, well-understood problem and solution.


This already exists; it's called adaptive testing. I guess the new part would be modelling somebody's ability not in one topic, but in many related topics.


Sure. Existing adaptive tests were an inspiration for the idea, of course. But putting together a good adaptive test is, itself, very time-consuming. And the test itself doesn't do more than measure ability. The crux of the idea is to use the adaptive test to suggest learning material from a wide swathe of sources to the individual.
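The adaptive-test core being discussed can be sketched in a few lines. This is a deliberately crude version of the idea behind item-response-theory testing: always ask the unasked item whose difficulty is closest to the current ability estimate, then nudge the estimate up or down. The item difficulties, step size, and simulated learner are all invented.

```python
# Minimal adaptive-testing sketch (toy version; real systems use proper
# IRT models rather than a fixed-step update).
def next_item(items, ability, asked):
    """Most informative unasked item: difficulty closest to the estimate."""
    candidates = [d for d in items if d not in asked]
    return min(candidates, key=lambda d: abs(d - ability))

def run_adaptive_test(items, answer_fn, rounds=5, step=0.5):
    ability, asked = 0.0, set()
    for _ in range(rounds):
        item = next_item(items, ability, asked)
        asked.add(item)
        # Move the estimate toward harder items on a correct answer,
        # toward easier items on a wrong one.
        ability += step if answer_fn(item) else -step
    return ability

# Simulated learner who reliably answers items at or below difficulty 1.0.
items = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
estimate = run_adaptive_test(items, lambda d: d <= 1.0)
print(estimate)  # lands near the learner's true threshold
```

The "new part" mentioned above would be running something like this over many linked topics at once, with correct answers in one topic also shifting the estimates for its neighbors.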


This is what Knewton is all about. knewton.com


Knewton is very cool, thanks for the link.

I'm assuming they break things down into traditional objective units within a standard curriculum, with relevant material attached to each.

That's great but it is a very manual process.

Remember Yahoo's curated keyword recommendations from the '90s? This Knewton approach is more like that. What I'd like to see is more like a Google Search in this analogy... something more flexible and comprehensive.


The basic idea is that you might start with a manual linkage between concepts, derived from linkages between content developed by subject matter experts, but over time it organically morphs into a knowledge graph that describes how certain concepts relate to and build on each other, and for which scenarios of learners.

That knowledge graph combined with a concrete goal (mastery level) and time to get there (deadline) can then be used to recommend to a specific learner what material to study or activities to do next.

It's theoretically possible to do this even more organically, but you need clearly tagged educational "stuff" (written content, lectures, activities, etc.) and, perhaps more importantly, clear ways to measure outcomes as a result of interacting with that "stuff" (typically quizzes which themselves have been clearly calibrated).
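The knowledge-graph recommendation step described above reduces to a small graph computation: given prerequisite links, a set of mastered concepts, and a goal, suggest the concepts the learner is ready for next. The graph, concept names, and function names below are all invented for illustration.

```python
# Hypothetical prerequisite graph: concept -> list of prerequisites.
PREREQS = {
    "loops":       [],
    "recursion":   ["loops"],
    "big-o":       ["loops"],
    "sorting":     ["big-o", "recursion"],
    "parallelism": ["sorting"],
}

def ancestors(goal, prereqs):
    """All concepts the goal transitively depends on, plus the goal."""
    seen, stack = set(), [goal]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(prereqs[c])
    return seen

def ready_next(goal, mastered, prereqs=PREREQS):
    """Unmastered concepts on the path to `goal` whose prereqs are all met."""
    relevant = ancestors(goal, prereqs)
    return sorted(
        c for c in relevant
        if c not in mastered and all(p in mastered for p in prereqs[c])
    )

print(ready_next("parallelism", {"loops", "big-o"}))
# -> ['recursion']: sorting still needs recursion, parallelism needs sorting.
```

The hard part, as the comment notes, is building and calibrating the graph and the outcome measurements; the traversal itself is the easy bit.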


Watson is the IBM marketing department going mad about ways in which IBM can continue to remain relevant in a world that increasingly doesn't care about what hardware a particular computer program runs on.

If there is going to be a 'second AI winter' I fully expect Watson and other such efforts to be the cause.


IBM hasn't been about the hardware for a long time, instead it has been about the consulting services contract. And when we, as a startup, first engaged with the Watson folks it was clearly a sales funnel for their consulting services.

That said, IBM has a tremendous amount of research they have done in AI over the years. It is not that they don't have a lot of interesting technology to throw at different business problems; it's that they seem to have a hard time getting invited to the party if they don't track the same hype buzz that the current ML/AI craze has embraced.


The Watson stuff is so oversold it is almost comical.

And yes, sure IBM hasn't been about hardware for a long time, they've been a services company for decades now. But as far as AI/ML is concerned Google and Facebook are attracting the top talent these days, Apple and Microsoft much further down the line.

What would be nice is if they would take the opposite tack: rather than marketing the hell out of it, quietly solve lots of problems that are hard to solve in a traditional way. Every time I hear about Watson it is in the context of something where I ask myself "What's the point of being able to do that?". If all there is to the hype is the hype itself, then it is hollow.


Yes. IBM should be best positioned among AI-aware companies to augment and extend "deep" knowledge-bases into the enterprise. The information infrastructure of Jeopardy Watson was impressive and ought to open doors for IBM to partner productively with other info management vendors to modernize and advance that corporate infrastructure which is driven by deep information. But instead it appears the short-term ROI-think of their non-technical SVPs is what's led them astray.

IBM continues to make the mistake that the flash bang uber-sexiness of ML (esp deep learning) matters more to the enterprise than deep info management (something which IBM can proffer while most competitors can't). If IBM were smart, they'd leverage their deep experience in databases and IR and promote that side of AI -- smarter info management. IMHO, this could do much more for their bottom line than the stupid pretense that Watson really is HAL 9000.


Here I am doing a reasonably good impression of Watson, by totally missing your point and just adding that we have already had (at least) two AI winters, this would be (at least) the third coming up, if it does.


But nothing for IBM to be concerned about. Their marketing has already moved on to the blockchain…


This would be the third or fourth AI winter in my estimation, going back to John McCarthy: https://en.wikipedia.org/wiki/AI_winter
