
IBM is not doing "cognitive computing" with Watson (2016) - dirtyaura
http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI
======
ChuckMcM
The author's particular gripe is the Watson advertisements showing someone
sitting down and talking to "Watson." They bother me as well (and did so when
I was working at IBM in the Watson group) because they portray a capability
that nothing in IBM can provide. Nobody can provide it (again to the author's
point) because dialog systems (those which interact with a user through
conversational speech) don't exist outside specific, tightly constrained
decision trees (like voice mail or customer support prompts).

If SpaceX were to advertise like that, they would have famous people sitting
in their living room, _on mars_ , and talking about what they liked about the
Martian way of life. In that case I believe that most people would understand
that SpaceX wasn't already hosting people on Mars.

Unfortunately many, many people think that talking to your computer is
actually already possible, they just haven't experienced it yet. Not sure how
we fix that.

~~~
albertgoeswoof
Kinda how Tesla advertises autopilot as a self driving car that's safer than
human drivers?

> Full Self-Driving Hardware on All Cars

> All Tesla vehicles produced in our factory, including Model 3, have the
> hardware needed for full self-driving capability at a safety level
> substantially greater than that of a human driver.

[https://www.tesla.com/autopilot/](https://www.tesla.com/autopilot/)

~~~
delinka
But it’s technically correct: the hardware _is_ present. The software isn’t
there yet, that’s true. So is this a “lie by omission”? Reading the whole
page, there’s this quote: “It is not possible to know exactly when each
element of the functionality described above will be available.” What else
needs to be said?

At some point, critical thinking is required to navigate the world. The
customer should ask “and when will the software catch up?” And I’ll concede
that the overwhelming majority of the population doesn’t think like that.
Should Tesla update the site with statements to confirm the negative cases?
e.g. “Our software can’t yet do X, Y, Z, etc”

~~~
c22
Since we have yet to build a system that is capable of "full self-driving ...
at a safety level substantially greater than that of a human driver" I'm not
even sure they can make the claim that their existing hardware supports this.
They might have high hopes, but there's no way to know what combination of
hardware and software will be necessary until the problem is actually solved.

------
laichzeit0
The most ingenious trick that the IBM marketing department pulled was to get
non-technical people (and probably even technical people, judging by this
thread) to think that Watson is some kind of singular thing, like a single big
neural network with different APIs on it, or something. I honestly think
that's what most people think Watson refers to.

Watson is like Google Cloud Platform. It’s just a name for a platform with a
bunch of technologies.

E.g. Watson Natural Language Understanding was previously AlchemyLanguage. It
was just rebranded.

It’s very clever though, I’ll give them that. Use a human name so it has all
the anthropomorphic connotations and let people think it’s some kind of AI
learning things.

~~~
nilkn
I'm not even convinced Watson is a platform. My impression is that it's just a
consulting division of the company that deploys teams to build solutions that
are in some way related to AI, with each solution or implementation
potentially being completely unique from the ground up. Perhaps someone from
IBM can correct me though.

~~~
landonxjames
I'm currently sitting in a meeting about implementing the Watson Enterprise
Search product in my company and that is more or less the impression I've
gotten. They sell it as a platform that is easy to customize and then once
you're in they bill you tons of hours to help you because the system is
indecipherable and poorly documented.

~~~
chaseha
And that, folks, is enterprise software services/consulting in a nutshell!!

------
clavalle
I briefly worked with a Watson team on a cool idea to map a person's
'knowledge space' (or probable knowledge space given their background) against
Watson's knowledge space and guide them to relevant learning materials and
journal articles and the like.

The idea was to save people time so they aren't rehashing stuff they know down
pat or jumping ahead into material they cannot understand but, instead, find
that next step into what they almost know. The idea from there would be to let
them specify where they want to go and guide them, step by step, exposure by
exposure, to that summit.

In a few days, it turned into Just Another News Article Recommendation Engine
based on interest and similar profiles with other clients. Yawn.
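That kind of profile-similarity recommender can be sketched in a few lines. This is a toy illustration of the general technique (nearest-neighbor on interest vectors), not what the Watson team built; all names and weights are made up:

```python
from math import sqrt

# Toy profiles: user -> {topic: interest weight}. All data is hypothetical.
profiles = {
    "alice": {"ml": 5, "chess": 1, "space": 3},
    "bob":   {"ml": 4, "chess": 2, "space": 2},
    "carol": {"chess": 5, "music": 4},
}

def cosine(a, b):
    """Cosine similarity between two sparse interest vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if dot else 0.0

def recommend(user, articles):
    """Rank articles by the interests of the most similar other user."""
    others = [(cosine(profiles[user], p), name)
              for name, p in profiles.items() if name != user]
    _, nearest = max(others)
    liked = profiles[nearest]
    return sorted(articles, key=lambda art: -liked.get(art, 0))

print(recommend("alice", ["music", "ml", "space"]))  # → ['ml', 'space', 'music']
```

The "yawn" is fair: the whole engine is a similarity function plus a sort, which is a long way from mapping anyone's knowledge space.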

~~~
hinkley
In a similar vein, I find a machine beating a grand master fairly boring. Show
me a machine that can teach someone to become a master (or even a very good
amateur) and we can talk.

That is, don’t beat the game, beat the opponent, understand why, and then
model an adaptation strategy for the opponent (teach).

~~~
dahart
> In a similar vein, I find a machine beating a grand master fairly boring.

I know a lot of people felt that way even before it happened. It feels like an
accounting problem rather than intelligence. But, it's fun to remember when it
was thought by some very smart people that chess required more accounting than
was ever possible by a machine, so beating a grand master would have to be a
demonstration of intelligence.

> don’t beat the game, beat the opponent, understand why, and then model an
> adaptation strategy for the opponent (teach).

Yeah that would be closer. My favorite Turing test, if you will, is whether
the AI can tell you you're asking it the wrong question. If Watson got bored
of beating grand masters at chess and started refusing to play, maybe then a
case could be made it was reasoning.

~~~
lerpa
> Watson got bored of beating grand masters at chess and started refusing to
> play

That's when you reboot it. If that doesn't fix it, good old persuasive
maintenance techniques can also come into play.

~~~
dahart
"I'm sorry, Dave. I'm afraid I can't do that." - HAL

------
jacquesm
Watson is the IBM marketing department going mad about ways in which IBM can
continue to remain relevant in a world that increasingly doesn't care about
what hardware a particular computer program runs on.

If there is going to be a 'second AI winter' I fully expect Watson and other
such efforts to be the cause.

~~~
ChuckMcM
IBM hasn't been about the hardware for a long time, instead it has been about
the consulting services contract. And when we, as a startup, first engaged
with the Watson folks it was clearly a sales funnel for their consulting
services.

That said, IBM has a tremendous amount of research they have done in AI over
the years. It is not that they don't have a lot of interesting technology they
can throw at different business problems; it is that they seem to have a hard
time getting invited to the party if they don't track the same hype buzz that
the current ML/AI craze has embraced.

~~~
jacquesm
The Watson stuff is so oversold it is almost comical.

And yes, sure IBM hasn't been about hardware for a long time, they've been a
services company for decades now. But as far as AI/ML is concerned Google and
Facebook are attracting the top talent these days, Apple and Microsoft much
further down the line.

What would be nice is if they took the opposite tack: rather than marketing
the hell out of it, quietly solve _lots_ of problems that are hard to solve in
a traditional way. Every time I hear about Watson it is in the context of
something where I ask myself "What's the point of being able to do
that?". If all there is to hype is the hype itself then it is hollow.

------
throwawayWatson
I used to work for IBM, but a few years ago.

One thing about Watson that I remember is this presentation by a _very_ senior
guy. He had just come back from the US and was presenting what he learned
there about Watson Healthcare (IIRC, that's what it was called), which I
assumed was a division of the Watson team that was focused on cancer and stuff
like that.

I'm paraphrasing, but during the presentation he said something like: "The
project was not originally called Watson Healthcare, it was called X (I can't
remember exactly), but potential customers were like 'No, no, leave X, we want
Watson', so we had to change the name to Watson Healthcare for the sake of our
customers. Watson Healthcare actually doesn't have anything to do with
Watson."

I couldn't believe, at the time, how much respect I lost for IBM in about 20
seconds. First of all, he thought we were idiots. You have to be brain dead to
believe that he renamed X to Watson Healthcare in order to help customers.
They just wanted to ride the hype train of the Watson brand and were lying to
everybody about it.

~~~
sgt101
His customers are CIOs; they need help selling things to the board. I guess
that's what he's talking about.

~~~
opportune
A lot of CIOs are surprisingly non-technical themselves.

------
nartz
The way IBM talks about it is complete BS. However, this round of AI is
definitely better than the last one. Specifically, what's different this time
around is that previously, expert systems and many machine learning
techniques required that you specifically hand code things like:

1. Parsing the input dataset into 'features'

2. Hand coding the logic and rules for many different cases (expert systems)

Now it has become easier to train a model such as a neural net where you can
provide much 'rawer' data; similarly, you just provide it a 'goal' in the form
of a loss function which it tries to optimize over the dataset.

By 'true' AI, I think most people mean 'how a human learns', which is
actually a very biased thing, since we humans have goals like the need to
survive, etc. I do believe it would be possible to encode these as goals,
although doing that properly and more generically seems a little bit in the
future.
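The "raw data plus a goal" idea can be sketched with a toy loss-optimizing loop. This is illustrative only (the data, learning rate, and iteration count are made up), but it shows the shape of the approach: no hand-coded rules, just a loss function minimized by gradient descent:

```python
# Fit y = w*x on toy data by minimizing a loss function directly.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

def loss(w):
    # The "goal": mean squared error over the dataset.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
lr = 0.01
for _ in range(2000):
    # Gradient of the MSE loss with respect to w; no rules, just descent.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # prints 2.04, the least-squares optimum for this data
```

An expert-systems-era version of the same task would instead be a pile of hand-written if-then rules; here the only thing you specify is the loss.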

~~~
mcguire
One of the neat parts of expert systems was that you could get surprising and
bizarre interactions between rules, leading to entertainingly nonsensical
answers.

One of the neat parts of neural networks is that you have no idea what rules
it's using, but it still manages to produce answers that are sometimes not
entertainingly nonsensical.

~~~
jacquesm
The bigger problem is that they produce answers that most of the time make
perfect sense, and then every now and then they spectacularly fail on input
that is indistinguishable from inputs that gave correct answers.

~~~
empath75
I think this also happens in the human brain quite a bit. There’s a lot of
times where you see something out of the corner of your eye that isn’t there,
or you duck because you think something was coming towards you, or you wave
because you think you recognize someone.

I bet at a lower level, various systems in the mind fail constantly but we
have enough redundant error correction to filter it out.

------
xemdetia
Having had some level of access to inside IBM the whole cognitive initiative
has just been this bizarre self-feeding marketing sales escalation where the
real engineering has to 'bring the cognitive' in the most Dilbert pointy-
haired boss sort of way.

------
glup
In a graduate NLP class at Berkeley a few years back, Dan Klein shared an AAAI
article on Watson from 2010 (when it actually was a distinct technology
stack and not just marketing nonsense). At that time IBM was focused on
question answering in Jeopardy. It was pretty clearly incremental rather than
novel. Dan used the example to show that 1) ensemble techniques can be
effective if done properly, 2) hyperparameters matter, a lot, and 3) there's
human intelligence and then there's Ken Jennings intelligence: looking at
precision and percent answered, he's in his own separate league. It made me
think a lot about individual differences in terms of declarative knowledge.

[https://www.aaai.org/Magazine/Watson/watson.php](https://www.aaai.org/Magazine/Watson/watson.php)

~~~
snarf21
It was also unclear to me when they did the contest whether Watson only had
access to the analog audio and/or image of the questions asked. Did they have
to parse the question the same as Ken?

Also, it was clearly optimized for a specific use case. If the questions were
reworded with more clues that were puns or needed inference, I think Ken would
have done about the same but Watson would have fared much more poorly.

~~~
pas
They had the text of the clue (the sentence), which they had to parse, and
the resulting question (Watson's response) was then sent through a
text-to-speech engine, but there was no speech-to-text.

[https://www.ibm.com/blogs/research/2011/01/how-watson-sees-h...](https://www.ibm.com/blogs/research/2011/01/how-watson-sees-hears-and-speaks-to-play-jeopardy/)

> At exactly the moment that the clue is revealed on the game board, a text is
> sent electronically to Watson[...]

------
seibelj
"Machine learning" was a pretty good buzz word, but "Artificial Intelligence"
is even better. And in a way, ML is part of AI so it isn't really lying.

IBM tries to sell into c-suites of companies that are less technically-adept
than the average HN reader. Their marketing seems to be pretty effective, at
least in getting proof of concept projects signed with big names.

Watson is simply IBM's ML product, but they call it AI and wrap it in
marketing for all the reasons every AI startup does the same thing.

~~~
AndyNemmity
I disagree that companies implementing "AI" are less technically-adept than
the average HN reader. This sort of comment has come up in other similar
discussions.

It is untrue. They are very technically adept, with teams of people who are
also aware of their problem spaces, and technology.

I understand there are some cases of non-technical teams making these sorts
of decisions, but it isn't the norm, and even smaller companies often have
incredibly technical teams.

I don't understand where this idea comes from. I've done consulting and
implementation of these types of projects for most of my life. My experience
says it's false. Where does the feeling that this is true come from?

Is it from people who aren't a part of the process theorizing that some
unknown force must not be as intelligent as they are? Is it from looking at
the decision making in general (why did they buy an ERP?), and making
correlations?

I'm honestly unsure how there's this widespread idea that there aren't
brilliant people everywhere doing the same work they are. Yes, there are
problems and challenges all over the place, but I find myself amazed, all
across the country and the world, at the level of expertise in companies.

~~~
jiveturkey
> _I disagree that companies implementing "AI" are less technically-adept than
> the average HN reader._

He didn't say that. He said that the c-suite of those companies was less adept
than our fellow readers. This is almost certainly true. I'm not sure that it
matters.

------
chomp
Yep, matches my experience. We invited IBM to our company to pitch Watson;
there was very little that was impressive about it. "Watson" is mostly just a
services integration group that will assign a team to add some basic NLP to
your web services. Someone with a free weekend and a book on TensorFlow or
NLTK can replicate most of what the IBM sales engineers pitch for Watson.

------
abhgh
Oh wow, Roger Schank [1]! Haven't heard that name in a while - he was quite
famous in the early days of AI. I wonder if he has figured out a good way to
marry ML to his theory of Conceptual Dependency (CD) [2], because that could
be ground-breaking for hard NLP problems.

Interestingly, I started reading the article without paying much attention to
who the author is. A few lines in I began to wonder if this was going to be an
unproductive rant, and whether the author had heard of things like CD ... It
became funny right about then, because that's also when I happened to glance
at the URL and saw Schank's name.

[1]
[https://en.wikipedia.org/wiki/Roger_Schank](https://en.wikipedia.org/wiki/Roger_Schank)

[2]
[https://en.wikipedia.org/wiki/Conceptual_dependency_theory](https://en.wikipedia.org/wiki/Conceptual_dependency_theory)

~~~
mark_l_watson
In the 1980s I spent too much time trying to use Conceptual Dependency in a
few small R&D projects. Looked promising, but I had little success with it.

~~~
abhgh
I think one of the challenges with using CD in the real world is the "unclean"
input that needs to be mapped to the various primitives and structures of CD,
or stuff in the same vein that came after. Without a way to do that
automatically, in a scalable fashion and with minimal human assistance, its
utility is limited. Which is why I feel that if we had a way to leverage the
current breed of ML techniques to automatically (or even semi-automatically)
define this mapping, it would be a big step forward.

~~~
mark_l_watson
Interesting idea!

------
brundolf
I'd like to make a plug for my company
([http://www.cyc.com](http://www.cyc.com)) whose "AI" is not machine-learning
based, does actual cognition and generalized symbolic reasoning, and lived
through the AI winter of the 80s. We've gotten some contracts as a direct
result of companies being disenchanted with Watson's capabilities.

~~~
mark_l_watson
I would like to ask you a question: over the years I experimented quite a bit
with OpenCyc, which you stopped distributing last year. Is ResearchCyc
reasonable to experiment with on a small server or powerful laptop? Is an OWL
version available?

~~~
_bxg1
OWL is not available, but ResearchCyc will definitely run on a laptop

------
samfriedman
OT from the headline, but I take issue with the author's claim that Bob
Dylan's work doesn't relate to the theme "love fades". Dylan has had a vast
career beyond his protest song days, and I'd argue that one of his best
albums, "Blood on the Tracks", would be accurately summed up as "love fades".

~~~
jpttsn
I agree; OP is one of those articles that start to make a reasonable point
only to frame it with a highly debatable specific example.

~~~
randcraw
Dylan didn't win the Nobel for his songs about faded love, and he won't be
remembered for them especially.

Schank is 72 years old. He came to know Dylan when his protest lyrics gave
voice to 20-somethings like Schank in the '60s. Like the rest of us, Schank
likely remembers Dylan less for his retrospective years thereafter.

~~~
xamuel
What you've called Dylan's "retrospective" years include an earth-shattering
comeback with three top-charting albums in a row starting with 1997's "Time
Out Of Mind" (which, incidentally, has quite a few 'love fades' lyrics). Dylan
himself has singled out "Time Out Of Mind" as the only one of his own records
that he himself goes back and listens to.

------
zerotolerance
Dear engineers, merit is useless when you're trying to sell something.
Authority is king, and people remember emotion and hyperbole. Marketing and
sales is almost always about representing authority regardless of merit. The
only thing that matters after a Watson sale is if Watson can help solve the
problems the customers have.

~~~
vertexFarm
True, it's well known that engineers don't often have appreciation for the
emotionality of marketing. But there's a limit here. We can't just have no-
holds-barred hyperbole and outright lying to the point of unfairly deceiving
customers.

Obviously it takes a legal professional to judge where that line falls, but
for something as specialized as this it's hard for laypeople to appreciate the
distinction between stretching the truth with enthusiastic self-promotion and
full-on false advertising. It's interesting to think about.

~~~
zerotolerance
Well, it might be argued that, with the exception of this author (who
apparently coined the terms they're using), neither the IBM marketers nor any
of the potential consumers understand what these words mean. It is pure
abstraction to most people. If the marketers were pressed, I'd guess that
they'd just define these terms differently than the people making claims of
fraud.

I'm imagining the glossy eyes of a judge or jury trying to grasp the nuance
and just throwing in the towel.

------
Dryken
Anyway, none of the companies that claim to be doing AI are actually doing AI.
AI nowadays is pure branding bullshit.

~~~
flamtap
I heard someone say that A.I. is just what we call technology that doesn't
work yet. Once it works, we give it a specific name, like "natural speech
recognition".

~~~
goatlover
However, if a robot from scifi were to walk out of the lab, like Data or Ava
from Ex Machina, or we had access to HAL or Samantha from Her, we wouldn't
just give it a specific technical name. We would consider those to be genuine
AIs, in that they exhibit human-level cognitive abilities in the generalized
sense.

It's true that in Her, Samantha was just an OS at the start, kind of like how
the holographic doctor was just a hologram at the beginning of Voyager, but as
both stories progress, it becomes clear they are more than that. By the end of
Her, Samantha and the other OSes have clearly surpassed human intelligence.

Those are fictional examples, but they illustrate what we would consider to be
genuine artificial intelligence and not just NLP or ML. The reason people
always downplay current AI is because it's always limited and narrow, and not
on the level of human general intelligence, like fictional AIs are.

------
fixermark
"This was about promoting expert systems. Where are they now?"

In 2017, Intuit, Inc., owners of Quicken, posted revenue of about $5 billion.

Not too shabby.

~~~
ballenf
As someone who is currently working on a successful commercial product with a
strong expert system component, I agree with the sentiment. The funny thing
about this project is that neither the product owners nor the marketers, and
not even the coders, ever use the term "expert system". It just doesn't sell
any licenses or garner any attention.

My view is that expert systems are a ubiquitous part of many products to the
degree it's hard to even recognize them as such. They're not the main focus of
anyone's marketing budget, because that makes about as much sense as promoting
your "revolutionary axle technology" to sell a car.

~~~
wodenokoto
A bit off topic, but I've been wondering a lot lately about what expert
systems are, and I hope you don't mind me asking a few things about them.

I started studying machine learning long after statistical models were the
absolute standard, and all I really know about expert systems is passing
phrases in textbooks about how the world has moved away from them.

How does one go about building one? MNIST character recognition is often
called the "Hello world of machine learning" ... what is the "Hello world of
expert systems?"

Is there a modern term for Expert Systems?
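Not an authoritative answer, but the usual "hello world" of expert systems is a small forward-chaining rule engine: a set of if-then rules fired repeatedly against a working memory of facts. A minimal sketch (the rules and facts here are invented for illustration):

```python
# A "hello world" expert system: forward chaining over if-then rules.
# The classic toy example is animal identification. Each rule says:
# if all conditions are known facts, assert the conclusion.
rules = [
    ({"has fur"}, "is mammal"),
    ({"has feathers"}, "is bird"),
    ({"is mammal", "eats meat"}, "is carnivore"),
    ({"is carnivore", "has stripes"}, "is tiger"),
]

def infer(facts):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has fur", "eats meat", "has stripes"}))
# includes "is mammal", "is carnivore", "is tiger"
```

As for modern terms: you'll more often hear "rule engine," "business rules engine," or "knowledge-based system" today; dedicated shells like CLIPS and Prolog-based systems work on this same principle at much larger scale.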

------
InTheArena
It's a pretty open secret in the community that what IBM pitches Watson can
do, versus what it (or any state-of-the-art system) really does, is pretty
much bunk. This author calls it fraud, but a more charitable interpretation
would be extreme marketing. We've seen a lot of failures with Watson,
particularly in the medical space - MD Anderson's cancer work, for example
([https://www.forbes.com/sites/matthewherper/2017/02/19/md-and...](https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/#2533023e3774)),
where MD Anderson paid around $40 million (on an original contract somewhere
near $4 million) and eventually abandoned it.

I do think Watson may be a fake-it-until-you-make-it thing. In particular,
they still have access to an incredible amount of data, and data determines
destiny in a lot of AI.

~~~
genofon
But their message is for outside the community, so I don't think they deserve
any charitable interpretation; they lost that privilege a long time ago. I
have to say I'm biased (I wish I could see them disappear soon), but I think I
have the right motivations.

------
zmmmmm
Had IBM sales people present on Watson as a security solution recently. The
stench of BS was so bad I nearly had to leave the room. It wouldn't bother me
if they kept things generic, but they deliberately sprinkle the presentations
with specific terms referencing hyped technology (deep learning, etc), with
the clear objective of deceiving the audience into thinking they are using
those technologies when they clearly aren't. It was unethical IMHO.

~~~
dx034
Did your company still buy it?

~~~
zmmmmm
No, we aren't quite the target market yet, but I can see the ground is being
prepared for when we might be.

------
raincom
Chomsky calls it a sophisticated form of computational behaviorism. Just as
the research program of behaviorism died out, this will eventually too. There
are other respectable criticisms of AI, like Hubert Dreyfus's 'What Computers
Can't Do'.

Neither Chomsky nor Dreyfus claimed that machine learning and/or AI won't
solve any problems, but rather that the kinds of problems they solve are not
relevant to the aspiration of human-like intelligence.

~~~
spearmunkie
Chomsky's "Where AI went wrong" is often ignored by the mainstream AI
community or dismissed. Peter Norvig's retort was poorly constructed, and
showed that he didn't comprehend Chomsky's argument.

Machine learning and AI are stuck in a rut, and apparently these so-called
ML/AI experts know better. The deep learning (along with ML) craze is a
hindrance to a true scientific theory of intelligence.

------
wintorez
I'm no expert in the field of AI or machine learning, so I have a question for
the experts here: has there been any theoretical breakthrough in AI in recent
years? I know we've had neural networks and different types of classifiers,
etc. for more than a few decades now, so apart from better marketing, has
there been a significant breakthrough that explains this sudden surge of
interest in AI?

~~~
vishvananda
There have been a number of breakthroughs in specific fields recently. In fact
it seems like there is a paper every week that pushes the state of the art
forward in some branch of AI/ML. I think the big one that triggered the
current excitement was the success of convolutional networks in image
recognition tasks. You can read about that one (in 2012) here:
[https://www.technologyreview.com/s/530561/the-revolutionary-...](https://www.technologyreview.com/s/530561/the-revolutionary-technique-that-quietly-changed-machine-vision-forever/)

EDIT: This paper refers to the algorithm as SuperVision which was the team
name, but it is more commonly called AlexNet. Here is another article
discussing it:

[https://qz.com/1034972/the-data-that-changed-the-direction-o...](https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/)
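For context, the core operation those convolutional networks rely on can be sketched in a few lines: slide a small filter over the image and record how strongly each patch matches it. This toy example (not AlexNet itself; the image and filter values are made up) uses a hand-picked vertical-edge filter, whereas a real network learns its filters from data:

```python
# Minimal 2D convolution (technically cross-correlation, as in most
# deep learning libraries): sum of elementwise products per patch.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter responds where pixel values change left to right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # → [[0, 2, 0], [0, 2, 0]]
```

A convnet stacks many such filter layers with nonlinearities in between, learning the filter weights by gradient descent instead of hand-picking them.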

~~~
wintorez
Thank you. The very answer I was looking for.

------
fallingfrog
I saw a demo of Watson a couple years ago at a trade show and was not super
impressed. Looked like a glorified Markov chain to me.
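For readers unfamiliar with the jab: a word-level Markov chain is just a next-word lookup table sampled at random. A minimal sketch (toy corpus, invented for illustration; not a claim about Watson's actual design):

```python
import random

# Build the transition table: word -> list of words observed to follow it.
text = "watson answers questions and watson answers trivia and watson wins".split()
chain = {}
for cur, nxt in zip(text, text[1:]):
    chain.setdefault(cur, []).append(nxt)

def generate(start, n, seed=0):
    """Walk the chain: each next word depends only on the current word."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        followers = chain.get(words[-1])
        if not followers:
            break  # dead end: the last word was never followed by anything
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("watson", 5))
```

The output is locally plausible and globally meaningless, which is roughly the complaint being made about the demo.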

------
megaman22
Everything IBM says about Watson should be taken with a few tons of salt.

I don't know how much I'm allowed to say, but they couldn't even get it to
work acceptably internally for some of the basic data mining and natural
language processing tasks that are so highly touted in some of their TV
advertising. This is with a gargantuan dataset, compiled from years of
relevant interactions, to train on in the particular area of interest.

~~~
tabtab
Re: _Everything IBM says about Watson should be taken with a few tons of
salt._

How is this different from nearly all _other_ "AI" companies? IBM is not the
only one guilty of hype. Neural nets can do specific things well, such as
mirroring a specific training set, but they still have major gaps in terms of
what many call "common sense". It does smell like AI Bubble 2.0 brewing out
there.

------
EastLondonCoder
It’s an unusually beautifully written article, well worth the read just for
the prose. As for the main sentiment, that a new AI winter is coming, I’m not
so sure. My layperson view is that we see quite a lot of commercial success
with these systems, so the current wave will be well funded for at least a
decade.

~~~
xevb3k
What are the big successes? Speech recognition? Which still seems rather bad
to me.

Language translation? Which, for the languages I’m interested in (Japanese),
is still almost totally unusable.

Self-driving cars? Which are not yet in production (and where the social
issues are probably far harder than the technical ones, and likely have been
since the 90s).

Is there some big application of ML that I’m missing that is a clear win?

~~~
fixermark
Amazon's system for finding related products is eerily accurate. The more you
buy, the better it gets at anticipating what you want to buy.

... but the larger wins they've had are behind the scenes, in the predictive
modeling that helps them get product into warehouses in front of demand spikes
(and into geographically-relevant warehouses to satisfy the spikes).

~~~
Analemma_
> Amazon's system for finding related products is eerily accurate.

Amazon's system for finding related products fucking sucks. After I buy an X,
I see ads for more X for months, which is completely useless. Sometimes I'll
see ads for the exact thing I bought... what?!

~~~
cryptoz
Yep, I see this all the time too. Buy a printer? Amazon thinks you want to buy
a new printer every day for months.

Google too. I bought a Pixel 2 online, from Google, signed in to Google. Now I
see ads on Google ad networks all day long to buy a Pixel 2. It has been 5
months of daily ads from Google to buy the phone I just bought. Ridiculous.

Ad networks are a clear financial win for AI - but they also show the
ridiculousness and are clear windows into the failures so far.

~~~
ModernMech
This. For all the talk about how Facebook is mining data to hyper-target
customers, it still seems like they're just throwing stuff at the wall to see
if it sticks.

------
plaidfuji
> People learn from conversation and Google can’t have one. It can pretend to
> have one using Siri but really those conversations tend to get tiresome when
> you are past asking about where to eat.

At first blush he sounds like my technologically semi-literate grandma, who
would definitely conflate Siri and Google as being part of the same grand
internet program. I had to read this twice to understand that in saying "it
can pretend to have one using Siri", he meant that asking Siri a question
sometimes redirects to a Google search, but he wrote it in a way that
personified Google as the actor with intent in that transaction. What an odd
and paradigm-breaking way to look at that.

------
crsv
I feel like they've moved on from their lies about Watson's capabilities to
lies about their capabilities with blockchain technology.

------
kevinSuttle
The problem is that even internally, there is this notion of "sprinkling a
little Watson to do the hard jobs" and then 'poof': problem solved. Marketing
reflects internally, too.

------
eeks
Can a mod add the date to the title? This piece is from 2015.

------
baxtr
AI is definitely starting to enter the "trough of disillusionment" in its
hype cycle.

~~~
cryptoz
Is it really? I don't see that at all. Rather I see AI as finally being
entrenched in normal, everyday products and services. AI is here to stay and
the hype hasn't even started yet, at least not compared to what's coming.

Billions of people own devices that they can talk to, that can talk back, that
can translate between more languages than humans can, that know facts about
you that you didn't even know yourself, etc.

Basic AI has come to be an expectation of many consumer products now. And
_real_ AI is coming faster than ever.

Fake audio and fake video, generated by computers/AI, are here. Self-driving
cars may be just a domain-specific expertise, but they are still AI by any
traditional definition.

AI is not reaching any kind of trough of disillusionment that I can tell.
We're still obviously just getting started with what can be done.

</AI hype post.>

~~~
nkassis
While there are domains where AI is having successes, the current general
expectation is far ahead of where the technology is. People think (and in part
due to the marketing like IBM is putting out) that ML/AI can do things that
aren't possible yet.

There may be a reset in expectations soon, which will lead people to become
more pessimistic about the claims being made and the marketing. Once we are
through that, it will be easier to get people to understand which use cases
align with the available technology, and implementing ML/AI will become more
productive. Aka the Trough of Disillusionment followed by the Plateau of
Productivity.

------
jonjojr
I would only refer to Episodes 4-5 of Season 5 of Silicon Valley.

This will be the mistake we make when we introduce a technology we think is
smart enough to make decisions for us, and it turns out all it does is read
words faster than we can and interpret them literally.

Even with the closing statement of "AI winter is coming soon." I can see
Watson having a problem understanding that statement even with context.

------
sosuke
Something feels wrong about the assessment of the second block of text being
written by a human.

I feel some conflict in my head that someone would talk about Bob Dylan as
"overstepping" a claim about his prominence and then concede that he does
belong in a "Top 10 Bob Dylan Protest Songs" list. Of course he belongs in a
Bob Dylan list; he is Bob Dylan.

Does that sound human?

~~~
jakeinspace
I think the text is pulled from a "Top 10 Bob Dylan Protest Songs List"
article.

------
bobthechef
Watson's marketing is obnoxious, but it's not just Watson. There is plenty of
bullshit, ignorance, and pseudo-intellectualism to go around. Mind you, many
of the technical fruits of AI itself, properly understood, are not bullshit
(the name "AI" is misleading IMO; I wouldn't be able to tell you what
distinguishes AI from non-AI because it seems largely a matter of convention
rather than a substantive difference). The field offers plenty of useful
techniques for mechanizing things people have previously had to do. However,
the very idea of a "thinking computer" is unjustifiable and superstitious.
There's too much sloppy, superficial thinking.

The author of the article mentions concepts and indicates a distinction
between them and word counting. Certainly, there is a difference between word
counting and conceptualization, and it is patently obvious computers don't do
the latter. But it's worse than that. Technically, computers aren't even
counting words. They aren't even counting, nor do they have any concept of a
word (we count words by first knowing what it is that we should be counting,
i.e., words). What we call word counting when a computer does it is a process
which produces, only incidentally, a final machine configuration that, if read
by a human being, corresponds to a number. The algorithm is a proxy for actual
counting. It is a process produced by thinking humans to produce an effect
that can consistently be interpreted as the number of words (tokens) in a
string. That's not thinking. There is zero semantic content and zero
comprehension in that process, and no number of tortured metaphors or twisted
definitions can change that. AI, as it becomes more sophisticated, is at best
a composition of processes of the same essential nature. No degree of
composition -- no matter how sophisticated or complex -- magically produces
thought any more than taking sums of ever more composed and expansive sequences
of integers ever gets you the square root of two. It's not a mystery.
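The mechanical process described above can be made concrete. A minimal sketch
(my own illustration, not anything from the article): a "word count" is just a
deterministic transformation of one byte pattern into another that a human
reads as a number.

```python
# A word count, mechanically: a deterministic transformation of bytes
# into another byte pattern that humans interpret as a number. Nothing
# in the process requires (or exhibits) a concept of "word".
def word_count(text: str) -> int:
    # "Word" here is only an operational proxy: a maximal run of
    # non-whitespace characters. That definition lives in the human
    # who wrote the rule, not in the machine executing it.
    return len(text.split())

print(word_count("The lady pets the cat"))  # 5
```

The output is correct by construction, yet the procedure never touches
anything resembling the meaning of a word, which is exactly the point being
made.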

------
gaius
It’s not just sleazy advertising, the money IBM has taken from cancer research
for snake oil is downright fraud in my book

------
mankash666
Advertising != Peer-reviewed publication. Drinking Coke doesn't make you sexy
and desirable, as suggested by their ads; it just makes you gassy.

IBM is allowed some creative liberties in their mass-media advertising
campaigns.

------
jimrandomh
IBM's customers probably understand that Watson (the jeopardy-playing bot)
isn't really relevant, and that what they're buying isn't a pre-written
software package so much as software consulting services. But there's still a
serious problem, which is that a customer of Watson would reasonably believe
that they're getting the team of engineers that solved Jeopardy. In reality
there is no overlap whatsoever in personnel between Watson-the-PR-project and
Watson-the-thing-you-hire.

~~~
huhlig
Watson Services are very much a product you buy/subscribe to. Just like the
Machine Learning services you get from Microsoft or Google. You can use them
pretty easily with absolutely no consulting at all.

------
Zigurd
Start by asking "What is Watson for?"

Watson is for helping decisions at large corporate customers: The CEO has
heard of Watson. He saw it on a screen in the VIP tent at the golf tournament,
and thinks it's neat. The CIO feels safe with "Watson" in an RFP response from
IBM because the CEO thinks it's neat. IBM is happy with this pettifoggery
because it keeps the SOW vague and open to maximizing revenue from the
project.

It's not about AI.

------
mwexler
I always feel that this is one step away from that famous quote, "The greatest
trick the Devil ever pulled was convincing the world he didn’t exist,"
attributed to various
([https://quoteinvestigator.com/2018/03/20/devil/](https://quoteinvestigator.com/2018/03/20/devil/)).
In this case, the greatest trick is convincing the world that it does exist...
and maybe that is the harder one.

------
sriku
I don't have gripes with this marketing approach, and I read it more as
"expect to be surprised by what is possible" rather than harping on what
isn't... at least not yet. For comparison, it is like Apple branding their
display tech as "retina display" to communicate the intention (you can't tell
pixels apart), the possibilities (you can now use any font), and quality,
rather than any claims about mimicking the eye.

------
mathattack
“These guys are a fraud. Come look at my online Academy. Call for the
price.”

Roger Schank used to be a serious researcher. Also tied to consulting firms
like Accenture.

------
thomasedwards
I think the biggest concern is that the general public outside of this
industry think that AI and machine learning is Hey Google not understanding
them and their bank working out you’ve run out of money – which they already
know. When AI _actually_ arrives, they’ll be bored, ignore it, and then, well,
I guess it’ll know and take over the world. We’re all doomed.

------
urmish
>A point of view helps too. What is Watson’s view on ISIS for example?

>Dumb question? Actual thinking entities have a point of view about ISIS. Dogs
don’t but Watson isn't as smart as a dog either. (The dog knows how to get my
attention for example.)

Ouch. But at the same time, for something that didn't grab his attention he
sure had a lot of words to say about it.

------
braindongle
>...counting words, which is what data analytics and machine learning are
really all about

The piece is welcome anti-hype, but, what? How can a true expert in the field
say something like this? Or, maybe I should tell my colleague who is working
on ML for diagnostic radiology to think of voxels as, uh, words?

------
jiveturkey
> _I started a company called Cognitive Systems in 1981._

Ahh. So just from the article, his gripe is of the “begs the question” sort —
he’s not pleased with the evolution of idiom. Since he was doing “real AI”
back then, who are these frauds to claim they are doing AI?

His point may or may not be valid, but his specific argument is quite weak. He
notes that even _a person_ , an actual intelligence, wouldn’t know what Dylan
was singing about without context. He goes on to presume that Watson doesn’t
have context, but who’s to say? Watson could certainly read all the articles
about Dylan that he so helpfully cites, and come to “understand” the songs.
And maybe Watson has.

If you follow the links to his academy, you divine a bit more of the
motivation.

His software development course provides “a unique automated mentor, employing
natural-language processing technology derived from our decades of artificial
intelligence research”. He is desperate to stay away from calling it actual
AI, yet can’t resist implying that it is. This is probably most irksome to
him.

My advice: sometimes you have to join ‘em.

~~~
danpalmer
He specifically addresses that he _wasn’t_ doing “real AI” then, and given
they’re doing fundamentally the same or similar things, IBM aren’t now either.

~~~
jiveturkey
Yes, that is my point (sorry that I mis-stated it a bit). He is offended by
the claims of these charlatans when he knows the technology hasn't advanced to
the point of being AI. How dare they claim otherwise.

Also, he knows that they know it's a lie. Unforgivable.

But he's fixated on the literal (once-true) meaning of "AI". It doesn't mean
that anymore, not to the lay person and not to the average technologist
either.

Like "begs the question", one has to just get over it.

~~~
xfer
Not to the lay person? Really? Do you have any evidence to back it up? Hint:
HN is not full of lay persons.

------
intrasight
I remember seeing "Watson" mentioned in the news like five years ago, but
besides a couple HN threads, I've not seen it mentioned since then. Am I
missing something (besides TV, which I don't watch)?

------
davidsawyer
Here's a great video that covers "AI winters" for those who are curious:
[https://vimeo.com/170189199](https://vimeo.com/170189199)

------
tjpnz
Isn't it more or less common knowledge now that Watson is all marketing buzz?
The Watson that IBM is selling CIOs on is a very different thing from what was
seen on Jeopardy.

------
acobster
We won't know how to build machines that understand until we know what
understanding actually _is_ at a biological level. I'm not convinced that we
do.

~~~
cicero
I am not even convinced that understanding happens at a biological level. I
think there is still a lot that can be done to model aspects of human
reasoning in software to produce useful results, especially if it is married
with machine learning, but I don't think we will get there by looking at
biology.

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=11751267](https://news.ycombinator.com/item?id=11751267).

------
daveheq
"People learn from conversation and Google can’t have one. It can pretend to
have one using Siri"... Google doesn't use Siri.

------
theschreon
"Search is all well and good when we are counting words, which is what data
analytics and machine learning are really all about."

There are machine learning models which go far beyond counting words, for
example see
[https://arxiv.org/abs/1502.01710](https://arxiv.org/abs/1502.01710)

~~~
wodenokoto
Parent links to Yann LeCun's "Text Understanding from Scratch" paper, from
2015, where the authors use a conv-net, originally built for image
recognition, to do text categorisation.

The NN technique falls squarely in the "counting words" bracket, although
this one is actually counting characters.
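The "counting characters" part refers to the paper's first step, character
quantization: each character becomes a one-hot column over a fixed alphabet,
and the conv-net treats the resulting matrix like a 1-D image. A rough sketch
of that encoding (with a small illustrative alphabet; the paper uses a
70-character one):

```python
import numpy as np

# Character quantization, roughly as in "Text Understanding from Scratch":
# one-hot encode each character over a fixed alphabet. The alphabet and
# sequence length here are toy values chosen for illustration.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # 26 letters + space
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text: str, length: int = 16) -> np.ndarray:
    """Encode `text` as an (alphabet_size, length) one-hot matrix.
    Characters outside the alphabet map to all-zero columns."""
    out = np.zeros((len(ALPHABET), length))
    for pos, char in enumerate(text.lower()[:length]):
        if char in CHAR_INDEX:
            out[CHAR_INDEX[char], pos] = 1.0
    return out

m = quantize("bob dylan")
print(m.shape)       # (27, 16)
print(int(m.sum()))  # 9: one non-zero entry per encoded character
```

Nothing in this representation knows anything about Dylan or protest songs;
it is purely positional character statistics, which supports the
"counting characters" characterisation.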

It is a great paper, with great results, but none of the models therein have
an opinion on ISIS, an ability to converse, or anything the author of TFA
calls cognition.

------
wizardhat
I do agree that Watson seems oversold, but the evidence in this article
(Watson's shallow opinion of Bob Dylan, compared to the author's opinion of
Bob Dylan) seems kind of weak. I was hoping for some insider information on
the implementation of Watson, but unfortunately there is none.

------
jameslin
The day AI understands my dirty jokes, it's the day I call it cognitive.

------
gfnord
It's just advertising.

~~~
SahAssar
False advertising. Is that not worth calling out?

------
dwighttk
(2016)

~~~
fixermark
"©2018 Roger Schank" at the bottom? Unless that's just an auto-generated
copyright output that updates to the current year.

~~~
dwighttk
that's just auto generated... I saved the bookmark in 2016... could be even
earlier. I don't see any dates. Wayback machine has its first saved copy in
May 2016 which matches my bookmark.

[http://web.archive.org/web/20160523070729/http://www.rogersc...](http://web.archive.org/web/20160523070729/http://www.rogerschank.com/fraudulent-
claims-made-by-IBM-about-Watson-and-AI)

------
consultSKI
Methinks voice will in fact win.

------
DoctorOetker
Normally my comments are very sceptical, but this article is just spot on.

The problem is not only context or subtext, it is even worse from an entirely
predictable standpoint:

Consider a large corpus of text (books, articles, ...).

1) Concepts that are WELL UNDERSTOOD BY HUMANS will not be explained by humans
when they reference them: the word or concept "pet" (as a verb) will show up
in many sentences referring to the petting of cats, dogs, horses, ... and ML
will correctly predict the conjunction of "pet" with any of these words in the
sentence. One will even be able to train ML to confabulate realistic sentences,
in the sense of echolalia. Consider then the sentence:

"The lady pets the cat"

The computer will recognize that the presence of cat is not surprising
(bingo!). The computer will have no idea that this probably involves one or
more cycles of the lady's hand _gently_ pressing down on the fur belonging to
the cat, then while still pressing down, moving the hand in the 'natural
direction of the hairs' (NOT the other way around) probably from closer to the
head towards the tail, and probably lifting the hand before repeating the
cycle so as to massage the cat or perhaps so as to remind the cat of its time
as a kitten being licked by the mother cat.

No book or conversation in the corpus will give this detailed description
exactly because humans expect each other to understand this.

2) Concepts that are POORLY UNDERSTOOD BY HUMANS will be vigorously (but often
erroneously) explained by humans communicating to each other what they think
is going on: endless texts about religion, sexuality, perpetuum mobiles,
economy, ...

How do we even expect the computer to produce a sane result, even if it
correctly guesses the context?

That said, I do believe relatively helpful natural language processors to be
possible, but they will have to be vigorously trained by multiple human
curators individually analyzing a sentence and trying to find (probably true)
statements about what a sentence implies:

Starting again with:

"The lady pets the cat"

One curator might mention hands touching fur while moving.

Another curator might add that one can also conclude the hand probably presses
down on the fur.

Yet another notes the sentence also implies the lady is still alive, for else
she would not be able to pet.

The first curator now adds that the cat as well is probably alive, for else
the lady would probably not want to pet the cat, since massaging is useless to
a dead cat.

The third curator now mentions that the sentence implies one or more cycles of
an individual pet stroke.

Etc...

As you can see this quickly becomes an expensive operation.

Now one might train an adversarial neural network to look at a sentence (or
sentence with context) and a list of probable conclusions, to predict if the
list of valid conclusions is complete or incomplete. And then only send the
incomplete ones to humans?

------
itp
What is the point of changing the headline of an article like this? The
headline was an accurate summary of the contents ("THE FRAUDULENT CLAIMS MADE
BY IBM ABOUT WATSON AND AI"). Maybe you agree, maybe you don't. But the
current headline is just wrong.

From the article:

> I will say it clearly: Watson is a fraud. I am not saying that it can’t
> crunch words, and there may well be value in that to some people. But the
> ads are fraudulent.

That's what this is about. Not "Claims made by IBM about Watson and AI."

~~~
teach
The HN submission guidelines include the following rule:

"Please use the original title, unless it is misleading or linkbait."

The original headline is a bit sensational/clickbaity. That's a judgment call
in this case, IMO, but I suspect that's why the title was changed.

~~~
astro_robot
I wish click bait were better defined. In this case, I feel the original
title summarizes the entire claim made in the article. The title "Claims
made by..." sounds like it'll be an article summarizing the claims made by
IBM about Watson and AI, rather than a push back on those claims.

~~~
dqpb
I'll go ahead and define clickbait using a simplified and possibly
anthropomorphized version of information theory.

First:

\- The more improbable a message is, the more information it contains,
assuming the message is true

\- If the message is untrue it contains no information

\- If a message is already known by everyone it contains no information

Not Clickbait:

If a message is surprising (seemingly unlikely or previously unknown), and
true, then it contains high information, and will be very likely to be
clicked. This is not just a good thing; it's optimal!

Clickbait:

If a message is surprising and untrue, it will also very likely be clicked if
the user cannot easily determine that the message is untrue. The user may then
be disappointed when they discover the message actually contained no
information because it was false. False messages will always have a high
probability of appearing to be high-information messages, because they will
often appear to be the least likely.

Incidentally, this is (in my opinion) the theoretical problem of fake news. It
will always appear to be high information to those unable to determine if it's
true or false. In other words, it will appear to be of the highest value, when
really it has no value (or even negative value if you look at the system level
rather than just information level).
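The surprisal idea above has a standard formula: the information content of a
message with probability p is -log2(p) bits. A small sketch of the comment's
three rules (the probabilities are made-up illustrations, not measurements):

```python
import math

# Information content per the comment's rules: improbable-but-true
# messages carry the most bits; an untrue message carries none,
# however surprising it looks.
def information_bits(probability: float, is_true: bool) -> float:
    if not is_true:
        return 0.0  # untrue message: no information
    return -math.log2(probability)  # standard self-information

print(information_bits(0.5, True))    # 1.0 bit: a coin-flip headline
print(information_bits(0.01, True))   # ~6.64 bits: a surprising truth
print(information_bits(0.01, False))  # 0.0: clickbait / fake news
```

On this model, clickbait and fake news look maximally valuable precisely
because low probability mimics high information, until truth is checked.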

------
metabaudoom
It's not new! IBM does what it's best at, which is advertising.

------
dmccrevan
Machine Learning in a lot of ways is legitimate, but its applications to a lot
of NLP / chat-bot technologies is far from cognitive computing.

------
deisner
"Recently they ran an ad featuring Bob Dylan which made laugh, or would have,
if had made not me so angry." Wait, is Roger Schank an AI trying to convince
humans that AIs are impotent and harmless? Pretty sneaky, Schank-bot.

------
chisleu
I worked on data systems that fed weather data into Watson.

IBM's technology and IBM's marketing are very different beasts. The marketing
is somewhat trivial, but the reality of ML and the Watson data systems is
incredible. They are pumping a huge amount of data into it and they have data
scientists doing incredible things with the data already. It is the largest
growing segment of the company (and maybe the only growing segment of the
company.)

The marketing of AI, and the realities of ML are always going to be
disconnected. Sure, you will likely just get an email that says routine
maintenance has been ordered on your elevator, not a box in the corner that
tells you so in a sexy, clear, non-robotic voice. As for Bob Dylan's songs...
Christ. AI winter isn't coming unless the term AI gets squashed and people
start calling it ML. This article is pretty FUD.

~~~
jbob2000
The problem is that all of this AI and ML is being used to target ads better.
You don't need AI and ML to do this, it is a solution looking for a problem.

Your example about routine elevator maintenance doesn't need AI at all.
Whoever logs the elevator maintenance can just press a button to send a
notification message. Hell, it can be wired up so that when you submit a
maintenance notice, it automatically sends an email. They probably have condo
management software that does this already.

~~~
kthejoker2
Speaking for OP - I think the idea was that ML would determine that your
elevator was in need of maintenance and place the order without human
intervention.

Although the word "routine" rather belies that.

~~~
jbob2000
Exactly. We already know when the elevator needs to be maintained. If it isn't
being maintained, it's not because people don't know, it's because they are
intentionally NOT doing the maintenance, maybe because it costs $120/hr
minimum 6 hours.

------
fwdpropaganda
Question: how can the author know what Watson really is doing unless the
author worked on Watson?

If you're going to try and explain the above to me, don't do it by explaining
what Watson "really" is doing (unless you worked on Watson yourself). Explain
exactly how you can tell the difference from the outside.

As far as I can tell, all the author did was point out that Watson made a
mistake (saying that Dylan's songs are about love fading), and this is not
enough, since humans make mistakes too.

~~~
abhgh
I am not sure if the author did this (he certainly doesn't mention it), but a
lot of what constituted Watson at the time of Jeopardy was published as a set
of 17 papers called "This Is Watson" in the IBM Journal of Research and
Development.

Beyond that, I think you can make a reasonable assessment of any NLP
system at this level of abstraction by looking at what is cutting edge in NLP
today (by following conference proceedings etc).

