
IBM is not doing "cognitive computing" with Watson - Jerry2
http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI
======
pesenti
If you want to figure out what Watson can do and bypass all the marketing
hype, you can just try all the services available at
[http://ibm.com/watsondevelopercloud](http://ibm.com/watsondevelopercloud).

I won't argue that the PR often goes too far, and that's a big debate we have
internally (I work for Watson). But it's a pity that most of the negative
opinions expressed here come from people who haven't even bothered to try any
of the services we put out there or read any of the scientific papers that
have been published by our team.

~~~
guiambros
Coincidentally I spent the last couple of hours playing with Watson's services
(I was looking for a decent Speech-to-Text API for a toy project).

Marketing aside, I must say that BlueMix's user interface is the worst. thing.
ever. Buggy, extremely slow, fragmented, with cryptic error messages[1]. Took
me 45+ minutes to reactivate my account (the old one was kindly _deleted_ due
to inactivity) and create the TTS/STT service endpoints.

Funny; after all the investment to develop Watson, you'd think the UI would be
the easiest part to get right.

On the positive side, the TTS and STT APIs are simply a pleasure to work with.
The documentation is excellent, accuracy is pretty good, and the demos are
spot on. Plus you have support for streaming audio through WebSocket for STT
(which is a must for my project), and a few voices to choose from for TTS.
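
For reference, here's roughly what a basic STT call looks like. A minimal
sketch in Python; the endpoint URL, the basic-auth service credentials, and
the response shape are assumptions based on the current service docs, so
check them against your own instance:

    # Sketch: Watson Speech-to-Text via the sessionless HTTP endpoint.
    # URL and credentials are assumptions; substitute the ones from your
    # own Bluemix service instance.
    import requests

    URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
    with open("sample.wav", "rb") as f:
        resp = requests.post(
            URL,
            auth=("<service-username>", "<service-password>"),
            headers={"Content-Type": "audio/wav"},
            data=f,
        )
    # Each result carries one or more alternatives with a transcript.
    for result in resp.json().get("results", []):
        print(result["alternatives"][0]["transcript"])

The WebSocket interface is conceptually the same, with the audio chunked
over the open connection instead of posted in one request.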

[1] [http://imgur.com/3s4KUPv](http://imgur.com/3s4KUPv)

~~~
pesenti
We are very aware of the Bluemix usability issues and are working with that
team to address them. Did you try the new Bluemix by any chance?

Thanks for the kind comments on our APIs. We are really trying hard to make
them usable.

~~~
guiambros
I tested the new UI yesterday, and initially thought it was equally bad: too
slow to be usable, couldn't figure out where things were, etc.

But I just tested again today, and the slowness is gone, so maybe it was an
isolated incident. I was able to play with it a bit more, add services, create
a few containers, etc. Definitely an improvement versus the classic interface.

I'm still getting used to the logical grouping and how to access your services
(I'm coming from years of AWS / Google Cloud), but the ability to go back and
forth quickly helps a lot.

Minor nitpick: any reason for the API icon to be pink instead of blue, like
all the others? I keep looking at it as if it were a different state (e.g.,
"activated").

------
nl
Meh.

I'm no fan of Watson-the-marketing-term, but this sounds like the bitter
remarks of a symbolic AI defender who is so sure that _their_ way of doing AI
is the only way that anything else is fraud.

Watson-the-Jeopardy-winner did "cognition" (which he implies means following
chains of logical reasoning) as well as any other system that has been built.

See for example "Structured data and inference in DeepQA"[1] or "Fact-based
question decomposition in DeepQA"[2].

It's true that the Watson image analysis services don't use this. I'm
guessing that's because they don't actually work very well in that domain.

[1]
[http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F...](http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D6177725%26userType%3Dinst&denyReason=-133&arnumber=6177725&productsMatched=null&userType=inst)

[2]
[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6177...](http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6177726&abstractAccess=no&userType=inst)

~~~
Ensorceled
Weird, I thought he was outraged because "cognition" was being co-opted by IBM
marketing and its actual abilities being misrepresented.

He wasn't saying "Watson is crap", he was saying "Watson isn't doing
cognition, stop saying it does"

~~~
nl
Right, but I think he has co-opted the term "cognition" to mean only what he
thinks it means. I don't believe that's true, and even if it is, Watson has
done that as well as any other system that has been built.

------
rm999
This article mirrors some huge frustrations I've had in recent years as a
long-time lover and pusher of machine learning. I spoke to a Watson booth
employee for a few minutes at a machine learning conference a couple years
ago, and almost right away had similar feelings. I don't like the term 'fraud'
here though, 'insanely oversold' seems more appropriate. I looked more into
Watson, and realized it's really just a large number of traditional (and not
very innovative) machine learning algorithms wrapped into a platform with a
huge marketing budget.

>AI winter is coming soon.

Perhaps; if so, Watson is certainly evidence of this. It frustrates me to no
end that machine learning has so much potential but is often lost in a sea of
noise and buzz words (as much as I love deep learning, I'm almost tempted to
lump that in here too given its outsized media coverage). Machine learning is
in its infancy of impact, but the overselling by mediocre enterprise companies
and ignorant press may shoot its credibility for years to come.

~~~
mark_l_watson
Machine learning is doing fine, please don't worry about that!

My wife was trying to find an old photo tonight and all I had to do was
suggest that she go to Google Photos and search based on a description. Same
thing with Facebook recognizing images.

I am getting into using deep learning for NLP, and that looks promising.
Google's parser is interesting, but I want to see how quickly they can get
many-layer networks to do things like anaphora resolution (matching up
pronouns with previous noun phrases).
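
A toy illustration of why the task is hard: the obvious baseline of linking
each pronoun to the nearest preceding noun breaks immediately. Everything in
this snippet is made up for the example:

    # Naive anaphora baseline: attach each pronoun to the most recent
    # noun. Real resolution needs syntax, gender/number agreement, and
    # world knowledge; this only illustrates the task.
    tagged = [("Mary", "NOUN"), ("met", "VERB"), ("John", "NOUN"),
              ("because", "CONJ"), ("she", "PRON"), ("liked", "VERB"),
              ("him", "PRON")]

    nouns_seen = []
    for word, tag in tagged:
        if tag == "NOUN":
            nouns_seen.append(word)
        elif tag == "PRON" and nouns_seen:
            # Prints "she -> John" (wrong) and "him -> John" (right).
            print(word, "->", nouns_seen[-1])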

~~~
ktRolster
Better parsers are interesting, but I feel like the parsers we have already
are 'good enough.'

The biggest hole in NLP is going from a parsed sentence to 'meaning.' Once we
get that right, and are able to feed meaning back into the parser, then all
our parsing problems will be fixed imo.

~~~
badlogic
The parsers available are good enough for English. Sadly, that's absolutely
not true for other languages.

~~~
realusername
As a French speaker with a strong accent, I find that Siri understands maybe
half of what I say, and I have to put in a lot of effort.

------
mark_l_watson
I spent a lot of time in the 1980s experimenting with Roger Schank's and Chris
Riesbeck's Conceptual Dependency Theory, a theory that is not much thought of
anymore, but at the time I thought it was a good notation for encoding
knowledge, as in case-based reasoning.

Having helped a friend use Watson a year ago, I sort of agree with Schank's
opinion in this article. IBM Watson is sound technology, but I think it is
hyped in the wrong direction. And overhyped. This seems like a case of
business-driven rather than science-driven descriptions of IBM Watson. Kudos
to the development team, but perhaps not to the marketers.

Really off topic, but as much as I love the advances in many-layer neural
networks, I am sorry not to also see tons of resources aimed at what we used
to call 'symbolic AI.'

~~~
musesum
Back in the 80's, there were two AI camps: syntax and semantics. My favorite
for syntax was Terry Winograd. For semantics it was Roger Schank. Back then, I
was trying to map AI onto biology. Syntax was easier; you could easily map the
edges to a McCulloch and Pitts neural model. Semantic nets were harder; the
edges were more a way of modeling symbolic relationships. So I couldn't wrap
my head around Schank. Wish I had; it felt like I was missing the point.

------
ACow_Adonis
I almost want to up vote the article on principle.

I try to go to various "machine learning" and "AI" meetups around my city.

The most frustrating, but relevant, lesson I've learnt is to just stay away
from everything IBM/Watson.

I can summarise every single bloody presentation they give because it's the
following:

"Now, when we say cognitive computing, we don't mean any of that sci-fi and AI
stuff, now here's a marketer and marketing materials that will explicitly
imply that we're talking about that sci-fi and AI stuff for the next 59
minutes. There will be no technical information."

~~~
unabst
> we don't mean any of that sci-fi and AI stuff

No one does.

Even with AlphaGo the hype was insane, but it was mostly caused by people
confusing weak AI with strong AI with Arnold Schwarzenegger. Any material that
intentionally plays on this confusion is arguably false advertising and
fraudulent.

The truth is, it's still impossible for anyone to talk about strong AI,
because we have hardly been able to define what it even is. We just know we
have it, and it's still unlike anything anyone is working on that has made it
to the public. The people who write the papers absolutely know this. It's a
common goal, but we have yet to engineer anything that remotely resembles
strong AI.

For the most part, we're either still busy figuring out small but hard
problems, or hacking resemblance and avoiding the important problems
altogether.

~~~
Retric
It takes years of training before humans can demonstrate human-level
intelligence. Unless the first strong AI is superhuman, it's not going to look
like a strong AI for several years.

~~~
JoeAltmaier
It takes years of training before humans can be trusted with pants. Really, we
should lower our expectations of AIs by three or four orders of magnitude. If
they can walk in less than a year of trying, then they win.

[http://www.gocomics.com/monty](http://www.gocomics.com/monty)

~~~
panglott
Link requires login.

~~~
sp332
It doesn't for me. Here's a direct link to the image if they don't block
hotlinking
[http://assets.amuniversal.com/6c1332e0fdec01335e11005056a954...](http://assets.amuniversal.com/6c1332e0fdec01335e11005056a9545d)

------
kastnerkyle
Thinking of "Watson" as a catchall term for machine learning research at IBM
is more useful than thinking of it as a unified platform (as the marketers
try to sell it). This includes research efforts in speech recognition, NLP,
and reinforcement learning, as well as fun stuff like the "Watson chef". The
underlying technology differs almost completely from product to product, but
it all still falls under the Watson umbrella.

In general, every company (startups and big cos alike) seems to be hyping
their "AI" capabilities out the wazoo, but three years ago saying those two
letters together was a death sentence. I don't know if this is a good or a
bad thing (hype is bad, but general interest is useful for the visibility of
the field), but it is definitely a sea change compared to the last 5-10 years.

I am extremely skeptical of most claims these days, and am a bit worried about
AI Winter 2.0 due to hype around largely mundane technologies. There are
exciting things happening in the space, but these things are rarely hyped to
the extent the more mundane results with corporate backing are.

------
greenyoda
For those who are unfamiliar with the author of this article: Roger Schank is
one of the early pioneers of AI research:

[https://en.wikipedia.org/wiki/Roger_Schank](https://en.wikipedia.org/wiki/Roger_Schank)

~~~
lingben
Don't mean for this to come across as snarky, but what has he done lately? Is
his best work from the 1980s?

~~~
Animats
That may be the problem. He sounds bitter.

~~~
fixermark
It does, unfortunately, have a bit of an "Old man yells at cloud" tone to it.
And I don't see any solid evidence for his arguments beyond appeal-to-
authority in that essay.

~~~
edtechdev
He may be a bit of a curmudgeon - his blog is called Education Outrage, after
all:
[http://educationoutrage.blogspot.com/](http://educationoutrage.blogspot.com/)

But he's done and continues to do a lot of good work. He helped found the
Learning Sciences (an offshoot of cognitive science, AI, psychology, etc.) in
the late 80s and early 90s. He has continued to do work in education,
including founding experiential schools and writing books on
education: [http://www.amazon.com/s/?url=search-alias%3Daps&field-
keywor...](http://www.amazon.com/s/?url=search-alias%3Daps&field-
keywords=roger+schank)

------
todd8
Patrick Winston taught my first AI course around 1974. Things have come a long
way, but back then I was flabbergasted to see a program perform calculus
integration. It seemed to be a task that took a certain amount of insight and
problem solving. Professor Winston then proceeded to break down the program
for us, and to my surprise it was easy to understand and wasn't very complex.

I'll always remember his comments at that point that AI is mostly simple
algorithms working against some database of knowledge.

I'm not sure I would still make that claim today; kernel-based support vector
machines aren't all that straightforward, and many of the cutting-edge machine
learning and AI programs are far from easy to understand. Still, there is a
feeling of disappointment when the curtain is pulled aside and the great Oz is
revealed to be nothing that magical.
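
That curtain-pulling is easy to reproduce now: symbolic integration, once a
celebrated AI demo, is a single library call today. A quick sketch with SymPy
(assuming the sympy package is installed):

    # Symbolic integration, once a celebrated AI demo, is now one call.
    from sympy import symbols, integrate, exp

    x = symbols("x")
    print(integrate(x * exp(x), x))  # -> (x - 1)*exp(x)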

------
iamleppert
I'm wondering why, as a company, IBM doesn't seem to be doing good in their
core business, yet somehow they want us to believe they are at the forefront
of the latest darling new technology in ML research and cognitive computing?

If they are unable to attract talent and innovate in their core business, how
are they supposedly pursuing sophisticated AI, and the biggest question, is
why?

What other products or innovations have come out of IBM Research? What is
their overall reputation, and why should we believe them? Why don't they
release Watson to the world, like Microsoft did with their twitter bot?

If I were a recent grad, or even mid-level in my career, and wanted to work on
the most interesting projects I could, I wouldn't be going to IBM. My first
priority would be access to interesting and varied datasets, such as what can
be obtained at Facebook, Google, Amazon, or another such company. A close
second would be any of the players in the ML hardware industry, such as
Nvidia.

I don't understand what's so special about Watson; it all seems like marketing
BS to me, for a company in its death throes.

~~~
kryptiskt
"What other products or innovations have come out of IBM Research? What is
their overall reputation, and why should we believe them?"

1986 Nobel Prize in Physics for the Scanning Tunneling Microscope

1987 Nobel Prize in Physics for High-temperature superconductors

~~~
JoeAltmaier
...30 years ago. I'd suggest joining a research team that has done something
important since you were born.

~~~
NEDM64
87 is 29 years ago.

------
xrd
Funny that he was Chief Learning Officer at Trump University. Doesn't diminish
my feelings for him, but interesting.

~~~
c3534l
Well, I suppose if anyone can recognize a fraud when he sees one, it's a
former executive officer of Trump "University."

------
hammock
What if Watson analyzed 800 million pages of Dylan critiques and analysis,
instead of 800 million pages of lyrics? I bet you could get to the anti-
establishment theme. Maybe Watson was just given the wrong set of input data
(garbage in, garbage out).

~~~
rrego
The themes it produced were not inaccurate when it comes to Dylan. The vast
majority of his songs (and his most beloved works) are not protest songs.
Pretty much everything he did after Bringing It All Back Home is not a protest
song. Like A Rolling Stone is definitely not a protest song. In fact, most of
his work IS about relationships in some form.

I would disregard what the author has to say about Dylan, even though it seems
to be the author's primary example. Dylan wrote a ton of songs and encompassed
a couple of different personas through his career. He's not one thing.

~~~
rvense
Would Watson discover this, though? Even if you marked all the lyrics with a
year, would it be able to make that sort of inference? I don't think so; I
doubt it is able to form anything like a concept of time, or person, or of a
person changing over time, especially not from an input of song lyrics.

~~~
facepalm
Why not? It is very easy, for example:

    if (songtexts[1980].contains("love") && songtexts[2014].contains("war")) {
        print("Dylan changed his focus from love songs to protesting against war");
    }

------
tekni5
What is with the poor grammar and spelling errors?

"Recently they ran an ad featuring Bob Dylan which made laugh, or would have,
if had made not me so angry."

"Ask anyone from that era about who Bob Dylan was and no one will tell you his
main them was love fades."

"Dog’s don’s but Watson isn't as smart as a dog either."

etc.

~~~
Houshalter
The spelling errors confuse Watson.

------
drcode
I sure wish some mainstream journalists would look into the whole "Watson"
marketing campaign and apply some fact checking to it.

I like AI projects as much as the next HN reader, but compared to the efforts
of the other players in this space (Google, Apple, Tesla, Amazon, etc.),
whenever I hear a new marketing push about the IBM Watson project my "BS
detector" goes into the red zone.

(That said, it would be awesome if I'm wrong and IBM really is making some
genuine advances...)

~~~
guelo
Talking about over-hyped marketing, it's ridiculous to include Tesla with the
other companies in your list.

~~~
drcode
Actually I'm also a bit of a Tesla skeptic, but at least they are shipping
actual practical AI algos in their cars and don't walk around intimating
they're going to cure cancer.
[http://www.cio.com/article/2397103/healthcare/can-watson--
ib...](http://www.cio.com/article/2397103/healthcare/can-watson--ibm-s-
supercomputer--cure-cancer-.html)

~~~
guelo
Tesla buys that tech from some outside vendor.

~~~
exhilaration
Mobileye: [http://www.mobileye.com](http://www.mobileye.com)

"It is used by nearly two dozen automakers, including Audi, BMW, General
Motors, Ford, and Tesla Motors." \-- [http://fortune.com/2015/12/17/tesla-
mobileye/](http://fortune.com/2015/12/17/tesla-mobileye/)

------
ramanan
Reminds me of a similar article covering the work of Douglas Hofstadter, the
author of GEB :

The Man Who Would Teach Machines to Think

[http://www.theatlantic.com/magazine/archive/2013/11/the-
man-...](http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-
would-teach-machines-to-think/309529/)

------
koytch
Apparently Watson believes Mr Schank has 'mixed' feelings towards it and IBM.
Go to
[http://www.alchemyapi.com/products/demo/alchemylanguage](http://www.alchemyapi.com/products/demo/alchemylanguage),
feed it the article link and see what happens :)

------
ktRolster
_AI winter is coming soon._

An AI winter will come if AI isn't able to latch onto strong business cases.
For the past few years, we've seen a slow uptake of low-grade AI, for example
Siri. As long as those sorts of things continue, the winter won't have a
chance to set in.

It's true there is a lot of baseless hype around AI, but there's baseless hype
around every new technology (probably around everything that catches people's
attention). That said, if someone predicted that Watson is going to die, I
would believe them, because it doesn't seem to have gotten much business
traction at all.

~~~
reality_hacker
ML, as a part of AI, already drives major industries, like web search and
internet ads.

~~~
ktRolster
I think it's more accurate to say linear solvers drive internet ads. Which
could arguably be characterized as machine learning, I guess.

~~~
reality_hacker
Mostly logistic regression, and sometimes deep learning. Both are part of ML.
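
For the curious, the ad case is about as plain as production ML gets. A toy
sketch with scikit-learn; the data and feature names are invented for
illustration:

    # Toy click-through-rate model: logistic regression over a few
    # one-hot-encoded categorical ad features. Data is made up.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    impressions = [
        {"ad": "shoes", "site": "news", "hour": "evening"},
        {"ad": "shoes", "site": "blog", "hour": "morning"},
        {"ad": "insurance", "site": "news", "hour": "evening"},
        {"ad": "insurance", "site": "blog", "hour": "morning"},
    ]
    clicked = [1, 0, 0, 0]

    vec = DictVectorizer()  # one-hot encodes the categorical features
    X = vec.fit_transform(impressions)
    model = LogisticRegression().fit(X, clicked)

    # Predicted click probability for a new impression.
    new = {"ad": "shoes", "site": "news", "hour": "evening"}
    print(model.predict_proba(vec.transform(new))[0, 1])

Real systems differ mainly in scale (billions of rows, millions of sparse
features), not in kind.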

------
happycube
The ~1990 AI Winter came about because DARPA money dried up after the
(Pyrrhic?) failure of the Japanese 5th Generation Project and the end of the
Cold War. After that, not enough people could afford $xx+K LISP Machines
anymore ;)

As long as there's a continual source of money there won't be another one - so
basically, as long as companies like Google and Facebook are profitable, and
preferably there's money to be made elsewhere, things should be good.

~~~
Animats
No, the AI Winter came because expert systems did not do much.

I did a Masters at Stanford CS in 1985, and met most of the big names of that
era. Stanford CS was dominated by the expert systems crowd, headed by
Feigenbaum, and the logicians, headed by McCarthy. AI was being taught almost
as philosophy. (Exam question: "Does a rock have intention?") You could
graduate without ever seeing an expert system run, let alone writing one. Very
little actually worked. But the faculty was claiming Strong AI Real Soon Now.
From expert systems, which are just rules you write and feed to a simple
inference engine. You get out pretty much what you put in. It's just another
way to program.

Feigenbaum was running around, testifying before Congress that the US would
become an agrarian nation unless Congress funded a big national AI lab headed
by him. Really. There were a number of AI startups, all of which failed. There
was a fad for buying Symbolics 3600 LISP machines, a single user refrigerator
sized box with a custom CPU.

None of this delivered. That's why there was an AI winter.

~~~
lispm
In 1985 much of the money going into Symbolics came from DARPA funding -
direct or indirect. Strategic Defense Initiative, Strategic Computing, etc. Up
to then almost all machines were sold into government projects.

That was a peak year for Symbolics.

In 1986 a Symbolics 3620 was about the size of a large tower PC.

[http://bitsavers.trailing-
edge.com/pdf/symbolics/brochures/3...](http://bitsavers.trailing-
edge.com/pdf/symbolics/brochures/3620_1986.pdf)

------
matchagaucho
_AI winter is coming soon_

It would appear so, IBM hype aside. From chatbots to image recognition to
playing Go, the media is having a field day with the AI theme.

If this hype feeds investor and consumer expectations, the next round of AI
startups is doomed to underperform.

------
tps5
I agree that there's a lot of AI hype and I suspect that we won't see all that
much come out of it.

At the same time, there's a bit of a cop-out that goes on when we privilege
our own cognitive processes over those of an AI just because our own minds
are, more or less, a black box.

I think he does a lot of that in this article. At the end of the day "human
intuition" is just a filler until we figure out what's really going on.

------
DSingularity
The most interesting statement is the last one. The author seems to think we
are about to enter another AI winter.

Seems odd given AlphaGo and the recent success of deep learning.

~~~
drcode
Yeah, it seems pretty unlikely to me that there is an AI Winter coming, given
that we now have programs that look at a photograph and say "Woman wearing a
hat, sitting on a bar stool and drinking wine", when 5 years ago such
capabilities were unfathomable. The kind of capabilities that are currently
being demonstrated have wide-reaching applications and will take a decade to
filter into the rest of the economy, even if you pessimistically assume that
all research from now on reaches a complete standstill.

~~~
discardorama
> given that we now have programs that look at a photograph and say "Woman
> wearing a hat, sitting on a bar stool and drinking wine"

That may be; but then there's also this:
[http://arxiv.org/abs/1412.1897](http://arxiv.org/abs/1412.1897)
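
That paper evolves images that are unrecognizable to humans yet get classified
as familiar objects with high (often >99%) confidence. The root issue is
visible even without their evolutionary method; a toy sketch, not the paper's
setup:

    # Not the paper's method, just the underlying issue: a classifier
    # has to emit *some* label for any input, even pure noise.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    digits = load_digits()  # 8x8 digit images, pixel values 0..16
    model = LogisticRegression(max_iter=1000).fit(digits.data, digits.target)

    noise = np.random.RandomState(0).uniform(0, 16, size=(1, 64))
    probs = model.predict_proba(noise)[0]
    print("label:", probs.argmax(), "confidence:", round(probs.max(), 3))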

~~~
taneq
So? Any generalizing learning algorithm necessarily accepts a large number of
inputs (including some unintended ones) for each possible output.

Humans are no more robust against this kind of attack than any other system;
consider stage magic, confidence scams, NLP (the neuro-linguistic programming
kind), optical illusions, etc.

If you want a truly infallible system, then no, AI will never provide that.
What you want is a magic deity.

------
dingo_bat
>Suppose I told you that I heard a friend was buying a lot of sleeping pills
and I was worried. Would Watson say I hear you are thinking about suicide?
Would Watson suggest we hurry over and talk to our friend about their
problems? Of course not.

Although the author may be right overall, this paragraph certainly assumes a
lot, and is probably wrong. Computer systems have been able to make such
correlations for some time now.

------
digital_ins
I loved this article. It's not just the term "AI"; I've seen startups abuse
the terms "machine learning" and "big data" to such an extent that it
literally makes me cringe when I hear them.

How many times have you seen a TechCrunch article where the writer parrots the
buzzwords the founder has thrown at them, such as "x uses machine learning to
sync your contacts with the cloud"?

------
bitmapbrother
Usually when you call someone a liar you present plausible proof. No proof was
presented. Instead all we got was an opinion of Watson's cognitive abilities
from a former Trump University employee.

------
headShrinker
I get his cynicism, from an idealist engineer's perspective. It's a problem
for anyone with applicable knowledge who meets a marketing/branding agency.
Watson was a new tool that could play Jeopardy, and IBM needed a way to sell
the heck out of it. Branding Watson as AI is the act of an increasingly
desperate corporation.

While true AI is a decade or two off, each AI wave displaces an increasing
number of human jobs. This next wave promises to be devastating to human labor
and a boon for machine productivity. The effects are real even if the
intelligence isn't. When true AI is birthed, it won't need to be marketed. All
that will be left are a few trillionaires, and food lines for the rest of us.

About the future: "Wealth will be based on how many robots you own and
control."

------
facepalm
His argument is that Watson supposedly doesn't have an opinion on ISIS. While
I don't know if that is even true, it seems like a very weak argument. Even if
it could only "think" in a very limited domain, it could still be useful.

The author mentions himself that today's 20somethings have never heard of Bob
Dylan, yet uses Watson's alleged ignorance of Dylan to dismiss it. Yet
20somethings are thinking entities.

Mostly it sounds like sour grapes, because his 1984 book didn't receive the
recognition he thinks it deserves.

------
YeGoblynQueenne
>> AI winter is coming soon.

Not really. There's a lot more private funding for AI nowadays and a lot of
research is happening in the industry, rather than in academia.

Machine learning is not cognitive computing, as Roger Schank puts it, but it's
championed by most of the large tech corps (count them: Google, Microsoft,
Facebook, IBM, Baidu). Those folks have the money to keep the spring going for
a long, long time, much longer than last time.

Just being indignant is not going to achieve anything. Roger Schank and those
of us who think he's more or less right in spirit (if not in tone) have a very
simple way to prove his point: make it all work. Show why Good Old-Fashioned
AI is better than machine learning for achieving the goals it set itself back
in the early days.

But we've not been able to do that. That's the fault of the people, like Roger
Schank, who started various parts of the original AI project and failed to
take it to completion. Again, and again, and again.

Google, IBM and the rest will do what they need to do to keep the money coming
in, and they'll fund a lot of research that way. The rest of us can suck it
up, or come up with something that works better. No one's stopping us.

------
lwall_mba
Two points come to mind when reading the article and the comments made here.
One, humans have personalities based on dualistic and conflicting emotions.
Only humans can love and hate the same individual at the same time. AI is
focused on mimicking some behaviors based on stimuli, but behavior is not a
personality in action. Personality is much more complex. Some Psychology 101
dispels all the confusion around this. Two, the challenges around language
recognition can't be solved by programming unless one solves meaning.
Linguists have struggled to define meaning for decades; the best definitions
explain that meaning fluctuates culturally, historically, politically, and to
some extent by the individual.

------
mathattack
_People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat._

Am I the only shmuck who thought Siri came from Apple, not Google?

~~~
rdtsc
Yeah to me it seems the author is angry and confused. Not sure why the article
is upvoted that much. I guess people just liked the clickbaity title.

~~~
mathattack
The author used to be a big name in AI and learning but now sounds like
someone complaining about going to and from school uphill both ways in the
snow.

------
nxzero
Really dislike the IBM meetups; they're full of marketing speak and people who
have no idea what a load of BS the "sponsor"-heavy presentations are. Meetup
should ban groups like this.

------
ams6110
_They are not doing "cognitive computing" no matter how many times they say
they are_

Maybe not, but "cloud computing" is getting a little pedestrian. New
meaningless names used to market things we've been doing all along are a good
way to gin up some interest.

------
pavedwalden
Well, this makes me feel a little better about never being able to make sense
of IBM's Watson advertising. Even after checking out their site, I couldn't
figure out what it was _for_, much less what was under the hood.

------
leoh
I've been impressed by Watson/Bluemix and I think IBM is on an interesting
track. Marketing is not always effective at conveying the particular and
stunning work that engineers, such as those at IBM, accomplish.

------
phtevus
I couldn't concentrate on what the article was saying because there were so
many grammatical errors; I kept catching myself re-reading sentences,
replacing phrasing or inserting missing words.

------
sdneirf
I have always been confused too. Their approach is old-school HMM/GMM and hard
bulldozing. Not state of the art anymore.

------
ai_ja_nai
What do you expect from somebody who manages a unit called "branded content
and global creative"?

------
thesrcmustflow
I was wondering how long it would take before someone actually called IBM out
on this.

------
balx
Business Area Limited UK is seeking to expand its investments into innovative
computer software projects to turn over about 78 Million USD in medical device
, computer development and biotechnologies. If IBM is accepting external
investment portfolios.

tacos
Microsoft SQL Server added various machine learning primitives to its SQL
dialect. So not only can you query and summarize past data; now you can select
from the future as well. Bayesian, NN, clustering, the same old flower-
matching demo: it's all in there. If you can jam enough numbers in, you can
certainly handwave that you're getting insight out.
[https://msdn.microsoft.com/en-
us/library/ms175595.aspx](https://msdn.microsoft.com/en-
us/library/ms175595.aspx)

Of course, basic data mining is certainly not where the latest research is,
but it covers many of the cases I see on HN or talked about in big-data pitch
decks. Regardless, it all seems a lot less fancy when you can get the job done
issuing SQL commands that wouldn't confuse anyone who learned SQL in 1978. The
whole thing is oversold and now largely commodified, to boot.
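
Outside SQL, the commodification is just as stark; the old flower-matching
demo is a handful of library calls in Python now (a sketch, using
scikit-learn's bundled iris data):

    # The canonical flower-matching demo, commodified: fit and score a
    # classifier on Fisher's iris data in a few library calls.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier().fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))  # typically ~0.9+ accuracy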

If and when this stuff starts to show real results you'll certainly feel it.
The first wave of successful connect-the-dot bots will open up so many
discoveries that opportunities for human labor will swell. But it's not chess,
Jeopardy and a way to mine medical records. That's all obvious corporate
bullshit.

------
ACow_Adonis
A pity? IBM is DIRECTLY responsible for me not doing so any more. I've just
given up on it because you (IBM) waste my time and insult my intelligence.

And I sought you guys out, and your company pissed in my face!

If every time you invite people to your party you serve nothing but turd
sandwiches, don't bitch about how we didn't give you credit for the
croquembouche you've got in the fridge out the back...

~~~
dang
This comment breaks the HN guidelines. Please (re)-read them and only post
comments that are civil and substantive from now on.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

[https://news.ycombinator.com/newswelcome.html](https://news.ycombinator.com/newswelcome.html)

We detached this comment from
[https://news.ycombinator.com/item?id=11751662](https://news.ycombinator.com/item?id=11751662)
and marked it off-topic.

------
hackney
But I saw it working in a Bruce Willis movie. I think he was a hitman, lol.
Which is exactly my point. Computers will never be any smarter than the EXACT
commands we as their 'creators' give them. In essence they will never be
anything but an extension, a tool, of ourselves. But to say they can think?
Not a chance in hell.

~~~
eru
> Computers will never be any smarter than the EXACT commands we as their
> 'creators' give them.

You never tried debugging, did you? (I.e., computers can do surprising things,
even when they do exactly as we tell them.)

Also, look at AlphaGo (or any modern chess engine): these programs play better
than their programmers could.

~~~
hackney
When human beings can be defined by the rules and limitations of a chess
boardgame, I will agree. That will of course never happen.

