
Layoffs at Watson Health Reveal IBM’s Problem with AI - amynordrum
https://spectrum.ieee.org/the-human-os/robotics/artificial-intelligence/layoffs-at-watson-health-reveal-ibms-problem-with-ai
======
ryanmcbride
I've never interacted directly with IBM, but I remember about 2 years ago,
they had a talk at the Akamai Edge conference about how they dealt with
bloated cookies. The company I work for has the same problem so I sat in to
see what their solution was.

The problem they were having is that all the various IBM lines of business
added so much garbage onto the client's cookie, that eventually their pages
would stop loading because they wouldn't be able to parse the cookie.

The 'solution' is to detect when that's about to happen, and redirect the
client to a page that warns them that their cookie is too big (because IBM
made it too big) and give them a button to delete their cookie. They then
continue to start over and stuff more garbage into the fresh cookie.

That kind of problem solving pretty much made me lose faith in them
successfully doing much of anything anymore.

~~~
stefs
had to deal with ibm once. we were developing an application for a customer
and ibm was working on some new service for them. of course the customer
insisted we use the new ibm service, even though it solved a problem we didn't
have at all. the ibm product was a website to be used by end users, while our
application had to pass our end users data through. so, i asked the ibm guys
(via the customer) to give me the docs for their json interface, as i don't
necessarily want to parse HTML inside the application to exchange data. there
was no json interface - html was all they had - but they got to work
implementing one.

what i got was:

    
    
    {"elem": "html", "children": [
        ...
        {"elem": "form",
         "attrs": {"action": "/an_url", "method": "post"},
         "children": [
             {"input": {"attrs": {"type": "submit", "value": "..."}}}
         ]},
        ...
    ]}
    

yes, their json interface was the DOM, serialized as json. but it gets better.
to submit my data i had to fill in the actual values in their json struct at
the appropriate nodes and send it back.

i'm a) pretty sure this "json interface" feature cost the customer more than i
make in a year, and b) it probably broke the very instant the customer let a
frontend/designer guy change the html code months later.
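for contrast, a conventional json interface for that submit action might have accepted a POST to /an_url with a body along these lines (field names invented for illustration - the point is that the payload describes the data, not the markup):

```json
{
  "form": "submit",
  "values": {
    "field_a": "...",
    "field_b": "..."
  }
}
```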

~~~
ploxiln
Ah such classic outsourced third-world contractor output. They are told what
to do (by some PM who has no idea) and they do it. If they ever say "uh this
doesn't make sense" they are cheaply replaced. I've seen some of this nonsense
(though not IBM related):

    
    
        strncpy(dst, src, strlen(src));
        dst[strlen(dst)] = '\0';

~~~
zeusk
> dst[strlen(dst)] = '\0';

That line alone is cancerous.

~~~
ggg9990
Not a programmer. What’s so bad about it?

~~~
jonathankoren
It doesn’t make any sense.

The way a string is terminated in C is with a null character; ‘\0’ is the
mnemonic for it. strlen() returns the length of a string. It does this by
doing a linear scan down the array until it finds the null, and then returns
that index as the length.

That line of code does nothing useful. It means, “scan through the array until
you find a null, then write a null there.” Which is exactly what was there to
start with.

So why would someone write this? Well, they were told to make sure that you
always terminate your C strings with a null, otherwise you’ll run off the end
of the array and corrupt memory. That’s true, but it isn’t how you find the
end of the array. They should have used the count from strlen(src), not
strlen(dst). Worse still, with a count of exactly strlen(src), strncpy() never
writes a terminator into dst at all, so the strlen(dst) on the next line scans
past the copied bytes into whatever garbage happens to follow them.

The line belies any sense that the programmer understood what the code was
actually doing. It’s an amateur mistake.

~~~
zeusk
using strlen(src) is also an amateur mistake, because the whole point of doing
that (the strncpy family, ensuring dst ends with a null) is that dst might not
be large enough to fit the string from src.

It has to be the size of dst, which can only be known at the site of
allocation in C (except for a non-clean way of looking at heap metadata).

~~~
jonathankoren
There’s no reason to believe from these two lines that the memory is
insufficiently allocated. It’s just two lines. The point wasn’t to share a
safe string copy routine, the point of the post was to illustrate a single
mistake.

~~~
zeusk
I'm not assuming that, but then you can't assume the memory is sufficient
either. If the string is from network or user, then you can never assume the
buffer to be of sufficient size (which they definitely do, because they aren't
tracking how much of src was copied into allocated dst).

------
glup
"Offering managers didn’t have technical backgrounds and sometimes came up
with ideas for new products that were simply impossible."

Sounds like they drank their own kool-aid, e.g., "Products That Enhance and
Amplify Human Expertise," rather than understanding the actual limitations and
possibilities of ML. And it seems to me that they're still doing it with this
nonsense about a human-level "AI" debating stack.

The oversell seems a real shame in light of how much good can be done with EMR
and machine learning / NLP.

~~~
krona
That's surely part of the problem, but the catalyst is the marketing strategy
that is used to brainwash the employees. In essence: sell the experience,
_not_ the product.

This works well for IBM generally (the products are shit) but especially well
for Watson because it's extremely easy to sell AI without getting bogged down
in details. You want to identify brain tumors? We'll just teach Watson to do
it.

Whilst IBM research might be able to pull it off, it'll never get to market
because there is nobody capable of making good _products_ at IBM anymore.

~~~
cirgue
> This works well for IBM generally (the products are shit) but especially
> well for Watson because it's extremely easy to sell AI without getting
> bogged down in details.

The cynic in me says that every use of the term AI in any capacity is to sell
experience and not functionality. When was the last time you used a product
billed as 'AI' and thought 'wow, this is a huge game changer'? Siri is cool,
but it's ultimately not super useful. Google translate is incredible, but it
can only do what it can do because of the absolutely mind-boggling amount of
training data that google can access. Most disciplines have the problem of not
enough data, despite what 'big-data' folks say. In contrast, humans can
extrapolate and make reliable predictions about the future based on really
small sample sizes. We can pick up a new skill or recognize a new pattern with
a high degree of accuracy really effing fast compared to a computer. This
gives humans an enormous advantage. If IBM and anyone else in this space were
really focused on delivering excellent real-world results, step 0 is building
out world-class data integration and search tools (which we still actually
suck at, weirdly).

~~~
nostrademons
I use & depend upon plenty of products that are built upon AI - GMail spam
filtering & categorized inbox, Google image search, YouTube & Netflix
recommendations, cheque OCR at my ATM, predictive keyboards on my phone,
Amazon's "people also buy with this product" feature, Google translate,
computer opponents in games that I play, and all of the signals that feed into
Google Search.

The irony is that not one of these bills itself as AI. It's just "a product
that works", and the company that produces it is happy to keep the details
secret and let users enjoy the product. So you may be right that the term "AI"
itself is pure salesmanship. When it starts to work it ceases to be AI.

[https://en.wikipedia.org/wiki/AI_effect](https://en.wikipedia.org/wiki/AI_effect)

Also - humans only look like we're fast at picking up new domains because we
apply a helluva lot of transfer learning, and most "new" domains aren't
actually that different from our previous experiences. Drop a human in an
environment where their sensory input is truly novel - say, a sensory
deprivation tank where all visual & auditory stimulation is random noise - and
they will literally go insane. I've got a 5-month-old and a project where I'm
attempting to use AI to parse webpages, and I will bet you that I can teach my
computer to read the web before I can teach my kid to do so.

~~~
cirgue
None of the things you mentioned are even close to AI. They’re applied
statistics, and they mostly use techniques we’ve known about for decades but
have only now found a use case because computing and storage is cheap enough
to make them viable.

~~~
nostrademons
The recommendation, translation, & image classification algorithms are all
done with deep-learning; that's considered AI now.

There was a time, not all that long ago, when SVMs, Bayesian networks, and
perceptrons were considered AI. That's behind the spam filters, predictive
keyboards, and most of the search signals.

There was a time, a bit longer ago, when beam search and A* were considered
AI. That's behind the game opponents.

As the linked Wikipedia article says, "AI is whatever we don't know how to do
yet." There will be a time (rapidly approaching) where deep learning and
robotics are common knowledge among skilled software engineers, and we won't
consider them AI either. We'll find something else to call AI then, maybe
consciousness or creativity or something.

~~~
cirgue
This is my point: the term AI has always been BS. It was BS when beam search
was AI, it was BS when expert systems were AI, and it is equally BS when
applied to neural networks. It comes to the same thing: the 'AI' tools we use
are increasingly good function approximators. That's it. It's still reaching
the moon by building successively taller ladders.

~~~
hetzeljt
I think Judea Pearl would agree with you in part. From an interview in
[https://www.theatlantic.com/technology/archive/2018/05/machi...](https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/) :

 _As much as I look into what’s being done with deep learning, I see they’re
all stuck there on the level of associations. Curve fitting. That sounds like
sacrilege, to say that all the impressive achievements of deep learning amount
to just fitting a curve to data. From the point of view of the mathematical
hierarchy, no matter how skillfully you manipulate the data and what you read
into the data when you manipulate it, it’s still a curve-fitting exercise,
albeit complex and nontrivial._

And

 _I left the arena to pursue a more challenging task: reasoning with cause and
effect. Many of my AI colleagues are still occupied with uncertainty. There
are circles of research that continue to work on diagnosis without worrying
about the causal aspects of the problem._

------
cubano
Isn't this just the same problem IBM has had for...well...most of my adult
life?

I had a contract gig at IBM Advanced Technology in Boca for about 18 months in
early 2001-2002. Talk about missing the boat...

I was brought in to prop up a soon-to-be-failed project for the Japanese
government...basically a "Napster for Tokyo" that would allow _paid-for-play_
C2C song sharing for customers of "the Big 5" record companies.

I asked simple questions that no one could answer...why would people pay for
content when it was so easily available via other means? you are using DRM
how??? really? you need a special player to play the music?

 _Why would anyone do that?_

I stayed for a few simple reasons...the fat consultant check I cashed every
Friday. Exposure to some outstanding engineers and coders where I got to learn
from true talent. The great strip club on A1A next to my rental in Lauderdale-
by-the-Sea.

But reading over this article reminds me that IBM is just too big to get out
of its own way, and has been for the longest time.

[edits]

~~~
bluetwo
Getting a nice check while doing nothing actually productive is soul crushing.
Good for the short term but it's a Chinese water torture in the long run.

Two years ago at their vegas conference they had a coffee shop that used AI to
recommend coffee types. I thought "boy, they don't understand this
technology".

~~~
pinewurst
Many years ago, when AI was expert systems and "neural networks" were fringe,
the main demo for one of the public expert system leaders was the Wine
Advisor. You'd tell it what you were going to eat and it would recommend a
wine.

~~~
numbsafari
And behind the scenes it was probably the equivalent of flipping a coin
between red and white.

~~~
pinewurst
It was totally rule-based. More complex systems had a little more
probabilistic stuff via Bayes and "certainty factors", but not this one.

I worked on another one for this company called Vibration Advisor which
diagnosed odd noises in GM cars.

~~~
kprybol
Being rules based isn't necessarily a bad thing or disingenuous. I develop
healthcare AI products (ML/DL researcher) and we actually aim to be able to
translate our models into a rules based engine (find a strong signal,
interpret/understand model well enough to translate/embed into a rules engine,
look for a new signal in our models, rinse + repeat). We end up deploying a
mix of rules based and true ML based models into production but it may not be
immediately obvious to the end user which type of model they are using.

~~~
pinewurst
I didn't mean it as being disingenuous - that's precisely the value that was
sold and if you could do the proper "knowledge engineering", it worked well.
It's just interesting to me having seen the previous turn of the AI hype
wheel, how much is being repeated.

Another interesting thing was the transition from special purpose hardware -
Lisp machines - to C code on commodity platforms. A contrast from today's ML
moving in the other direction.

~~~
kprybol
That's fair. Google's recent paper on predicting patient deaths is another
good example of this (logistic regression + good feature engineering performed
just as well as their deep learning models, and the logistic regression has
the added benefit of being significantly more interpretable and as a result,
actionable).

It'll be interesting to see when specialized ML-focused silicon will become
readily available. Right now I find ML libraries that are able to run on
blended architectures (any combination of CPUs and GPUs) much more
exciting/impactful than TPUs. The ability to deploy on just about any cluster
a customer may have available is huge.

~~~
fjsolwmv
In the near future customers won't have clusters; cloud providers will offer
elastic, adaptive compute sharing.

~~~
kprybol
From my experiences (currently work with several Fortune 100 health
insurers/benefits managers, and have previously worked for another large
insurer, a major academic medical center, and a large pharma company),
healthcare organizations tend to be rather cloud averse (most of our
contracts very explicitly forbid us from using any form of 3rd party cloud
computing). So while I agree that much of the heavy lifting will shift to the
cloud (or already has), I expect health analytics will continue to favor on-
premises solutions (GPUs still tend to be pretty rare compared to CPU-based
clusters but are slowly becoming more common).

------
brootstrap
This stuff is the friggen worst if you ask me. I'm so done with the AI hype
train, and IBM is the worst of it. The linked article mentions a 'Watson's
law' (similar to Moore's law, etc.). If you ask me, Watson's law is more
likely to end up being that every commercial BigCo 'AI' offering burns through
hundreds of millions and ultimately fails, rather than the intended meaning.

"Phytel’s contribution was analytics paired with an automated patient
communication system. A clinic could use the system to search its patient
records and find, for example, all the men over age 45 who were overdue for a
colonoscopy, and then use an autocall to remind them to schedule the dreaded
appointment"

This shit isn't AI; it's literally a database query plus some 3rd-party
library to send a text message or make a phone call.

~~~
cirgue
Watson's law: as the complexity of technology and business processes
increases, the amount of time it takes for people to recognize and acknowledge
that the emperor is not in fact wearing clothes increases in proportion to the
profitability of the lie being sold.

~~~
scardine
"The Emperor's New Clothes" are made of the finest blockchain silk adorned
with golden AI brocades.

------
cs702
The problem at IBM is not technological; it's _managerial_.

For a long while now, IBM has been treating "AI" as a product that can be
managed, packaged, and sold by "general" business managers -- think MBA-types
with only a superficial, qualitative grasp of deep learning and AI. Doing that
with rapidly evolving technology is a _sure-fire recipe for failure_.

Most such MBA-types today are _ill-equipped_ to manage, package, and sell
"AI." They're roughly in the same position as English or History majors who
are asked, say, to manage, package, and sell a new kind of quantum-computing
technology without knowing or understanding much about quantum physics. The
technology is moving faster than their ability to keep up.

IBM's mismanagement is a shame, because the system they showcased nearly a
decade ago -- the one that competed and won in Jeopardy -- was state-of-the-
art at the time.

~~~
erikpukinskis
As of 2018, after seeing how terrible “engineer-types” can be at engineering
management, the MBAs are starting to look better to me.

~~~
airstrike
As an MBA who browses HN, I'm torn between the two opinions

~~~
jeffjose
As an engineer who has an MBA from HSW, I'm even more torn.

~~~
cs702
erikpukinskis, airstrike, jeffjose: I did not criticize MBAs _in general_!

My comment mentioned specifically "MBA-types with only a superficial,
qualitative grasp of deep learning and AI."

MBAs who understand what they're managing (and who know what they don't know)
are not in that group. And BTW, I suspect most MBAs who read HN are not in
that group either :-)

------
dmix
> “They couldn’t decide on a roadmap,” says the second engineer. “We pivoted
> so many times.”

> Both Phytel engineers say the offering managers didn’t have technical
> backgrounds and sometimes came up with ideas for new products that were
> simply impossible.

The death knell of all (potentially) good products. I don't know why this is
so often the case. All software companies need engineers involved in product
development decisions. Period. It's not optional.

Facebook was smart about this. They hired or retrained technical people to
fill many business roles in marketing, product development, project
management, etc.

I'm not sure why technical people are restricted to merely being the builders
in these companies. Lots of other companies recruit internally from people
familiar with the end product and train them in other business areas.

> these potential customers weren’t impressed. Instead they asked for
> something resembling Phytel’s old system.

So they simply imagined a new product without interviewing potential customers
beforehand on what they actually want? They spent years merging databases of
two big systems, pivoted multiple times, to find out there wasn't a market for
it in the first place?

Why aren't the 'offering management' people getting fired?

------
aresant
A little case history - there were two illuminating threads ~10 months back
where several current & former IBM employees commented on the growing
disconnect between the reality and the marketing of Watson - they look
vindicated by today's news:

(1)
[https://news.ycombinator.com/item?id=14979642](https://news.ycombinator.com/item?id=14979642)

(2)
[https://news.ycombinator.com/item?id=14766793](https://news.ycombinator.com/item?id=14766793)

~~~
amptorn
I've been saying Watson is a red herring for years:
[https://news.ycombinator.com/item?id=11262397](https://news.ycombinator.com/item?id=11262397)

------
apo
The health care landscape is strewn with the wreckage of software companies
who thought the latest shiny software doodad could cause a "disruption" like
it had in so many other industries.

People who know that healthcare is different try to warn them. They don't
listen. Instead they charge in with people who have no experience in the
field.

From the article:

 _After the acquisition, IBM management started the process known internally
as “bluewashing,” in which an acquired company’s branding and operations are
brought into alignment with IBM’s way of doing things. During this
bluewashing, “everything stopped,” the first Phytel engineer says, and the
workers were told not to focus on improving their existing product for current
clients. “People were sitting around doing nothing for almost a year,” the
second engineer says._

~~~
TheM00se
Then Theranos decided to jump onto that bandwagon and make a dumpster fire.

------
chefandy
IBM's organizational structure is book-ended with great talent. The engineers,
developers, and even front-line managers are really fantastic, and the people
at the very top are pretty good.

In the middle, there are 100 layers of middle managers that completely cock
everything up, and the really sad part is that they have enough say to really
cause damage. One of my first proper white collar technical jobs with them was
an L2 support job for this network performance monitoring suite for huge
networks... mostly large, national ISPs and the like. The job required maybe a
just-post-jr-level sys-admin knowledge of networks and UNIX systems while also
having smooth customer service skills. Definitely a great step up from my
previous lower-mid-level IT jobs and call center work.

I had three (3) managers. Three! I had a technical manager, a non-technical
manager, and my actual manager, who was the head of the department.

At the highest levels, the management was talking about switching everybody's
workstation over to Linux. Everybody from admin assistants to developers to
managers was supposed to be moved off of Windows at some point in the
relatively near future. I was psyched— I hated windows, and the product I
supported ran on Solaris, so not having to deal with the extremely primitive
(at the time) tools like Cygwin to get some UNIX functionality on my machine
was great. They seemed to be positioning themselves to sell the consulting for
other large companies to do the same thing.

Though we got no word of this internally— I only knew from what I had read in
articles— I found the internal workstation disk image on the intranet and
eagerly installed it. It was pretty smooth! I was excited! As I was getting my
tools set up, I noticed that it didn't have the internal bug/ticket tracking
clients installed, so I cruised on over to their intranet page... hmmm,
nothing listed for Linux. After hours of searching, I found some internal
discussion showing that, months earlier, the department that writes that
software unilaterally decided that they were discontinuing their initiative to
port those applications to Linux. While there was an extremely limited CLI to
these tools, critical functionality was literally impossible without the GUI
app. Without the ability for anybody on their Linux workstations to interact
with tickets or bug reports, the Linux initiative was pretty much dead-in-the-
water for most technical people and their managers.

Perfect example of just how badly their forest of middle managers completely
messes up great executive initiatives that the bottom of the food chain really
wants to embrace.

(I might have gotten some of the details wrong. It was 13 or 14 years ago and
I drank a lot back then.)

------
alistproducer2
I'm in my final week at a large company and so much of this article rings true
for me as well. I feel like the structure required to coordinate really big
companies has some really negative emergent properties that make it very
difficult for such a company to be efficient and innovative.

Look at a company like Google, which, without a doubt, has some of the best
engineers in the world. How many false starts and just flat out poorly
executed projects/products have they had in the last 10 years? Way more than
you would expect from a company that puts such a premium on hiring the best.

------
crsv
It only reveals IBM's problem with AI if you were at any point under the
impression that it was something more than marketing for them; anyone watching
the actual market signals knew better.

The audience that gobbles up their ad campaign during the Masters touting
their "block chain" logistics probably wouldn't even notice that there were
layoffs at Watson Health.

------
phamilton
Is Watson a codebase? Or is it just a brand?

It seems to me that Watson is basically just IBM's version of AWS/GCE services
(at least the non infra ones). But it gets thrown around as a buzzword so
often. The marketing makes it look like there's a single AI codebase that can
be accessed through a bunch of APIs, but I would be very surprised if that was
actually the case.

~~~
mindcrime
_Is Watson a codebase? Or is it just a brand?_

Both, sort of. There is a "thing" called Watson, which is related to the
Watson that played Jeopardy. But "Watson" is also a brand which lumps in stuff
that has absolutely nothing to do with the "old" Watson.

To illustrate a bit.. "Watson Health" is (or was) made up of a ton of people
and technologies who came into IBM as the result of several acquisitions:
Truven, Phytel, Explorys, etc. In many cases, they repackaged stuff from those
vendors, gave it a "Watson name" and shipped it. And some of this stuff was
literally no more sophisticated than linear regression / logistic regression,
etc.

~~~
vanadium
"IBM Watson Marketing Automation" is another I was made painfully aware of
recently; it was, at least in part, the result of the Silverpop acquisition.

------
manigandham
Watson seems to be hyped as powering the entire world but I have yet to see a
real project using it in any capacity. I haven't even heard a cohesive
description of what Watson even is, beyond surmising that it's a suite of AI-
like services, although any APIs seem to be hidden within the broken IBM cloud
interface, perhaps on purpose.

------
lacker
This sounds like a case where "AI" is used in a marketing way to make people
interested in a product; there isn't really much AI involved, and the people
developing the AI struggle to prove that it's relevant to the business.

I wonder if DeepMind at Google has a similar problem. It is certainly getting
a lot of headlines, but there are plenty of other AI groups within Google that
do business-relevant things like improve search or ad matching or make Google
Home's voice recognition work. I would not be surprised if in the long run
DeepMind becomes a group that performed a neat stunt with Go, but kind of
fades in practical relevance, like Watson with Jeopardy.

~~~
kprybol
I've always viewed DeepMind as more of a skunk works program and less as a
profit driven enterprise. DeepMind exists primarily to push the limits of what
can be done when you put group of leading researchers together in a room,
provide them with nearly limitless resources, and simply tell them to "go". I
expect some of that effort to eventually trickle down into Google's consumer
products (maybe a healthcare focused version of AutoML
[https://cloud.google.com/automl/](https://cloud.google.com/automl/)). Google
has already done a lot of work on the HIPPA side of things
([https://cloud.google.com/security/compliance/hipaa/](https://cloud.google.com/security/compliance/hipaa/))

------
m15i
IBM Watson is bad for the "AI" community because when non-experts see IBM
repeatedly fail they assume the whole field is nonsense. Hopefully IBM will be
more cautious in what they claim to be possible. Hype and deceit do not belong
in healthcare.

~~~
gaius
IBM literally does not care about “the AI community”. They will strip-mine the
AI hype then move on.

------
OliverJones
IBM. Twenty years ago my late father-in-law had his ancient DisplayWriter
(1st-gen word processor) break down.

He called his local sales office. Somebody said, we've got a couple in a
storeroom someplace. I'll bring one over. No charge.

That was customer service. That's how they built their reputation. Now they
seem to be squandering it.

Sounds like they're headed now in a direction where they sell their artificial
intelligence as being smarter than their customers. Sounds like they insist on
disrupting their customers rather than their competitors. That Doesn't
Work(TM).

Every time somebody does that to a hospital it gets harder for other vendors
to sell actually-useful stuff to health care operations. Not good.

------
hellofunk
I'm going to play devil's advocate and call it:

Self-driving cars have killed pedestrians, Watson isn't doing all they
wanted... is this the dawn of a new winter?

~~~
BigChiefSmokem
I found out that Tesla's autopilot technology can't even reliably detect
stationary objects. Admittedly this is technically hard, as it's really a
frame-of-reference physics problem, but the writing on the wall is clear:
marketing departments are overpromising and no one is really delivering big on
AI, not even Google, as a lot of other companies have already matched their
efforts (Microsoft, Waze, etc.). IBM? Give me a break.

Anyways, may I interest you in a cheap VR headset?

------
abdulhaq
They should have asked the AI how to save their jobs. Unless of course it's
not really an AI, but just marketing hogwash. Surely not!

------
jacobsenscott
Winter is coming. And none too soon. So tired of AI headlines.

~~~
randomsearch
It would be really great if we could somehow hold those overselling AI to
account.

This may be possible this time round, because we’ll have a very good record of
who said what and when via the web.

Without any kind of accountability, history will continue to repeat itself.

How about hyperbole.com, where you can google academic researchers and
industry leaders and pull up quotes from them, dated and fact-checked.

I’m sure you must be able to train a deep net to do this. They can do
anything.

~~~
pessimizer
Pundits who amplify the current bullish buzzwords will never be punished,
which is why they do it.

------
warrenm
Does it really qualify as "news" any more when there are layoffs at IBM?

~~~
stephengillie
Who makes money by pointing out flaws at IBM? Their stock on NASDAQ is 139.31
as of Jun 25 12:58 PM ET. Is there a large enough short position to be worth
buying a news article?

~~~
downrightmike
Already priced in, and layoffs have the opposite effect.

------
eksemplar
I’m not too surprised; they come by our shop around once a year wanting to
sell Watson Analytics, and because ML is a buzzword in the political layer
(our upper leadership) we politely listen.

It’s not really great. They can automate the process of finding reports, but
the truth is, we have people already doing that and all the reports they’ve
been able to find in their POCs were either useless or some we already had.

I’m sure ML has potential, but our analytics department is run by economists
and political scientists who can’t really utilize ML, and I’m not sure they
could be retrained, even if they wanted to, which they don’t.

We can’t really hire ML personnel either. The only people who do it well
enough are PhDs, and they have much better offers, and we’re at least 5 years
away from having a candidate pool of people with the right education mix (data
+ political science) and probably 10 years away from figuring out how to
utilize them, as well as finding enough funds for a position.

So ML is mostly left to independent companies that cooperate/consult with
municipalities on projects owned and run by the municipalities. IBM simply
doesn’t do this, and the consultant companies that don’t suck all work with
something else.

Of course this is the perspective from the Danish public sector; it might be
different elsewhere, but I have friends who work in similar positions to mine
in banking, and they’re telling me the same thing.

~~~
TTPrograms
>The only people who do it well enough are PhDs

This is really untrue. Maybe that's part of your hiring issue.

~~~
eksemplar
It’s always cute when people say stuff like that. We’ve tried quite a few and
none have delivered any sort of useable quality.

~~~
pimmen
Swedish here, I'm convinced things are not that bad across the sound. I've
seen people with masters degrees deliver really good ML systems. In my team
the person with the highest education is also doing a PHD but the majority of
us have masters degrees. But, sure, I've also met other companies in the same
business who have multiple PHDs doing ML and they'd made a lot more progress
than we had in some areas, but we'd made more progress than them in others
(one company in Norway I was especially impressed with). ML development is
very different from regular software development and requires an affinity for
math and creativity, sure, and a research background helps out a lot, but I
would not go so far as to say that they're the only ones who are any good.

Have you tried hiring people who have documented experience doing ML
commercially? Because I could understand how you get bummed out a lot if you
hire recent grads to do ML.

------
xchip
well, to begin with, the heart rate signal on the logo is upside down :)

------
rossdavidh
Look, I think everybody needs to cut IBM some slack here. Integrating
technology with customer needs is hard, and it takes a while to get good at
it, to get a process for reconciling what product managers want with what
developers can actually make. Once IBM has been in this technology game for a
little while, I'm sure they'll get the hang of it.

~~~
DoofusOfDeath
People aren't faulting IBM for tackling hard systems-integration problems.

People are faulting IBM for over-promising, under-delivering, using misleading
advertising, and internally seeming to have foolish management practices.

~~~
detaro
given this line

> _Once IBM has been in this technology game for a little while, I'm sure
> they'll get the hang of it._

I'd assume the parent is being sarcastic

~~~
rossdavidh
...and perhaps a little too subtle. Or maybe just not funny enough to be
apparent.

~~~
DoofusOfDeath
Nah, it was funny, just a little too subtle for me at the time.

------
manolo7219
Haha... now IBM needs to find some nice wording to convince clients to
continue paying for smoke and mirrors, believing the promise that "one day it
will be so good". There is no measurable substance to any of these promises,
and whoever still believes in them is just hiding their personal failure to
see that in time. Some big clients will keep following the "all bets in"
method because they are already too exposed to IBM, and to admit that they
were wrong and naïve as children would also mean losing their nicely paid
jobs. I am happy that after everything unravels with IBM, their companies
will feel the full blow and their investors will see what it looks like when
you don't keep your management on a short leash.

------
dmix
Anyone know what "RA" stands for? The employees kept mentioning it on the
"Watching IBM" blog. I couldn't find a decent match via Google. For ex:

> IBM Watson Health has initiated a significant RA across multiple offices.

~~~
framebit
RA = Resource Action. Add it as the next entry on a long list of euphemisms
for layoffs.

~~~
rossdavidh
"layoff" being a euphemism for "mass firing of people", that has been around
so long it doesn't even seem like a euphemism any more. As you said, it's a
long list.

------
vasundhar
They can't just build a system and leave it to the customers to do all the
hard work of integrating various systems. Integration is the key and should be
seen as a product in itself.

If IBM keeps milking the cow with services, asking the customers to
integrate, it will die a miserable death, and unfortunately a premature one.

It should be pluggable, like a set of pick-and-choose building blocks of
interfaces that should only take domain expertise and custom specifications
as input from the client (whichever domain). Until then, AI will only become
an ambitious sunk cost.

------
zmmmmm
I think their strategy of collectively branding every single thing as Watson
is actually pretty risky. Branding is powerful but it cuts both ways ... it
will only take one high profile failure for that to ruin the brand across
every product that has slavishly adopted it. Even here you see it happening -
layoffs in one part that got labeled as "Watson" are immediately seen as
indicative of a much bigger problem with Watson. That would not have happened
if they were independently branded.

~~~
newen
Yeah, their branding of Watson is pretty bad since no one knows what Watson is
anymore. It used to be the thing that played Jeopardy. Ok fine, we can
generalize that to Watson being an NLP-driven information retrieval system.
But then I recently got pitched Watson as being an Nvidia DIGITS-style system
on the cloud. It was only then that I realized Watson is just a brand name for
anything remotely AI-related from IBM.

------
madrox
I'm not surprised. My company seriously considered Watson when we were looking
at AI platforms. Their sales team is pretentious, and when you scratch the
surface their offering isn't all that different from Google.

Undoubtedly IBM has done amazing work popularizing AI with Jeopardy and the
debate bot. However, I think that halo extended to its competitors, who,
frankly, offer more of the things developers really need.

------
derEitel
I was happily surprised the other day when the IBM debater was presented and
there was no mention of it being Watson anywhere on their website. They really
need to stop with this personification.

------
victor106
The problems with Watson do not mean that this is a problem with AI. If
history is any guide, IBM fails at first but eventually succeeds during most
technological revolutions (PC, ecommerce, etc.).

~~~
DoofusOfDeath
Can you restate that more precisely?

In particular, I'm not sure what sets comprise the numerator and denominator
of "most technological revolutions".

------
known
Sounds like
[https://en.wikipedia.org/wiki/Peter_principle](https://en.wikipedia.org/wiki/Peter_principle)

------
progr4mmatic
I think I saw that one of Watson’s engineers is now working on a crypto
project. SingularDTV? Heh, shows how much confidence was left in the program.

------
adamrezich
Every Watson ad I see on TV freaks me the hell out. Seeing primetime
television commercials for business AIs feels too uncomfortably cyberpunk for
me

------
paulsutter
IBM’s problem with AI is that they don’t have any AI

------
off2seethecloud
Pay no attention to the man behind the curtain!

------
killjoywashere
Next month will be a good time for a data-rich medical system to offer terms
to IBM.

------
netinmate
IBM has become Unisys, they just don't know it.

------
dingo_bat
I know very little about AI, and even less about management. But Phytel's
story as told in TFA is depressing af. I wonder why nobody is held responsible
for this catastrophic failure. Why isn't the head of Watson Health out on the
streets looking for another job?

------
aviv
What is wrong with our society? We need AI to tell us how to be healthy? Is
this real? Sometimes I feel like I live in the twilight zone.

PSA for anyone with a disease that man and corporate picked a name for - you
are feeding yourself towards that disease. Fix your diet (and 100% rid
yourself of animal products, processed sugar, wheat, and any "food" that comes
in a package with a label on it - hell stay away from anything that isn't a
fruit or a vegetable), go on a dry / water fast, and heal yourself. Stop
relying on doctors, AI, whatever the hell they produce supposedly for you.

------
realworldview
Yeeeeep

