Layoffs at Watson Health Reveal IBM’s Problem with AI (ieee.org)
570 points by amynordrum 11 months ago | 250 comments



I've never interacted directly with IBM, but I remember about 2 years ago, they had a talk at the Akamai Edge conference about how they dealt with bloated cookies. The company I work for has the same problem so I sat in to see what their solution was.

The problem they were having is that all the various IBM lines of business added so much garbage onto the client's cookie that eventually their pages would stop loading because they wouldn't be able to parse the cookie.

The 'solution' is to detect when that's about to happen, and redirect the client to a page that warns them that their cookie is too big (because IBM made it too big) and give them a button to delete their cookie. They then continue to start over and stuff more garbage into the fresh cookie.

That kind of problem solving pretty much made me lose faith in them successfully doing much of anything anymore.


had to deal with ibm once. we were developing an application for a customer and ibm was working on some new service for them. of course the customer insisted we use the new ibm service, even though it solved a problem we didn't have at all. the ibm product was a website to be used by end users, while our application had to pass our end users' data through. so, i asked the ibm guys (via the customer) to give me the docs for their json interface, as i don't necessarily want to parse HTML inside the application to exchange data. there was no json interface - html was all they had - but they got to work implementing one.

what i got was:

    {"elem" : "html", "children": [ ... {"elem" : "form", "attrs" : {"action": "/an_url", "method" : "post"}, children: [{"input" : {"attrs" : {"type" : "submit", "value" : "..."}}}]} ..., ]}
yes, their json interface was the DOM, serialized as json. but it gets better. to submit my data i had to fill in the actual values in their json struct at the appropriate nodes and send it back.

i'm a) pretty sure this "json interface" feature cost the customer more than i make in a year, and b) it probably broke the very instant the customer let a frontend/designer guy change the html code months later.


Ah such classic outsourced third-world contractor output. They are told what to do (by some PM who has no idea) and they do it. If they ever say "uh this doesn't make sense" they are cheaply replaced. I've seen some of this nonsense (though not IBM related):

    strncpy(dst, src, strlen(src));
    dst[strlen(dst)] = '\0';


> dst[strlen(dst)] = '\0';

That line alone is cancerous.


Not a programmer. What’s so bad about it?


It doesn’t make any sense.

The way a string is terminated in C is by a null character; ‘\0’ is the mnemonic for it. strlen() returns the length of a string. It does this by doing a linear scan down the array until it finds the null, and then returns that index as the length.

That line of code does nothing useful. It means, “scan through the array until you find a null, then write a null there.” Which is exactly what you had to start with.

So why would someone write this? Well, they were told to make sure that you always terminate your C strings with a null, otherwise you’ll run off the end of the array and corrupt memory. This is true, but this isn’t how you need to find the end of the array. They should have used the number from strlen(src), not strlen(dest). Even worse, strncpy() won’t even put a null terminator in dest in this case.

The line belies any sense that the programmer had any understanding of what the code was actually doing. It’s an amateur mistake.


using strlen(src) is also an amateur mistake, because the whole point of doing that (the strncpy family, ensuring dst ends with a null) is that dst might not be large enough to fit the string from src.

It has to be the size of dst, which can only be known at the site of allocation in C (except for a non-clean way of looking at heap metadata).


There’s no reason to believe from these two lines that the memory is insufficiently allocated. It’s just two lines. The point wasn’t to share a safe string copy routine, the point of the post was to illustrate a single mistake.


I'm not assuming that, but then you can't assume the memory is sufficient either. If the string is from network or user, then you can never assume the buffer to be of sufficient size (which they definitely do, because they aren't tracking how much of src was copied into allocated dst).


Each line, individually, is total nonsense.

It never makes sense to use strlen() for an argument to strncpy(). Only the size of the destination buffer makes sense for strncpy() (usually sizeof(dst)). For other cases you would probably use memcpy().
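
For what it's worth, a minimal sketch of that idiom (the struct and field name are made up purely for illustration):

    #include <string.h>

    struct record { char name[64]; };

    /* Bound the copy by the destination buffer, then force termination,
       since strncpy() won't write a terminator when src doesn't fit. */
    static void set_name(struct record *rec, const char *src)
    {
        strncpy(rec->name, src, sizeof(rec->name));
        rec->name[sizeof(rec->name) - 1] = '\0';
    }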


pace @zeusk:

I thought the point was that they didn't know if the dest is long enough to copy source, so they copy as much of it as they can.

The reason for the '\0' is in the case that src is shorter than dest.

A more explicit way may be: if strlen(dest) > strlen(src) then just use strlen(src), otherwise use strlen(dest).

But this might actually be more efficient.

Edit: And in this case Spolsky doesn't apply because it seems the developer doesn't get to allocate the dest length.


Well...

If dst is larger than src, the strcpy* family of functions will also copy over the null byte.

If dst is shorter than src, and strlen(src) is used as the arg for strncpy, then the function will overflow dst and your null byte will also land outside the dst buffer. In hardened environments you wouldn't trust a string from the network or a user to contain a null byte; you'd use strlcpy with the known size of dst and then append a null byte at the end of dst anyway to ensure it is null terminated.

Using strlen to compare buffer sizes is totally wrong and is the source of many bugs (hint: strlen doesn't actually return the size of anything, just the distance of the first null byte from the address you provide it).
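
A rough sketch of that pattern, assuming a platform that provides strlcpy (the BSDs natively, libbsd on Linux); the helper name and return convention are just illustrative:

    #include <string.h>   /* strlcpy; on glibc you'd pull it from libbsd's <bsd/string.h> */

    /* Copy an untrusted string into a fixed buffer; report truncation instead of overflowing. */
    static int copy_field(char *dst, size_t dstsize, const char *src)
    {
        if (dstsize == 0)
            return -1;
        size_t needed = strlcpy(dst, src, dstsize);  /* always NUL-terminates when dstsize > 0 */
        dst[dstsize - 1] = '\0';                     /* belt and braces, as described above */
        return needed >= dstsize ? -1 : 0;           /* -1: src didn't fit and was truncated */
    }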


Thanks. Great explanation.


See also: Shlemiel the painter [1]

[1] https://www.joelonsoftware.com/2001/12/11/back-to-basics/
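
(The short version, for anyone who hasn't read it: building a string by calling strcat in a loop re-scans everything written so far on every append, so a linear job turns quadratic. A rough sketch, assuming buf is large enough and skipping error handling:)

    #include <string.h>

    /* Shlemiel-style: every strcat() walks from buf[0] to find the end again -> O(n^2). */
    static void join_shlemiel(char *buf, const char **words, int n)
    {
        buf[0] = '\0';
        for (int i = 0; i < n; i++)
            strcat(buf, words[i]);
    }

    /* Keep a running end pointer instead and the same job is linear. */
    static void join_linear(char *buf, const char **words, int n)
    {
        char *end = buf;
        for (int i = 0; i < n; i++) {
            size_t len = strlen(words[i]);
            memcpy(end, words[i], len);
            end += len;
        }
        *end = '\0';
    }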


That line belongs in a hall of shame somewhere on the internet.


> by some PM who has no idea

This is the real problem. That PM of course wants the work done yesterday and has ego issues so even when the contractor does question it they just double down on what they originally said.


> outsourced third-world contractor output

Do we know that the developers were outsourced or which countries they were from?


Yes, in an inferred way at least. Look at the IBM jobs / careers page and locations. You can see where different types of openings are. Last year I looked at the AI jobs because I was curious where they were doing the work. Mumbai was first in terms of open positions for machine learning, if I recall correctly.


That tells us nothing about where the code discussed higher up this thread was written.


Sounds like total failure of management


I'd tell you to get in touch with a Senior Architect but that's no guarantee that you'll get a good result. From my experience...

IBM sold a big outsourcing package to a major European insurance company: they would move ALL their stuff to the IBM Cloud (VMWare with a lot of custom BPM workflows to handle provisioning, etc.). None of this matched their other three big cloud projects, Bluemix, Softlayer, or their OpenStack project (Zenith -- my prior project), so it was all new from the ground up.

They needed to handle mass imports of DNS data from this insurance company (~1 million hosts) and other customers and they brought me in as a senior developer to do the grunt work. Since we were using ISC bind, I assumed we could use features like exporting zones with signed requests and take advantage of all the infrastructure work put into that server to scale up. Our Senior Architect poo-pooed that plan, so I started modeling the DNS records as structured data for consistent and complete import and export. Our senior architect also nixed that idea and said that we're gonna load the DNS structures with REST requests: the support people were gonna have a custom web application to load simple DNS export files (not zonefiles, mind you) into this system, and to make ad-hoc additions and changes. (When we had a new developer join, he had the same questions I had about why we weren't using all the DNS bind features since they were done and tested and perfect for our needs . . . it took a month to get him to stop asking.)

I got about 90% of the way through a solution with a Python Flask REST server at the center, high code coverage through unit tests (because it's Python and I was still finding edge cases that needed fixing to the very end), and Rabbit MQ for enqueuing the changes we've received before writing them out to DNS zone files. It works, but we certainly didn't need to blow April to August building out the architect's vision.

The reward I got was being pulled off the project when it was nearly code complete and would shortly go into production. I got a team based in India to manage in the mornings, while my afternoons and evenings were returned to developing Javascript for the BPM infrastructure jobs.

(When told I was going to leave for another company, IBM's counter-offer was to move me from my remote work-at-home location in Ohio, near family, to the Cloud Managed Services office in Rochester, MN, and then consider a raise.)


> yes, their json interface was the DOM, serialized as json.

This is pure thedailywtf material.


Did the change request come complete with specific requirements on what the interface should include? Because it sounds like they produced exactly what was asked for: a JSON interface. If the requirements were poorly specified, you can’t lay all the blame on them.


> Did the change request come complete with specific requirements

I get that you do need specs but c'mon, if you ask a plumber to fit a pipe from A to B and he uses one made out of cardboard would you think it was reasonable for him to say "well you didn't specify it had to carry water, pay me 3x as much to put it right". And then you find out that this is the first time he's even seen a pipe, but his business card says "senior plumbing expert".

You expect a basic level of competence and familiarity with the problem domain and good faith from any professional whose services you hire.

Also consider that a program is a spec, and if you are going to explicitly specify every little detail then you might as well just write the program yourself; there is literally no sane reason to put an outsourcer in the middle.


Unfortunately, that's not how the game works with the large services firms. The salespeople and account/project managers run the show. Their business isn't selling expertise / solutions, it's selling the maximum amount of billable hours at the highest possible margin via the illusion of expertise / solutions. (I say illusion because if you read the fine print of one of their typical contracts, there's very little actually promised other than providing warm body of type X for hourly rate of Y)


Exactly, this is everything that I hate about requirements management and requirements-driven-development in a nutshell. Way too often it just means: "we're actually too lazy / overworked / uninterested / have targets that don't align with the long-term best outcome, so we'll just force you to write down requirements which we know can't and won't ever be complete." It's the best way to cause internal politics to fire up and projects to belly up.


Yes you can. There's a basic level of competence in dealing with context around a request. Any decent developer or PM would know what a JSON API is supposed to be.


The act of them doing that alone shows that they either don't get it or that complexity within the organization has brought productivity to a halt. I'd assume it's the second one.


The interface should include engineering.

A responsible developer is expected to ask insistently until the requirements make sense; whoever delivered the JSON DOM is stealing their pay, even if they did it knowingly as a joke or provocation.


You are talking to humans, not programming a machine. You don't have to instruct them on each minimal, detailed step to specify something; you expect agency, even more so if you are paying big bucks.


Turns out the interface was built by Watson and technically met the spec.


It is because IBM employs ultra-cheap devs in India who barely know what they are doing. IBM made itself into a low-cost, low-quality Indian outsourcing firm, but they charge high prices. Worst of all worlds.


And that's why IBM still makes billions of dollars, because we have fools like your client.


Here's how large the remaining inertia of IBM is: they're operating at a level where they can still generate $9-$10 billion per year in net income. That's about 2.5 times more than the most profitable tech company in all of Europe (SAP).


MY GOD that is painful to take in. I feel for you :( . I've shed a tear for a fellow programmer in the thick of the "shit".


IBM is one of the go-to partners for outsourcing tasks of the Australian government, and it's usually a dumpster fire.

- 2016. Australia decided to run a census, population wide (~22 million people), all online. If you didn't enter your stuff you risked a fine. You had one day to get a good 'snapshot'. The service, built by IBM, went down immediately.

'Census outage could have been prevented by turning router on and off again: IBM' http://www.abc.net.au/news/2016-10-25/turning-router-off-and...

'IBM to pay more than $30m in compensation for census fail, Prime Minister Malcolm Turnbull suggests' http://www.abc.net.au/news/2016-11-25/ibm-to-pay-over-$30m-i... (Have they done this? I don't think so)

They said it was an 'overseas DDoS' but there is no proof of that; personally, I think it was Australians accessing the page from overseas.

Interestingly, this is not on IBM's Wikipedia page.

- In Queensland, IBM borked the rollout of a new health payroll system so extremely that the state premier banned the state from outsourcing to IBM (and that was before the census catastrophe!!)

>The commission, headed by former supreme court judge Richard Chesterman, tied a number of IBM employees working on the contract bid to serious ethical transgressions, including using leaked information about competitors to gain advantage and attempting to access tender responses by opposing bidders Logica and Accenture.

https://www.itnews.com.au/news/queenslands-ibm-ban-lives-on-...

In the end, the state had to carry the costs (1.2 billion) https://www.itnews.com.au/news/qld-health-ibm-payroll-court-...

It was supposed to cost less than 10 million.

This is also not on IBM's Wikipedia page.


It's the same story everywhere. In the Czech Republic, IBM has implemented quite a lot of IT projects for the government. Just one example: there's software for tax management of the whole country that is still running on mainframes from 1992. It was written in Informix 4GL, and the only GUI is an MS-DOS-based terminal. All officials need to work with that system.

Of course it was fine 15 years ago, but since then we've had quite a lot of changes in the tax system, many new conditions have been added, and as a result we have unstable, deprecated software whose maintainability is nowadays near ZERO.

There are currently no people who know how it works or who know the Informix 4GL language in 2018... After a few years it will probably be replaced, but the cost of the new project will be astronomical (...of course).

I am not blaming IBM alone; quite often government officials are incompetent when it comes to software project planning.


What? $1.2 billion for a failed project, for which Queensland's government gave IBM a waiver in 2010?

This is blatantly corrupt, why the fuck would a government give a company a waiver from their contractual responsibilities on a failed project? That's utterly insane.


If the project fails then the people who commissioned the project would be at fault, so these things are rarely truly considered failures.


"We learned a lot along the way."


We have run into the same issue before. Many third party services like Optimizely for example just stuff all their tracking information into a cookie, serialized as JSON, which you as the site owner don't really have much control over.

The problem is not that they wouldn't be able to parse the cookie; it's that the Akamai edge node web server (Nginx or Apache, I don't remember) rejects any request with a header that's above a certain size. Your origin server doesn't even get to know that there ever was such a request.

See https://maxchadwick.xyz/blog/http-request-header-size-limits for example.


While you raise a valid issue, you don't know that IBM didn't also have trouble parsing large cookies. And even if they didn't, your information doesn't exactly excuse IBM for using over 8 kilobytes of data to send cookies back and forth.


I laughed too hard at this. Reminds me of that famous Jim Reekes story:

https://www.youtube.com/watch?v=C5d151lqJsA#t=2m4s

Do you happen to remember what the copy was on that page?

I'm sure it had to be something better than "Hey we made your cookie too big. Can we delete it? Ok thanks sorry about that."

I bet they went with something subtle and vague, make the problem a little unclear and complicated-sounding, maybe take a little credit for Websphere doing a good job in detecting the problem.

Like how I word all my work emails.


I wish I had taken a picture. I don't remember exactly but it was something along the lines of:

"We are unable to process your request due to a problem with your cookie, please click below to delete the cookie"

And then there was just a normal button input.

Their logic around having the user click the button instead of just automatically deleting the cookie and moving on (something that still shouldn't be necessary if you are using cookies correctly) is that they thought users would be upset at IBM automatically deleting their 'data' without asking. That data was just a bunch of unreadable values for IBM's services.


I find it hilarious that they made this into a talk! There must be something more to it?


How did your company solve it?


At the large company I worked at, it was solved by frequent code audits of any team given a subdomain of the company's company.com domain. We were explicitly told to ensure that anything not related to interoperability between company properties was to be set on the ourdomain.company.com subdomain. The team in charge of it had heuristics they used to monitor for unapproved infringements on that policy.

If I decided I wanted to visit one of our offshore dev teams, I could book a first-class ticket anytime I wanted to with no approval necessary. If I wanted $2m for some dev effort, I could have the necessary approvals in a day or so. But if I wanted to write 50 bytes to a .company.com cookie, that was weeks of meetings where I needed to justify my team's existence, the value of the feature we were building and prove there was no other way to implement it.


To be fair, it seems like your organization's solution was to make it very difficult to do so, which is kind of a non-solution (aside from truncating chars, which even a small ad tech startup that I was at did).

Admittedly it is a better practice than continuing to stuff your cookie and directing users to a strange page once in a while.


How is that a non-solution? It should be very hard to introduce cookie bloat. Preferences can and should generally be stored server-side. I can't stand it when I lose settings on an account simply because my cookies are on another machine.


Adding cookie data should be a big deal, because if everything in the cookie is important, valid reasons for needing more data (or less data) imply major architectural changes.

For example, after a merger in which two company websites should be integrated you might start with one simple cookie, transition quickly to two independent cookies on two domains, then use a cookie with two keys when the two sites have separate backends but the same domain, then eventually a cookie with a single key again after extensive replacements and reimplementations.


Seems fairly "conceptually straightforward" (if not so easy): compress data (internal dependencies), use predictions (external dependencies), save states server-side, simply reduce garbage.


Right on the money. We compress the cookie and store the state server side.


nice try, IBM.


I wonder how long it will take for people to realize politics and bureaucracy do not reward merit.


Those two departments couldn't possibly be related. It seems unfair to judge the entire company based on that.


I dunno ... seems to me an indication of a larger systemic problem: "we couldn't even provide a company-wide engineering solution for this simple problem, so here's how we used a hack..."


That's the problem you get with silo'd teams.

If they had a core technology team they could handle things like this more gracefully. Or maybe not, this is IBM after all.


Exactly, it's IBM. As a company, what they do is so diversified in both level of complexity and technology type that a "core technology team" is just nonsensical.

I mean, in the same company you have on one end of the scale lots of teams doing just outsourced IT support ("my corporate word install is broken, pls fix"). And on the other end for instance the IBM Zürich research lab, who have received multiple Nobel prizes in physics and who developed things like Token Ring and trellis-coded modulation.


This is probably just me but I've yet to see a company pull off a core technology team gracefully.


Sounds like similar stories I’ve heard about IBM, just not related to cookies. Sell first and hack it together later, which is how it works at a lot of places. Except IBM doesn’t seem to be able to hack it together.


So we have some more info out there:

- Their PCF solution is deprecated

- They were still using a deprecated container management solution instead of Kubernetes until recently

- If you use Java they force you to use Liberty buildpacks

- They push XP and TDD practices from the Garage, yet many of their consultants don't practice what they preach.


IBM Cloud Foundry (formerly Bluemix) isn't quite deprecated but the container manager inside CF was stuck on the deprecated DEAs rather than Diego long after Pivotal and others moved on. They moved to Diego last year I think but very slowly.

And then they rebranded IBM Spectrum Conductor for Containers (look it up!), a Kubernetes snowflake they came up with, into IBM Cloud Private, because Kubernetes will solve all problems; the issue MUST have been leading with a PaaS and not a CaaS, right?

Other observations are spot on

Disclaimer, I compete with IBM and have to deal with their account team shenanigans, though parts of IBM are better than others and even can be good partners.


Don't forget their big "blockchain" push (via TV commercials).


If they don't want the company suffering reputation damage by failings in other parts of the company then they should split or at least run under different brands.

If they want to accept the positives of the IBM brand then they need to accept all the negatives that come with it. Unfortunately for them that brand is now toxic.


"Offering managers didn’t have technical backgrounds and sometimes came up with ideas for new products that were simply impossible."

Sounds like they drank their own kool-aid, e.g., "Products That Enhance and Amplify Human Expertise," rather than understanding the actual limitations and possibilities of ML. And it seems to me that they're still doing it with this nonsense about a human-level "AI" debating stack.

The oversell seems a real shame in light of how much good can be done with EMR and machine learning / NLP.


That's surely part of the problem, but the catalyst is the marketing strategy that is used to brainwash the employees. In essence: sell the experience, not the product.

This works well for IBM generally (the products are shit) but especially well for Watson because it's extremely easy to sell AI without getting bogged down in details. You want to identify brain tumors? We'll just teach Watson to do it.

Whilst IBM research might be able to pull it off, it'll never get to market because there is nobody capable of making good products at IBM anymore.


> Whilst IBM research might be able to pull it off, it'll never get to market because there is nobody capable of making good products at IBM anymore.

As an ex-IBMer this is so true and so frustrating at the same time. Engineers are thrashed about on a nearly sprintly basis by PM's with short attention spans and no understanding of how disruptive their continuously changing requirements are.

It doesn't help that IBM consistently puts the cart before the horse is even born and pivots multiple teams all at the same time such that nothing you build upon is stable or consistent. Working there was maddening.


I think this is just what large multinationals do these days.


Sounds like the way non-tech companies operate. Pretty damning that IBM can't understand why leadership of a tech company should have a tech background.


> This works well for IBM generally (the products are shit) but especially well for Watson because it's extremely easy to sell AI without getting bogged down in details.

The cynic in me says that every use of the term AI in any capacity is to sell experience and not functionality. When was the last time you used a product billed as 'AI' and thought 'wow, this is a huge game changer'? Siri is cool, but it's ultimately not super useful. Google translate is incredible, but it can only do what it can do because of the absolutely mind-boggling amount of training data that google can access. Most disciplines have the problem of not enough data, despite what 'big-data' folks say. In contrast, humans can extrapolate and make reliable predictions about the future based on really small sample sizes. We can pick up a new skill or recognize a new pattern with a high degree of accuracy really effing fast compared to a computer. This gives humans an enormous advantage. If IBM and anyone else in this space were really focused on delivering excellent real-world results, step 0 is building out world-class data integration and search tools (which we still actually suck at, weirdly.)


I use & depend upon plenty of products that are built upon AI - GMail spam filtering & categorized inbox, Google image search, YouTube & Netflix recommendations, cheque OCR at my ATM, predictive keyboards on my phone, Amazon's "people also buy with this product" feature, Google translate, computer opponents in games that I play, and all of the signals that feed into Google Search.

The irony is that not one of these bills itself as AI. It's just "a product that works", and the company that produces it is happy to keep the details secret and let users enjoy the product. So you may be right that the term "AI" itself is pure salesmanship. When it starts to work it ceases to be AI.

https://en.wikipedia.org/wiki/AI_effect

Also - humans only look like we're fast at picking up new domains because we apply a helluva lot of transfer learning, and most "new" domains aren't actually that different from our previous experiences. Drop a human in an environment where their sensory input is truly novel - say, a sensory deprivation tank where all visual & auditory stimulation is random noise - and they will literally go insane. I've got a 5-month-old and a project where I'm attempting to use AI to parse webpages, and I will bet you that I can teach my computer to read the web before I can teach my kid to do so.


>The irony is that not one of these bills itself as AI. It's just "a product that works"

I think you are on to something. Put differently: if you need to use the term "AI" to enhance the marketability of the product, it is probably because the product sucks.


Companies also do that when trolling for investors.


And employees. Google's embrace of the term "AI" isn't because they need help developing or selling AI-powered products, it's to encourage all the kids to go into computer science and all the existing developers to learn TensorFlow. They can then pick off the best of them as potential employees without having to train them up themselves.


None of the things you mentioned are even close to AI. They’re applied statistics, and they mostly use techniques we’ve known about for decades but have only now found a use case because computing and storage is cheap enough to make them viable.


The recommendation, translation, & image classification algorithms are all done with deep-learning; that's considered AI now.

There was a time, not all that long ago, when SVMs, Bayesian networks, and perceptrons were considered AI. That's behind the spam filters, predictive keyboards, and most of the search signals.

There was a time, a bit longer ago, when beam search and A* were considered AI. That's behind the game opponents.

As the linked Wikipedia article says, "AI is whatever we don't know how to do yet." There will be a time (rapidly approaching) where deep learning and robotics are common knowledge among skilled software engineers, and we won't consider them AI either. We'll find something else to call AI then, maybe consciousness or creativity or something.


This is my point: the term AI has always been BS. It was BS when beam search was AI, it was BS when expert systems were AI, and it is equally as BS when applied to neural networks. It comes to the same thing: the 'AI' tools we use are increasingly good function approximators. That's it. It's still reaching the moon by building successively taller ladders.


> This is my point: the term AI has always been BS

I don't see how this follows from this:

> the 'AI' tools we use are increasingly good function approximators

Nothing in the definition of AI says that AI has to work the same way the human brain does... and as far as that goes, we're probably not 100% sure that, in the end, the brain is anything more than a really good function approximator and some applied statistics.

I would say the canonical definition of AI, to the extent that there is one, is roughly something like "making computers do things that previously only humans could do". If people think "AI is bullshit" I'd say it's because they're applying their own definition to the term, where their definition imposes much more stringent requirements.


I think Judea Pearl would agree with you in part. From an interview in https://www.theatlantic.com/technology/archive/2018/05/machi... :

> As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

And

> I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem.


> There was a time, a bit longer ago, when beam search and A* were considered AI.

Yeah, maybe the marketing people said that..


Well in game design you still call them AI. Of course this use of AI does not impress anyone per se, it's just a name and not a selling point.


This is an interesting comment - where would you draw the line between AI and applied statistics? A lot of AI which happens to be ML (not saying there is non-ML AI, just that a significant chunk of AI being practiced today is ML) also happens to be applied statistics. Or have statistical interpretations.

Also, the fact that something has been around for decades does not make it not AI. For example, the cheque OCR mentioned probably runs off (or can feasibly run off) a neural network. I think the parent's comment holds well - not sure about the last line though...


The line is clear: everything today branded "AI" is just applied statistics. AI is a buzzword. I don't know what the definition of intelligence is, but I have a feeling it doesn't rest anywhere near concepts like function approximation, and that's all even the most sophisticated "AIs" at Google or Facebook or Apple boil down to.


What was not clear from your earlier comment, and is now, is that when you say AI you don't mean AI as is practiced by most of academia and the industry but the vision of Artificial General Intelligence (AGI). If so, yes, that's a good point to make. However, it is debatable whether the path of statistical learning won't lead to AGI, or isn't how our brains function, or whether the truth partly comprises statistical learning and partly something else. The Norvig-Chomsky debate is an example of the arguments on both sides.


I didn't make an earlier comment. You're replying to my one and only comment.

> when you say AI you don't mean AI as is practiced by most of academia and the industry but the vision of Artificial General Intelligence (AGI).

What I actually mean is people practicing what they call "AI" in academia and the industry have co-opted the name to make what they do sound more interesting. First it was called "statistics". Then it was called "pattern matching". Then it was called "machine learning". Now it's called "AI". But it hasn't changed meaningfully through any iteration of these labels.


Oops, sorry!

To your point, I agree. The hype around the area has evolved much faster than the area itself.


If you can define a problem rigorously, you've essentially defined a function. So "function approximation" is basically "general problem solving approximation".


I don't really think that characterization is fair. For example, with GANs there is no data set of correct input/output pairs for the function that is learned.


> techniques we’ve known about for decades

FWIW, "AI" as a field has been around since the 1950's. So calling something "AI" in no way implies that the techniques are especially new.


> None of the things you mentioned are even close to AI.

then what is 'close to AI'?


Something that actually learns on its own and is not completely stumped when it encounters something new. When it recognizes failure it should go and start learning by itself, i.e. try to get more data, analyze it, and do its own trial and error, so that it actually grows in capabilities (on its own).


This excludes most human intelligence.


You speak for yourself?


I would argue that all of your examples have failed to be anything even remotely resembling AI, just data crunching to fit most use cases. I don't use GMail but I do regularly use Google image search, Translate, YouTube, Netflix and predictive typing via SwiftKey. And IMHO they all suck horribly (SwiftKey still sucks pretty bad after 8 years of learning from me). Google Translate is getting better and I have recently started using first-pass Google Translate before correcting the mistakes... instead of everything by hand. YT/Netflix Recommendations are always bullshit. I wish there was a way to say "never show me anything like this ever again" because I often feel like 90% of the recommendations make absolutely no sense. Sometimes I think that someone else must be logged into my account clicking on things just to mess with my recommendations. I usually spend a minimum of 30 minutes searching, often giving up out of frustration (and I always have an IMDB tab open to check details because all of the IMDB rating plugins for Firefox stop/ped working). Maybe I'm an edge case living outside the U.S.? Are their algorithms only tuned for English-speaking countries?

The most creative, intelligent and least frustrating "AI" I've ever encountered was in some games, such as Dota2 or many years ago F.E.A.R. They were frustrating but only due to unpredictability, even after hundreds of hours of playtime. YouTube and NetFlix AI after hundreds/thousands of hours invested are also very unpredictable and frustrating, but that's the opposite experience I am looking for in those situations.


Completely have to agree. YouTube has so much content, far more than Spotify, Vimeo and everybody else in the space, which is why I use it. But the recommendations are an offense. YT is only good at 'recommending' stuff I already watched or listened to. What's the point?

Translate can be useful at times...like once a year when I want to comprehend a Japanese website; usually I close the tab after 2 minutes.

I used GMail for many years and still do to some degree but I'm moving to a different mail provider. GMail's spam filter is great!

Not sure; in the last 2 years it has become acceptable to make no distinction between ML and AI. ML appears smart because of bazillions of training samples, and I feel very impressed when I hear about that. But yeah, at the end of the day it doesn't exactly have the biggest impact on me... ;)


I have always referred to computer opponents as "AI".

YouTube recommendations aren't great; they have a short memory and my feed is rarely diverse: it just shows a bunch of whatever I just watched.


> humans can extrapolate and make reliable predictions about the future based on really small sample sizes

You severely underestimate the bandwidth of your eyes and ears and other senses, and the volume of your brain's memory (despite its uber-lossy compression). That's terabytes a day probably; if that's not big data then I dunno what is. Yeah, 99% of it is thrown away at passing through the first few hundreds of layers of your neural networks, but they still know what to throw away...

To get a digital computer on "equal" terms with the zillions of hacky optimizations your semi-analog brain uses you need a shitton of raw power and data volume ("if you don't know what to throw away of the input data, you need to just sift through all/more of it") to compensate for the fact that you don't have N million years of evolution to devise similar hacky optimizations.

Also, humans work as a "network of agents", that's also recurrent (aka "culture"). Current sub-human-level AI agents are far from any sort of reliable interop.

My guess is that we'll get human level performance levels at AGI tasks when we learn to build swarms of AI-agents that cooperate well and "model each other", and few people are working on this... Heck, when it happens it will probably be an "accident" of some IoT optimizations thing, like, "oops, the worldwide network of XYZ industrial monitoring agents just reached sentience and human level intelligence bc it was the only way it could solve the energy-efficiency requirement goals it was tasked to optimize for" :)


> You severely underestimate the bandwidth of your eyes and ears and other senses

This is so common there’s a term for it https://en.m.wikipedia.org/wiki/Moravec%27s_paradox


> You severely underestimate the bandwidth of your eyes and ears and other senses

Sample size and record size are two different things.


In fact, I would say apart from maybe self-driving cars, almost all of the biggest gains from machine learning are in unsexy, hidden backend problems, like automatically rectifying disparate data, optimizing resource utilization, flagging difficult-to-articulate events or triggers in a stream of data too large for human evaluation, machine translation, and other “unsexy” things.

Product interfaces usually offer simple features to users and the value proposition is easy to see. Effective use of machine learning is well hidden upstream in a bunch of unsexy preprocessing or heavy lifting to get to the interface. Not something you’d ever need to emphasize in marketing, except maybe at tech meetups or in recruiting materials, but not to the end consumer.

It just makes pop references to AI-powered products more egregious.


Many people mention an even worse search experience with Google and AI.


> We can pick up a new skill or recognize a new pattern with a high degree of accuracy really effing fast compared to a computer.

Evolution by natural selection is the OG genetic algorithm, and it's been "running" on billions of organisms in parallel for hundreds of millions of years. The intuition that we take for granted such as the abstract concept of a shape is all hard-coded in our brains from trial and error.


> Siri is cool

No. Siri is shit.


is anything in the category of siri-like things not shit?


Over the past 10 years I’ve been surprised when anyone smart would join IBM. I understand why people hung on, but why join that dying ship? Now I can’t think of any still there.


Who says IBM is dying? They haven’t been at the bleeding edge of research for a long time, but they still have a solid consulting and integration business. They also offer decent salaries and good opportunities for sales-oriented technical people. It wouldn’t be my first choice of employer, but I can see how some people would enjoy working there.


Dying may be extreme. Perhaps it’s safer to say that they are in the mode of returning capital to investors rather than growing.

I’m at a large customer of theirs and they are bleeding the customer for every dollar as they get phased out. Very low caliber of services professionals too.


Agree. Consultancies rarely sell products or solutions. Instead they sell project management by making it sound like you are a safer bet than a smaller product studio that actually does make products work. It's a real shame, but I mostly blame the zero-mistake KPI culture, primarily fueled by how managers on the client side are promoted.


There's a pretty good podcast interview with Eugene Dubossarsky that has relevant discussion about issues with management and data science in general. https://www.datafuturology.com/podcast/1

Here are a few of my notes (my words not the interviewee's):

- in order to use data science, you have to have creative people thinking about data on the front end

- they don't have to be data scientists, but they need to be creative and want data to support decisions and iteration via feedback loops

- that creativity and desire will lead to "doing good data science"

- management on the receiving end of data science output must be intelligent in terms of synthesizing many inputs and have a strong desire to puzzle through the implications. If management is asking the data science to actually make the decisions - the situation is broken

- data science must be done with provisions for decision support and feedback loops; this is the output that is helping drive the business.

- Lack of desire for decision support and feedback loops leads to "fancy pets" and management using data science as a means to brag about what they are doing; but the data science might not be doing anything to drive the business meaningfully.

- data science that attempts to actually make decisions vs providing decision support is likely in the category of "commodity data science". Corollary : non-commodity data science is the kind that supports decisions in executing higher-level business strategy. Strategy at that level has rather unique attributes and is embedded in unique circumstances for a particular business. This requires a good data scientist to help tackle.

(hope this is useful)

(edit typos,grammar)


BTW - in listening to that podcast I found a lot of parallels with database design.

whenever I'm asked to design a database for an early-stage system (I work in early stage tech ventures), I ask the following:

- what are the questions that this database should answer for you? How are those questions supporting your business goals 3,6,12 months out? (I'm trying to get to the business requirements here)

- who will be asking those questions (I'm trying to put together some user personas in my head)

- how frequently will they be asking these questions? corollary: how often will historical data be needed? (I'm thinking hot vs cold and complexity of retrieval, minimally required performance)

- how much data do we anticipate is needed to answer the questions (this is really tricky in new ventures - often the answer is more data than what will actually occur in practice in the first year)?

- finally, what systems & tools are people using to ask the questions and be notified of events? (I'm thinking about interfacing, apis)

it's all an attempt to stay very focused on the questions and business drivers and the people who use the answers.


This is a great summary. I think his guideposts are helpful for most decision support initiatives, whether you're using data to try and support decisions, or reaching out to humans.

We run prediction markets inside companies and find that if we don't establish a good lifecycle of asking forecasting questions, having people respond with probabilities, and then decision makers REACTING to those probabilities in some way (whether they agree with them or not, just acknowledging their existence), the likelihood of the project failing is far higher.


It's a double whammy for the Watson stuff, because not only did IBM lose track of AI, they also had that whole cloud thing whoosh by them. So not only do you have marketing telling outrageous lies about the abilities of their AI systems, they're also exaggerating the cloud angle, where Watson is a cornerstone.


Possibly the most outrageous lie was calling it AI in the first place.

Watson is a $@#% amazing information retrieval system. Information retrieval is only a small part of what people think of when they think, "AI".


Well, what people think of when they hear "AI" is generally pretty different from what academics mean when they say "AI" anyways.


> amazing information retrieval system

Which I imagine still has lots of value on its own for large, complicated data sets.


Yep. But if your marketing and product management are all focused on selling AI instead of IR, then they're not really working toward finding a way to deliver the IR value they have to the people who need it.

I'd actually like to give Watson a spin for an IR problem I'm looking at, but, thanks to their hype machine being set to overdrive, they've got the thing priced in the "The Bold Leaders of the Future Creating a Bright New Tomorrow Full of People in Glasses Staring Wistfully Toward the Right Edge of the Photograph, While Blue Curvy Streaks Wave Through the Background and Random Zeroes and Ones Float Around Their Heads" tier. Sadly, I've only got a "businesses solving business problems" sized budget.


What is your use case and what are the alternatives you're considering? I'm trying to understand what to imagine here.


IBM BlueMix didn't have the Watson IR tech you were looking for?


Watson is a thing that is good at Jeopardy. Therefore it is not AI. AI is only things that a computer can not do. Nothing that exists is ever AI


Jeopardy is an information retrieval problem in a game show format, with the minor twist that the query is phrased as a declarative sentence and the response is phrased as an interrogative one.

AI is not simply things that a computer can't do yet. But I think most of people who aren't currently trying to sell a piece of software would expect AI to include some things that you don't need to do to play Jeopardy. I'd want to see general-purpose pattern recognition, for example.


IBM actually has a cloud, but they make it so difficult to buy that it’s not really worth it.


Other reports seemed to indicate that Watson (and maybe all AI) requires a lot of very careful, slow, and somewhat arduous data entry and testing to get good results.

I can imagine someone who doesn't know at IBM selling a product:

"Hey we will solve all these problems like magic!"

Then IBM comes back:

"Hey do you have all this data in a specific format and a ton of time to enter and test it and maybe we'll get back to you???"

That's a big loss of trust there with the customer if you come back with that.

It seems like these are products where a lot of caveats need to be made clear to customers and a really careful technical partnership formed with them to succeed long term. You have to bring the customer along for the ride and exploration, and keep them excited for a long time, it sounds like, to make it work.


Are the articles at ieee.org usually technically competent?

To see one of their articles with the common press confusion of mixing different definitions and interpretations of AI (correct or incorrect) doesn’t help build confidence.

For example, what was used to play Jeopardy vs. the approaches being taken to improve cancer treatment are just so different that it seems almost disingenuous to throw it all haphazardly into one conceptual bucket.

The article does IBM a disservice in some ways. They come off looking bad overall, but some of the failed projects mentioned, like MD Anderson, failed for reasons beyond any control they had, other than recognizing some obvious red flags earlier and detaching their name and participation from it.

On the other hand I believe the article lets them off the hook too easily when they bring out the old trope they’ve been using for years, which is encapsulated here:

“IBM Watson has great AI” [one engineer said] “It’s like having great shoes, but not knowing how to walk—they have to figure out how to use it.”

It doesn’t make sense to say "xyz is great, we just have to figure out how to use it" as a standalone argument. It’s nonsensical unless you mention something about the seemingly implied untapped potential, specific innovations, novel approach, or whatever makes it great.

I’m not familiar with all their IP, so maybe there are some great things; you just don’t get to claim that and get off the hook so many times in the press without providing at least some detail or reference point.


The article only mentions the engineers being laid off, not the managers who botched it, nor the executives who did the incompetent hiring. The reward structure of such a corporate environment would seem counterproductive to success.


>"Offering managers didn’t have technical backgrounds and sometimes came up with ideas for new products that were simply impossible."

how many of those managers will now be able to get an even higher-paying job because they have "manager of Watson AI project" on their resume?


IMO, this is a bigger issue in the industry. People are hyping up ML/AI to the point where the actual application is either impossible or extremely difficult. Just look at how many people are so fearful of AI/ML taking away jobs and replacing humans. Anyone in the field knows that AI/ML can take away jobs, but it is mostly the low-end jobs, and everything happens gradually rather than immediately. AI/ML is being used more as a tool to enhance human productivity than as a direct replacement for entire occupations, unless the job is very basic to begin with.


IMO AI/ML scientists are drastically underestimating the complexity of biology.

Once scientists in biology and healthcare get on board like they are in linguistics and computer vision I'd expect things to pick up.


agree


The following is a text snippet from the article. I do not know which part of it is AI; it is pure "data analysis", writing a few SQL queries. This is the biggest problem with IBM: they bill these kinds of things as AI.

> A clinic could use the system to search its patient records and find, for example, all the men over age 45 who were overdue for a colonoscopy, and then use an autocall to remind them to schedule the dreaded appointment.


Being directly involved in the execution of the example you gave, I believe ML would be unnecessary and error-prone.

Even if successful, a system which could "interpret" a health record (such as a freetext note) using anything other than properly codified data would set the health industry back a decade. Moving doctors away from freetexting their notes is the only way to advance the industry.


This is something I've always wondered about- how would you do it? How do you break the cycle of freetext EHR notes?

Doing so could have incredible utility for sharing data across various clinics/hospitals/pharmacies/etc.


It will be a generational shift. You won't get current doctors (over 40) to change their ways, ever.

My previous doctor (who was probably mid-50s) didn't use email or any kind of secure electronic messaging system. Everything had to be faxed to him.

My new doctor, who is younger, uses all kinds of digital tools, including a voice recorder with a pre-trained speech-to-text engine that understands medical terminology and codifies the transcription based on keywords.

So it's not entirely getting away from freetext but at least it's extracting some structured data from it automatically.


The point is it's NOT SQL. It's NLP and whatnot to extract value from relatively natural non-fully schematized data.


DOB and date of last checkup are highly schematizable. This specific example is very well-suited to a SQL solution dumping into an autocaller queue.

Maybe this was a terrible example, and the author didn't grasp a good example of legitimately non-schematic data points?


It has been ever thus with IBM adverts - this was me moaning online from the early 2000s:

http://www.mikadosoftware.com/articles/ibmadverts


That's great! Thanks.


That is my biggest gripe with this as well. We certainly don't want another AI winter, especially not in health, where I'm hoping (perhaps too optimistically) that it will allow better and cheaper care.


> that it will allow better and cheaper care.

That will happen, but you have to think in 20-year plans, not 5-year plans.


What's EMR? (definitely not Elastic Map Reduce)


Electronic Medical Record(s)


Isn't this just the same problem IBM has had for...well...most of my adult life?

I had a contract gig at IBM Advanced Technology in Boca for about 18 months in early 2001-2002. Talk about missing the boat...

I was brought in to prop up a soon-to-be-failed project for the Japanese government...basically a "Napster for Tokyo" that would allow paid-for-play C2C song sharing for customers of "the Big 5" record companies.

I asked simple questions that no one could answer...why would people pay for content when it was so easily available via other means? you are using DRM how??? really? you need a special player to play the music?

Why would anyone do that?

I stayed for a few simple reasons...the fat consultant check I cashed every Friday. Exposure to some outstanding engineers and coders where I got to learn from true talent. The great strip club on A1A next to my rental in Lauderdale-by-the-Sea.

But reading over this article reminds me that IBM is just too big to get out of its own way, and has been for the longest time.

[edits]


Getting a nice check while doing nothing actually productive is soul crushing. Good for the short term but it's a Chinese water torture in the long run.

Two years ago at their vegas conference they had a coffee shop that used AI to recommend coffee types. I thought "boy, they don't understand this technology".


Many years ago, when AI was expert systems and "neural networks" were fringe, the main demo for one of the public expert system leaders was the Wine Advisor. You'd tell it what you were going to eat and it would recommend a wine.


And behind the scenes it was probably the equivalent of flipping a coin between red and white.


It was totally rule-based. More complex systems had a little more probabilistic stuff via Bayes and "certainty factors", but not this one.

I worked on another one for this company called Vibration Advisor which diagnosed odd noises in GM cars.


Being rules based isn't necessarily a bad thing or disingenuous. I develop healthcare AI products (ML/DL researcher) and we actually aim to be able to translate our models into a rules based engine (find a strong signal, interpret/understand model well enough to translate/embed into a rules engine, look for a new signal in our models, rinse + repeat). We end up deploying a mix of rules based and true ML based models into production but it may not be immediately obvious to the end user which type of model they are using.


I didn't mean it as being disingenuous - that's precisely the value that was sold and if you could do the proper "knowledge engineering", it worked well. It's just interesting to me having seen the previous turn of the AI hype wheel, how much is being repeated.

Another interesting thing was the transition from special purpose hardware - Lisp machines - to C code on commodity platforms. A contrast from today's ML moving in the other direction.


That's fair. Google's recent paper on predicting patient deaths is another good example of this (logistic regression + good feature engineering performed just as well as their deep learning models, and the logistic regression has the added benefit of being significantly more interpretable and as a result, actionable).

It'll be interesting to see when specialized ML focused silicon will become readily available. Right now I find ML libraries that are able to run on blended architectures (any combination of CPU and GPU's) much more exciting/impactful than TPU's. The ability to deploy on just about any cluster a customer may have available is huge.


In the near future customers don't have clusters, cloud providers offer elastic adaptive compute sharing.


From my experiences (currently work with several Fortune 100 health insurers/benefits managers, and have previously worked for another large insurer, a major academic medical center, and a large pharma company), healthcare organizations tend to be rather cloud averse (most of our contracts very explicitly forbid us from using any form of 3rd party cloud computing). So while I agree that much of the heavy lifting will shift to the cloud (or already has), I expect health analytics will continue to favor on-premises solutions (GPUs still tend to be pretty rare compared to CPU based clusters but are slowly becoming more common).


As someone in the field, what do you think about the idea of a fully automated "doctor"?

Are we close to it being technically feasible , leaving aside regulation and interpersonal qualities doctors bring to the table ?


Depends on the definition of "doctor".

The likes of INTERNIST, CADUCEUS, and MYCIN have been around and provably accurate starting in the late 70s through the mid-80s. MYCIN even arguably sparked the 1st AI boom. But there were ethical issues with computer-aided diagnosis that I'm not sure have been solved/overcome.

Perhaps the current startup generation can get past them with Zuckerberg, Kalanick and Holmes as role models. :)


It's funny how much complex "AI" really comes down to If and Switch statements. "Utility AI" is popular for videogame AI right now - it's weighted switches.
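
A minimal sketch of the idea, with made-up actions and weights — score each option with a weighted sum of hand-written considerations and take the max:

    # "utility AI" = a weighted switch statement
    def choose_action(state):
        scores = {
            "attack": 0.7 * state["enemy_visible"] + 0.3 * state["health"],
            "flee":   0.9 * (1 - state["health"]) + 0.1 * state["enemy_visible"],
            "idle":   0.2,
        }
        return max(scores, key=scores.get)

    print(choose_action({"enemy_visible": 1.0, "health": 0.2}))  # -> flee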


Diagnosing vibrations is all the rage right now; it's just been rebranded as "predictive maintenance". The Industrial Internet of Things crowd is all hyped up about it.


This is the same thing done by a startup that is currently spamming Reddit with ads.


What about getting a nice check while thinking you're doing something actually productive which ends up, in the final analysis, being meaningless? Alternatively, what about doing something that seems soul crushing and pointless but is actually productive?


I'm living this right now and I hate it so, so much.


Nearly everything about working in modern corporations is terrible, so a good paycheck and not-too-stressful working conditions are close to the best that most of us can hope for.

Of course, there are those who make their way into the small clique of people at any tech company that get to do impactful, fulfilling work. But any given person is unlikely to be one of them, and if you want to be one of them you usually have to kill yourself working crazy hours first. (And probably afterwards too.)


I heard this joke years before working at Kaleida Labs (a joint venture of Apple and IBM), but it rang true while I was living the joke:

Q: What do you get when you cross Apple and IBM?

A: IBM.

While I was working at Kaleida, I gave a wild ScriptX demo to Lou Gerstner using a bouncing eyeball to navigate a map of interactive rooms. After the demo, he complained that "The eyeball was a little too right-brained for me." I was all "Dag nab it, I should have used the other eyeball!!!"

https://medium.com/@donhopkins/1995-apple-world-wide-develop...

"The last thing IBM needs right now is a vision." -Lou Gerstner

https://en.wikipedia.org/wiki/Louis_V._Gerstner_Jr.


> IBM is just too big to get out of its own way

Brilliantly put

Funny thing is, when it did get out of its own way, what did we get? The IBM PC


...A very successful platform. Just not for IBM.


"Well, everybody knows the important part is the hardware. There's a Mr Gates from a company called Microsoft that can sell us the software"


Well yes, that's because they immediately tried to differentiate themselves with fun stuff like Micro Channel Architecture to try and make their machines proprietary again.


> I stayed for a few simple reasons...the fat consultant check I cashed every Friday. Exposure to some outstanding engineers and coders where I got to learn from true talent. The great strip club on A1A next to my rental in Lauderdale-by-the-Sea.

Heh... I did some work for IBM at the office of Cypress Creek Road back in that same time range. My fondest memory of the entire experience is eating at the Calypso Restaurant[1], a great Jamaican / Caribbean Islands place nearby.

I'd almost go back to Fort Lauderdale and work for IBM again (if they even still have a presence there) just for the Jamaican Jerk chicken from Calypso.

[1]: http://www.calypsorestaurant.com/


I always thought Watson Health would be like a smaller spin-off product that wasn't influenced too much by IBM.

Given all the expectation of its product and the importance of ML/AI, it makes sense this was impossible.


This stuff is the friggen worst if you ask me. I'm so done with the AI hype train, and IBM is the worst. The linked article mentions a 'Watson's law' (similar to Moore's law, etc.). If you ask me, it is more likely for Watson's law to be that all commercial BigCo 'AI' offerings will burn through hundreds of millions and ultimately fail, rather than the intended meaning.

"Phytel’s contribution was analytics paired with an automated patient communication system. A clinic could use the system to search its patient records and find, for example, all the men over age 45 who were overdue for a colonoscopy, and then use an autocall to remind them to schedule the dreaded appointment"

This shit isn't AI, it's literally a database query and then some 3rd party library to send a text message or a phone call.
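
To put that in perspective, the communication half is just a loop over the query results and a call into an off-the-shelf telephony API. Twilio here is only an example of such a library; numbers and credentials are placeholders:

    # sketch of the autocall half; the phone list stands in for the query results
    from twilio.rest import Client

    client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
    for phone in ["+15551230001", "+15551230002"]:  # e.g. rows from the colonoscopy query
        client.calls.create(
            to=phone,
            from_="+15550009999",
            url="https://example.com/colonoscopy-reminder.xml",  # TwiML reminder script
        )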


Watson's law: as the complexity of technology and business processes increases, the amount of time it takes for people to recognize and acknowledge that the emperor is not in fact wearing clothes increases in proportion to the profitability of the lie being sold.


"The Emperor's New Clothes" are made of the finest blockchain silk adorned with golden AI brocades.


AI as a research field generates plenty of useful applied techniques that make real products possible. The problem, which Watson is ground zero of, is calling AI itself a product -- like something you can just mix in and make business processes better.


> If you ask me, it is more likely for Watson's law to be ...burn through hundreds of millions and ultimately fail rather than the intended meaning.

That’s already called Musk’s Law.


Seems like a strange name given none of Musk’s companies have failed?


In before


IBM was simply an AI hype train grifter. AI is still valuable (although it's not advancing as quickly now as it was in, say, 2015), it's just that you have to dig deep to make sure that the "AI" being used is actually AI and that it's being used in an appropriate way.


AI seems more and more, in communication to the general public, to mean "computer program that does something useful, that hasn't yet been commonly called a computer program."


Billions. IBM has spent billions.


The problem at IBM is not technological; it's managerial.

For a long while now, IBM has been treating "AI" as a product that can be managed, packaged, and sold by "general" business managers -- think MBA-types with only a superficial, qualitative grasp of deep learning and AI. Doing that with rapidly evolving technology is a sure-fire recipe for failure.

Most such MBA-types today are ill-equipped to manage, package, and sell "AI." They're roughly in the same position as English or History majors who are asked, say, to manage, package, and sell a new kind of quantum-computing technology without knowing or understanding much about quantum physics. The technology is moving faster than their ability to keep up.

IBM's mismanagement is a shame, because the system they showcased nearly a decade ago -- the one that competed and won in Jeopardy -- was state-of-the-art at the time.


It's the entire company, not just these Watson groups. They have a history of buying companies or taking over IT departments and re-badging employees, only to ax nearly all of them once the knowledge is transferred to some 3rd world country workforce.

Stories like this are not a surprise, it's IBM's way of doing business. Maybe Watson will learn HR and just fire everybody from middle mgmt on up..


As of 2018, after seeing how terrible “engineer-types” can be at engineering management, the MBAs are starting to look better to me.


As an MBA who browses HN, I'm torn between the two opinions


As an engineer who has an MBA from HSW, I'm even more torn.


erikpukinskis, airstrike, jeffjose: I did not criticize MBAs in general!

My comment mentioned specifically "MBA-types with only a superficial, qualitative grasp of deep learning and AI."

MBAs who understand what they're managing (and who know what they don't know) are not in that group. And BTW, I suspect most MBAs who read HN are not in that group either :-)


Not sure how to directly link to the post on HN, but I previously posted a (too long) comment that hits on some very similar points: https://news.ycombinator.com/item?id=16960675


MBAs vs. undergrads with liberal arts degrees is a whole other discussion. An argument can definitely be made for a liberal arts skill-set in terms of adaptability and capacity to quickly learn other skills/subjects. Not sure if that encompasses something as complicated or esoteric as the technicalities of AI/ML, though.


To be honest, I would rather work for someone who just got an undergrad liberal arts degree, who is aware they don't know technology. They can learn, and there's plenty of stuff about communicating with (and for) customers that I don't know or don't do well, so it can work well. Which is all theoretically true of an MBA too, but...


> “They couldn’t decide on a roadmap,” says the second engineer. “We pivoted so many times.”

> Both Phytel engineers say the offering managers didn’t have technical backgrounds and sometimes came up with ideas for new products that were simply impossible.

The death knell of all (potentially) good products. I don't know why this is so often the case. All software companies need engineers involved in product development decisions. Period. It's not optional.

Facebook was smart about this. They hired or retrained technical people to fill many business roles in marketing, product development, project management, etc.

I'm not sure why technical people are restricted to merely being the builders in these companies. Lots of other companies recruit internally from people familiar with the end product and train them in other business areas.

> these potential customers weren’t impressed. Instead they asked for something resembling Phytel’s old system.

So they simply imagined a new product without interviewing potential customers beforehand about what they actually want? They spent years merging the databases of two big systems, pivoted multiple times, only to find out there wasn't a market for it in the first place?

Why aren't the 'offering management' people getting fired?


A little case history - there were two illuminating threads ~10 months back where several current & former IBM employees commented on the growing disconnect between the reality and the marketing of Watson - it looks like they've been vindicated by today's news:

(1) https://news.ycombinator.com/item?id=14979642

(2) https://news.ycombinator.com/item?id=14766793


I've been saying Watson is a red herring for years: https://news.ycombinator.com/item?id=11262397


The health care landscape is strewn with the wreckage of software companies who thought the latest shiny software doodad could cause a "disruption" like it had in so many other industries.

People who know that healthcare is different try to warn them. They don't listen. Instead they charge in with people who have no experience in the field.

From the article:

After the acquisition, IBM management started the process known internally as “bluewashing,” in which an acquired company’s branding and operations are brought into alignment with IBM’s way of doing things. During this bluewashing, “everything stopped,” the first Phytel engineer says, and the workers were told not to focus on improving their existing product for current clients. “People were sitting around doing nothing for almost a year,” the second engineer says.


Then Theranos decided to jump onto that bandwagon and make a dumpster fire


IBM's organizational structure is book-ended with great talent. The engineers, developers, and even front-line managers are really fantastic, and the people at the very top are pretty good.

In the middle, there are 100 layers of middle managers that completely cock everything up, and the really sad part is that they have enough say to really cause damage. One of my first proper white collar technical jobs with them was an L2 support job for this network performance monitoring suite for huge networks... mostly large, national ISPs and the like. The job required maybe a just-post-jr-level sys-admin knowledge of networks and UNIX systems while also having smooth customer service skills. Definitely a great step up from my previous lower-mid-level IT jobs and call center work.

I had three(3) managers. Three! I had a technical manager, a non-technical manager, and my actual manager, who was the head of the department.

At the highest levels, the management was talking about switching everybody's workstation over to Linux. Everybody from admin assistants to developers to managers was supposed to be moved off of Windows at some point in the relatively near future. I was psyched— I hated windows, and the product I supported ran on Solaris, so not having to deal with the extremely primitive (at the time) tools like Cygwin to get some UNIX functionality on my machine was great. They seemed to be positioning themselves to sell the consulting for other large companies to do the same thing.

Though we got no word of this internally— I only knew from what I had read in articles— I found the internal workstation disk image on the intranet and eagerly installed it. It was pretty smooth! I was excited! As I was getting my tools set up, I noticed that it didn't have the internal bug/ticket tracking clients installed, so I cruised on over to their intranet page... hmmm, nothing listed for Linux. After hours of searching, I found some internal discussion showing that, months earlier, the department that writes that software unilaterally decided that they were discontinuing their initiative to port those applications to Linux. While there was an extremely limited CLI to these tools, critical functionality was literally impossible without the GUI app. Without the ability for anybody on their Linux workstations to interact with tickets or bug reports, the Linux initiative was pretty much dead-in-the-water for most technical people and their managers.

Perfect example of just how badly their forest of middle managers completely messes up great executive initiatives that the bottom of the food chain really wants to embrace.

(I might have gotten some of the details wrong. It was 13 or 14 years ago and I drank a lot back then.)


I'm in my final week at a large company and so much of this article rings true for me as well. I feel like the structure required to coordinate really big companies has some really negative emergent properties that make it very difficult for such a company to be efficient and innovative.

Look at a company like Google, which, without a doubt, has some of the best engineers in the world. How many false starts and just flat out poorly executed projects/products have they had in the last 10 years? Way more than you would expect from a company that puts such a premium on hiring the best.


It only reveals IBM's problem with AI if you were at any point under the impression that it was something more than marketing for them and didn't watch actual market signals.

The audience that gobbles up their ad campaign during the Masters that touted their "blockchain" logistics probably wouldn't even notice that they had layoffs at Watson Health.


Is Watson a codebase? Or is it just a brand?

It seems to me that Watson is basically just IBM's version of AWS/GCE services (at least the non infra ones). But it gets thrown around as a buzzword so often. The marketing makes it look like there's a single AI codebase that can be accessed through a bunch of APIs, but I would be very surprised if that was actually the case.


It's not the case. A year or two ago some of my friends who work at IBM couldn't tell me themselves what Watson really is. As of today I understand Watson as everything from IBM that can be related to AI, i.e. it includes IoT as well because the data could be used by AI.


> Is Watson a codebase? Or is it just a brand?

Both, sort of. There is a "thing" called Watson, which is related to the Watson that played Jeopardy. But "Watson" is also a brand which lumps in stuff that has absolutely nothing to do with the "old" Watson.

To illustrate a bit.. "Watson Health" is (or was) made up of a ton of people and technologies who came into IBM as the result of several acquisitions: Truven, Phytel, Explorys, etc. In many cases, they repackaged stuff from those vendors, gave it a "Watson name" and shipped it. And some of this stuff was literally no more sophisticated than linear regression / logistic regression, etc.


"IBM Watson Marketing Automation" being another I was made painfully aware of recently, which was the result of the Silverpop acquisition at least in part.


Here's an answer from someone who "worked with IBM until earlier last year as Watson architect" ...

https://news.ycombinator.com/item?id=15456211

It's reasonably in depth and appears sincere.

tl;dr: it's a good search engine and a disparate set of machine learning tools. Sales is promising Hollywood AI, but the reality is that it takes a sizable project team to build anything worthwhile.

Scanning past commentary, it seems that startups are eating their lunch (more nimble, dedicated to customer space). I'll add that half the machine learning battle is getting access to data, so hyping the brand makes sense strategically.


Watson seems to be hyped as powering the entire world but I have yet to see a real project using it in any capacity. I haven't even heard a cohesive description of what Watson even is, beyond surmising that it's a suite of AI-like services, although any APIs seem to be hidden within the broken IBM cloud interface, perhaps on purpose.


This sounds like a case where the AI is used in a "marketing" way to make people interested in a product, and then there isn't really much AI involved, and the people developing the AI have a struggle to prove that it's relevant to the business.

I wonder if DeepMind at Google has a similar problem. It is certainly getting a lot of headlines, but there are plenty of other AI groups within Google that do business-relevant things like improve search or ad matching or make Google Home's voice recognition work. I would not be surprised if in the long run DeepMind becomes a group that performed a neat stunt with Go, but kind of fades in practical relevance, like Watson with Jeopardy.


I've always viewed DeepMind as more of a skunk works program and less as a profit driven enterprise. DeepMind exists primarily to push the limits of what can be done when you put a group of leading researchers together in a room, provide them with nearly limitless resources, and simply tell them to "go". I expect some of that effort to eventually trickle down into Google's consumer products (maybe a healthcare focused version of AutoML https://cloud.google.com/automl/). Google has already done a lot of work on the HIPAA side of things (https://cloud.google.com/security/compliance/hipaa/)


> This sounds like a case where the AI is used in a "marketing" way to make people interested in a product, and then there isn't really much AI involved, and the people developing the AI have a struggle to prove that it's relevant to the business.

It's pretty common in many industries to have products to showcase your chops while providing zero real world value and zero sales.


Deepmind has at least one example of saving Google millions: https://deepmind.com/blog/deepmind-ai-reduces-google-data-ce...


IBM. Twenty years ago my late father-in-law had his ancient DisplayWriter (1st-gen word processor) break down.

He called his local sales office. Somebody said, we've got a couple in a storeroom someplace. I'll bring one over. No charge.

That was customer service. That's how they built their reputation. Now they seem to be squandering it.

Sounds like they're headed now in a direction where they sell their artificial intelligence as being smarter than their customers. Sounds like they insist on disrupting their customers rather than their competitors. That Doesn't Work.(TM).

Every time somebody does that to a hospital it gets harder for other vendors to sell actually-useful stuff to health care operations. Not good.


IBM Watson is bad for the "AI" community because when non-experts see IBM repeatedly fail they assume the whole field is nonsense. Hopefully IBM will be more cautious in what they claim to be possible. Hype and deceit do not belong in healthcare.


IBM literally does not care about “the AI community”. They will strip-mine the AI hype then move on.


I'm going to play devil's advocate and call it:

Self-driving cars have killed pedestrians, Watson isn't doing all they wanted... is this the dawn of a new winter?


I found out that Tesla's autopilot technology can't even really detect stationary objects. Admittedly this is technically hard, as it's actually a frame-of-reference physics problem, but the writing on the wall is clear: marketing departments are overpromising and no one is really delivering big on AI, not even Google, as a lot of other companies have already matched their efforts (Microsoft, Waze, etc.). IBM? Give me a break.

Anyways, may I interest you in a cheap VR headset?


I think self-driving cars are possible, and will probably become a reality. However, I don't think it will be a generalized AI driving them. I'm thinking more along the lines of a lot of subsystems, one of which is visual object detection with deep learning (the one thing where deep learning has shown it beats everything else); all the other subsystems, from the sensors up to the driving behavior, will have to be Properly Engineered(TM).


But do self-driving cars cause fewer accidents than humans?


Winter is coming. And none too soon. So tired of AI headlines.


It would be really great if we could somehow hold those overselling AI to account.

This may be possible this time round, because we’ll have a very good record of who said what and when via the web.

Without any kind of accountability, history will continue to repeat itself.

How about hyperbole.com, where you can google academic researchers and industrial leaders and pull up quotes from them, dated and fact-checked.

I’m sure you must be able to train a deep net to do this. They can do anything.


Pundits who amplify the current bullish buzzwords will never be punished, which is why they do it.


They should have asked the AI how to save their jobs. Unless of course it's not really an AI, but just marketing hogwash. Surely not!


Does it really qualify as "news" any more when there are layoffs at IBM?


Who makes money by pointing out flaws at IBM? Their stock on the NYSE is 139.31 as of Jun 25 12:58 PM ET. Is there a large enough short position to be worth buying a news article?


Already priced in and layoffs have an opposite effect.


I'm not too surprised; they come by our shop around once a year wanting to sell Watson analytics, and because ML is a buzzword in the political layer (our upper leadership) we politely listen.

It’s not really great. They can automate the process of finding reports, but the truth is, we have people already doing that and all the reports they’ve been able to find in their POCs were either useless or some we already had.

I'm sure ML has potential, but our analytics department is run by economists and political scientists who can't really utilize ML, and I'm not sure they could be retrained, even if they wanted to, which they don't.

We can't really hire ML personnel either. The only people who do it well enough are PhDs, and they have much better offers, and we're at least 5 years away from having a candidate pool of people with the right education mix (data + political science) and probably 10 years away from figuring out how to utilize them, as well as finding enough funds for a position.

So ML is mostly left to independent companies that cooperate/consult with municipalities on projects owned and run by the municipalities. IBM simply doesn't do this, and the consultant companies that don't suck all work with something else.

Of course this is the perspective from the Danish public sector; it might be different elsewhere, but I have friends who work in similar positions to mine in banking, and they're telling me the same thing.


>The only people who do it well enough are PhDs

This is really untrue. Maybe that's part of your hiring issue.


It’s always cute when people say stuff like that. We’ve tried quite a few and none have delivered any sort of useable quality.


Swedish here; I'm convinced things are not that bad across the sound. I've seen people with master's degrees deliver really good ML systems. In my team the person with the highest education is also doing a PhD, but the majority of us have master's degrees. But, sure, I've also met other companies in the same business who have multiple PhDs doing ML, and they'd made a lot more progress than we had in some areas, but we'd made more progress than them in others (one company in Norway I was especially impressed with). ML development is very different from regular software development and requires an affinity for math and creativity, sure, and a research background helps out a lot, but I would not go so far as to say that they're the only ones who are any good.

Have you tried hiring people who have documented experience doing ML commercially? Because I could understand how you get bummed out a lot if you hire recent grads to do ML.


well, to begin with, the heart rate signal on the logo is upside down :)


Look I think everybody needs to cut IBM some slack here. Integrating technology with customer needs is hard, and it takes a while to get good at it, to get a process for reconciling with product managers want with what developers can actually make. Once IBM has been in this technology game for a little while, I'm sure they'll get the hang of it.


People aren't faulting IBM for tackling hard systems-integration problems.

People are faulting IBM for over-promising, under-delivering, using misleading advertising, and internally seeming to have foolish management practices.


given this line

> Once IBM has been in this technology game for a little while, I'm sure they'll get the hang of it.

I'd assume the parent is being sarcastic


...and perhaps a little too subtle. Or maybe just not funny enough to be apparent.


Nah, it was funny, just a little too subtle for me at the time.


Haha... now IBM needs to find some nice wording in order to convince clients to keep paying for smoke and mirrors, believing in promises that "one day it will be so good". There is no measurable substance to any of the promises, and whoever still believes in them is just hiding their personal failure to see that in time. Some big clients will continue following the "all bets in" method, as they are already too exposed to IBM, and to admit that they were as wrong and naïve as children would also mean losing their nicely paid jobs. I am happy that after everything unravels with IBM, their companies will feel the full blow and their investors will see how it looks when you don't keep your management on a short leash.


I know very little about AI, and even less about management. But Phytel's story as told in TFA is depressing af. I wonder why nobody is held responsible for this catastrophic failure. Why isn't the head of Watson Health out on the streets looking for another job?


Anyone know what "RA" stands for? The employees kept mentioning it on the "Watching IBM" blog. I couldn't find a decent match via Google. For ex:

> IBM Watson Health has initiated a significant RA across multiple offices.


RA = Resource Action. Add it as the next entry on a long list of euphemisms for layoffs.


"layoff" being a euphemism for "mass firing of people", that has been around so long it doesn't even seem like a euphemism any more. As you said, it's a long list.


Resource Action. That's just euphemistic IBM jargon for layoffs.


Is it possible to dehumanize the firing process any further than the words, "Resource Action"?


Resource Attrition


They can't just build a system and leave it to the customers to do all the hard work of integrating various systems. Integration is the key and should be seen as a product in itself.

If IBM keeps milking the cow like it does with services, asking the customers to do the integration, it will die a miserable death, and unfortunately a premature one.

It should be pluggable, like a set of pick-and-choose building blocks of interfaces that only take domain expertise and custom specifications as input from the client (whichever domain). Until then, AI will only become an ambitious sunk cost.


I think their strategy of collectively branding every single thing as Watson is actually pretty risky. Branding is powerful but it cuts both ways ... it will only take one high profile failure for that to ruin the brand across every product that has slavishly adopted it. Even here you see it happening - layoffs in one part that got labeled as "Watson" are immediately seen as indicative of a much bigger problem with Watson. That would not have happened if they were independently branded.


Yeah, their branding of Watson is pretty bad since no one knows what Watson is anymore. It used to be the thing that played Jeopardy. OK, fine, we can generalize that to Watson being an NLP-driven information retrieval system. But then I got pitched recently on Watson being an Nvidia DIGITS-style system on the cloud. It was only then that I realized Watson is just a brand name for anything remotely AI related from IBM.


I'm not surprised. My company seriously considered Watson when we were looking at AI platforms. Their sales team is pretentious, and when you scratch the surface their offering isn't all that different from Google.

Undoubtedly IBM has done amazing work popularizing AI with Jeopardy and the debate bot. However, I think that halo extended to its competitors that, frankly, offer more things developers really need.


I was happily surprised the other day when the IBM debater was presented and there was no mention of it being Watson anywhere on their website. They really need to stop with this personification.



The problems with Watson do not mean that this is a problem with AI. If history is any guide, IBM fails at first but eventually succeeds during most technological revolutions (PC, e-commerce, etc.)


Can you restate that more precisely?

In particular, I'm not sure what sets comprise the numerator and denominator of "most technological revolutions".


I think I saw that one of Watson’s engineers is now working on a crypto project. SingularDTV? Heh, shows how much confidence was left in the program.


Every Watson ad I see on TV freaks me the hell out. Seeing primetime television commercials for business AIs feels too uncomfortably cyberpunk for me


IBM's problem with AI is that they don't have any AI


Pay no attention to the man behind the curtain!


Next month will be a good time for a data-rich medical system to offer terms to IBM.


IBM has become Unisys, they just don't know it.


Yeeeeep


What is wrong with our society? We need AI to tell us how to be healthy? Is this real? Sometimes I feel like I live in the twilight zone.

PSA for anyone with a disease that man and corporate picked a name for - you are feeding yourself towards that disease. Fix your diet (and 100% rid yourself of animal products, processed sugar, wheat, and any "food" that comes in a package with a label on it - hell stay away from anything that isn't a fruit or a vegetable), go on a dry / water fast, and heal yourself. Stop relying on doctors, AI, whatever the hell they produce supposedly for you.



