I think there's a major misunderstanding of Watson which isn't helped by IBM's Marketing efforts. IBM Marketing has been slapping the "Cognitive" label on everything and is creating unrealistic expectations.
The Jeopardy-playing Watson (the DeepQA pipeline) was a landmark success in Information Retrieval. Its architecture is built largely on Apache UIMA and Lucene, with proprietary code for scaling out (performance) and for filtering & ranking. I'm not an expert in IR so I won't comment further. This is very different from the neural nets that are all the rage in ML today.
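To make that distinction concrete, here is a minimal sketch of what a Lucene-style retrieval stage does, with scikit-learn's TfidfVectorizer standing in for Lucene's index; the corpus and the clue below are invented for illustration:

    # Stand-in for the Lucene retrieval stage of a DeepQA-style pipeline:
    # rank corpus passages against a clue by TF-IDF cosine similarity.
    # The corpus and clue here are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "Toronto is the capital of the province of Ontario.",
        "Chicago's O'Hare airport is named after a WWII naval aviator.",
        "The 1904 World's Fair was held in St. Louis, Missouri.",
    ]
    clue = "Its largest airport is named for a World War II hero"

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(corpus)   # "index" the corpus
    clue_vector = vectorizer.transform([clue])       # "analyze" the clue

    # Score every passage; downstream stages extract and rank candidate
    # answers from the top hits.
    scores = cosine_similarity(clue_vector, doc_vectors)[0]
    for score, passage in sorted(zip(scores, corpus), reverse=True):
        print(f"{score:.3f}  {passage}")

There's no "understanding" anywhere in that stage, just term statistics; everything interesting happens in the candidate generation and ranking built on top of it.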
I'd like to point to the following links from David Ferrucci, the original architect of Watson, and to this technical publication at aaai.org.
The DeepQA pipeline wasn't fluff; the intention was to take this question-answering pipeline and apply it to other verticals such as Law and Medicine, essentially replacing the Jeopardy-playing Watson's corpus of Wikipedia, Britannica, etc. with Legal and Medical equivalents.
Given its runaway PR success, the Watson brand was applied to many other areas which haven't been successful but I'd like to point out what the original product was here.
The original Watson work was very good, and adding neural-network based models as alternative question parsers and answer rankers is a clear way forward.
It's a pity IBM decided to use the Watson name for everything they do in ML now.
You got cause and effect backwards: since it's a glorified search engine, the marketing team did all they could to pass it off as intelligence and to muddy the waters around what Watson could and couldn't do. When sales targets were still falling short, they started dumping everything under the Watson budget to prop up the department.
I wonder what the solution is? Stronger brand owners within the company who can defend and be the quality control of what gets associated? Or is it hopeless?
The question I would like to ask is: if the architecture is built on UIMA and Lucene, what exactly did "Watson" do?
"Watson's" secret sauce was the filtering and ranking, you get multiple results from the models, which ones do you pick?
There was significant engineering effort put into reducing the latency of getting answers out of the pipeline as well, you had to optimize given the constraints of latency, accuracy and confidence from the pipeline.
The main trick behind Watson is to take the various systems (parsers, search, et al.) and hacks (constraints imposed by the rules of Jeopardy) needed by a Jeopardy-playing bot and put them all together.
So, in some sense, you could say UIMA is what Watson did -- because it allowed a lot of flexibility for researchers to combine their efforts. Ranking and filtering becomes of ultimate importance in a system like that because at some point you have to make a decision. However, it is terribly reliant on the other modules at least getting somewhere in the ballpark -- and the ranking is also not, by itself, anything impressive.
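A minimal sketch of that merging-and-ranking idea: each candidate answer carries a vector of evidence scores from independent modules, and a trained model turns them into a single confidence. The features and numbers below are invented; the published DeepQA papers describe the real merger as a trained model over far more evidence features.

    # Toy version of the "final merger": each candidate answer gets a
    # vector of evidence scores from independent scorers, and a trained
    # model turns them into one confidence. All numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: [retrieval score, type-match score, popularity score]
    X_train = np.array([
        [0.9, 1.0, 0.7],   # correct answers tend to score high across the board
        [0.8, 1.0, 0.4],
        [0.7, 0.0, 0.9],   # high retrieval score but wrong answer type
        [0.2, 0.0, 0.1],
    ])
    y_train = np.array([1, 1, 0, 0])  # 1 = was the correct answer

    ranker = LogisticRegression().fit(X_train, y_train)

    candidates = {"Chicago": [0.85, 1.0, 0.6], "Toronto": [0.80, 0.0, 0.8]}
    for name, feats in candidates.items():
        conf = ranker.predict_proba([feats])[0, 1]
        print(f"{name}: confidence {conf:.2f}")

The ranking model itself is unremarkable; as the parent says, its output is only as good as the upstream modules that produce those evidence scores.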
So, it's an interesting case of how far you can get by just setting a single goal and slamming everything together -- but as it turns out, for every new domain you wish to apply something like that to, that magic ballpark is hard to reach without a significant amount of engineering & research effort to come up with new systems/hacks combined with a lot of relevant data. In other words, just like any other adhoc AIish system with a particular goal. Change the goal, change the system.
So, of course Watson was oversold, it was a PR and Sales effort from the beginning. Sort of like AlphaGo or DeepBlue -- you might be able to find one or two interesting ideas in the bowels of such a system -- but the system itself is not a generic one.
(locked-down IEEE versions: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=617771...)
The challenge, as I saw it, was that no matter how good the tools and products used to help companies improve their operations through data analysis were, once customers realized they couldn't talk to a cube and joke with it about misused colloquial phrases, their disappointment overshadowed all the 'good' stuff it was doing for them.
No relationship works well if it starts with a lie and as this article shows, people do take those ads at face value and assume there really is a talking AI inside of IBM. Then they are hugely disappointed when they find out it doesn't exist.
People want to believe. They'll pay for their belief. Then when projects start to fail (~5 years after commercialization) they form the opinion that it was never going to work and they'll never do it again.
... then we wait another decade, repeat.
Anyway, The Turing Institute existed because of Don's drive to commercialise Expert Systems. Don was a driver in the Alvey project, a grand project to solve issues around Expert Systems in response to the Japanese Fifth Generation project. A man called Lighthill then rumbled the Expert System movement. If you want to watch a demolition of it, see the following video:
and the other 5 of them (you will find them in the sidebar!)
Fun fact: in video 6 you can watch Christopher Strachey have a pop at AI. Christopher didn't work at Bletchley (he was too young), but he did work at Manchester with Alan Turing, and they created computer music and love poetry in collaboration before Turing killed himself. Strachey went on to be the first Professor of Computer Science at Oxford; he was a scion of a great family of artists and poets.
But enough of this. The Turing Institute was an attempt to say "we are right, look", but in fact it failed. Don was clear why: once you hit 40k rules, knowledge bases became completely intractable. It was impossible to make improvements or changes. This was christened the Knowledge Acquisition bottleneck, and Don and others turned to Machine Learning (he liked to call it Behavioral Cloning) to solve it.
1. It was done to death by peer review.
2. It died commercially.
3. Its main protagonists decided to do something different.
Wow, what a rollercoaster of a paragraph! Thanks for the story. :)
> ... died May 23 in a two-car accident on the New Jersey Turnpike. He was 86. His wife, Alicia, who was 82, also died.
Donald Michie did work (with Turing!) at Bletchley Park, and he did die in a motor vehicle accident. He was 83 when he died (not 91). His ex-wife (they had remained friends) was in the car with him and also died. She had never defected to Russia, but she was (or had been) a member of the communist party.
Would you mind going into detail? I looked up three news reports and his wikipedia page and none mention anything beyond a car crash, and searching "Don Mitchie russia" and "Anne McLaren russia" gives only this HN thread as a relevant result.
It sounds like you were close with them, so if there's anything more to that story then it would be interesting to hear it.
There is a detailed year-by-year description of her work, and I can't see how she could have "defected to Russia", except for one note: "...she was a member of the Communist Party of Great Britain..."
A) Most of the documentation is on the other side of the digital event horizon (so pre-1995).
B) I separate knowledge about AI (that is, the mathematical and systems discoveries) from AI as a product. The assertion is not that the knowledge failed, but that the product failed.
As you pointed out, the math behind expert systems was solid and functional for certain problems. But the collapse of the product (aided by the popular science press hunting for whichever researchers would promise them the most outlandish things) broke the funding channel for anything labelled AI.
As a result, methods used by expert systems lived on and solved problems, but nobody could call it AI because conventional wisdom knew that "AI was a failure."
I actually wonder if we're swinging too far in the opposite direction of data vs. "understanding" the world. It may turn out that we can make some things pretty good using ML but not the last 5% needed to make them truly usable.
Interesting hypothesis, care to expand your thoughts?
Are you saying something along the lines that our current ML techniques build an initial raw decision with statistically-based reasoning, which usually works 95% of the time (Metzinger's "sentience"), but that to solve the remaining 5% of edge cases we need to build more reasoning-style decision-making (Metzinger's "intelligence")? Metzinger is covered by Peter Watts; jump to the "Sentience/Intelligence" section.
It was founded by David Ogilvy.
One of his big things was "nothing kills a bad product faster than great advertising"... he would be so ashamed of these ads.
I can only imagine the look on marketing's face when they heard that one.
Indeed the whole thing looks like a database with basic AI as a sales argument...
[0 - in French] http://www.silicon.fr/credit-mutuel-non-ia-watson-magique-17...
Even based on the technical marketing claims the consultants were coming in with, it was a product less capable than the IBM Prolog-based expert systems that I built out 15-20 years ago to triage and correlate network events in a large WAN infrastructure.... and that was a product with a whole slew of major implementation and operational problems!
That sounds like IBM Tivoli Enterprise Console (TEC). They wrapped a big Prolog-based DSL that the users wrote their correlation rules in, around the underlying Prolog engine. You could dip into the raw Prolog if you wanted, but it was considered an advanced user technique. The vast majority of "advanced" users used the set of cookbook Prolog functions the DSL was represented as, and even more users only ever touched a GUI that represented a tiny subset of that cookbook. People figured out what parts of Prolog the Prolog engine actually supported mostly through trial and error at first, before IBM finally published more details about it.
This rules-based approach ran into the same challenges as expert systems: beyond a certain threshold number of rules, reasoning about the rule logic became dramatically more difficult. It didn't help that IBM never supplied support tooling for the underlying Prolog engine to aid in performance profiling, debugging, build support, etc.
The lesson I drew from that experience is if you supply your users with an embedded language in your app, make sure the advanced users get access to the kind of tooling full developers want.
It is interesting to see that the Watson-Jeopardy system uses Prolog. I have to wonder how much of that was re-purposed for the chatbot project you observed. If it was, then its disappointing performance might be predictable; the range of possible problem spaces in a support setting is going to be larger than the relatively constrained NLP of the Jeopardy format.
I've yet to see a good NLP system oriented towards technical software support, backed by a support knowledge base, that functions substantially better than current text searching and result ranking technologies, so I'd be very interested to hear about other people's positive results with applying machine learning to this area.
We started with a really promising system that would let us call out to compiled Python code and pull in some more capability. But by that time we had finished the original project and moved on.
Typically people are deploying chatbots as a cost cutting measure to reduce labor. Sometimes that means using the chatbot as a fancy IVR, getting customers to abandon the transaction or any of several other paths. Actually answering the questions doesn't always matter!
The only good thing to come out of IBM in years is their Hyperscan regex library, and unsurprisingly they don't market it at all or build practical applications with it.
Conventional wisdom is that, in a free market, a single corporation will grow to rule everything. But that never seems to pan out.
Part of the reason is that the larger an organization gets, the more inefficient and bureaucratic it gets, the less it is able to adapt. See "The Innovator's Dilemma", a book everyone interested in business should read.
The economic calculation problem, described by Mises in his criticism of centrally planned state economies, starts rearing its head in large firms as well, leading to the inefficiencies you mentioned.
I've concluded that the leadership of IBM is what blows, and the C level offices should be gutted and replaced with whatever good engineers remain.
I've seen leadership make or break every organization I've been a part of.
The issue with MS isn't whether or not they can build a quality mobile OS (they can), but rather it's whether or not MS can build an ecosystem with enough buy-in from companies to build apps and services.
I'm sad to see MS do so poorly with their mobile OS, considering how good it is.
I think there might be a titanic fight shaping up in the economy to build out and secure ecosystems in a more Net-centric future. The WalMart diktats to truckers to not drive Amazon loads are just opening salvos in this war, for example.
I see healthy, vibrant ecosystems with a bidirectional value exchange as a way for very large companies to mitigate much of the damage inflicted by their own size. Unfortunately, most large companies are trying their damnedest to build out ecosystems as command and control bot networks, and see it only as a profit source to unidirectionally squeeze. That's tempting because it's so easy, but it means your command and control decisions must be better and more timely than the ecosystem's wisdom of crowds, or you'll suffer the consequences.
But instead they wanted a piece of the consumer app market so badly that they rebooted their platform 3-4 times chasing an impossible dream.
A phone that works great with corporate networks, allows for editing Word and Excel documents, could have gone very far a few years ago. Even getting corporate app developers to port their apps to Windows phone would have been easier than chasing Zynga down.
This article seems to run counter to your intuition:
> In the fourth quarter of 2016, more than 432 million smartphones were sold, according to a report published on Wednesday by the research firm Gartner. Of those, just 207,900 were BlackBerry devices running its own operating system.
> That gives the Canadian smartphone company a share of the overall phone market of less than a single percentage point. To be precise, it's 0.0481%.
> Even when you include BlackBerry's devices running Android, its numbers are still incredibly low. Last year, CEO John Chen said the company sold only about 400,000 devices in the second quarter.
Blackberry's market share is much nearer my own (i.e. 0 phones manufactured or sold) than it is Apple's or Google's.
The list of Dow components is a particularly compelling study. It firms up in the 1920s and 1930s, and is relatively stable through the 1950s, then starts turning over at an accelerating pace.
See also Deloitte's "Shift Index", which a Forbes writer has covered for years: a long-term secular decline, dating to the mid-1960s, in return on invested capital.
I'll second that - both Azure and Surface are best-in-class.
> Azure is an absolute joy to use and administer compared to IBM Bluemix.
Note to other readers: although slightly tangential, these two comments do not actually disagree or contradict each other.
I spent some time recently diving into Azure ML Studio. It's a rough product still, with a lot of opportunity for optimizations/better UX.
I reckon the administration is much nicer, and probably closer to the designers/architects' UX understanding.
See "Titan" by Chernow.
The extreme scenario doesn't pan out - we don't end up with one eternal corporation that owns all capital. But certainly it pans out in a more realistic scale of size and time; there are and have been plenty of monopolies in many industries, and the overwhelming power and influence of large corporations, and the damage that power and influence does, is well documented.
Which is a symptom of publicly traded companies, right? It creates this weird system where finding a niche and serving it reliably year in and year out is considered failure; you're only succeeding if you're constantly growing, which means you inevitably collapse under your own weight.
Just today I had a 10-minute chat with one of my colleagues from marketing about AliExpress. I live and work in Eastern Europe, and almost all the people who do online sales here rely on a Chinese company like AliExpress. Some of those people were using the UK as their source of merchandise until a couple of years ago; not anymore, it's all China now. The US (and the UK & Ireland) are losing big on this because Amazon's stock price is blinding them. IMHO Amazon is going to hit a wall in online sales sooner rather than later, and the only thing left to save them will be AWS.
IBM's management believed shareholder value was important in about the same way Bernie Madoff believed his investors' well-being was important. Which is to say, not at all in actuality.
IBM's management was negligent in regards to shareholder value in almost every way possible.
Believing shareholder value was so important would require simultaneously believing that having good engineers is critical.
The management decided to fleece the company for their own near-term benefit, to the detriment of shareholder value (which is now being represented in the stock price accordingly, and it'll get much worse, soon).
IBM Real(1) annual Revenue (in billions of 2016 dollars):
(1) adjusted by BLS CPI Calculator
Project homepage suggests this is an Intel project.
> Hyperscan is licensed under the BSD License.
> Copyright (c) 2015, Intel Corporation
I appreciate their perseverance if nothing else. If you look at the fate of companies like Nokia, IBM has somehow managed to avoid that despite what have been meteoric shifts in their business.
Plus competition is always good.
Watson itself clearly was a great product and a big step in the development of AI. Just because the marketing department is making a mess of things shouldn't take away from the work that was done on Watson in the first place.
The idea that IBM Watson is some uniform AI-in-a-box with a bunch of REST APIs to "expose" its intelligence seems to be the sales pitch. It's not. It's just a bunch of acquired products (you can see this when, e.g., Watson Knowledge Studio breaks and you see the Python scripts that glue everything together in the backend) that are poorly integrated, probably because the left hand has no idea what the right hand is doing.
The other day I saw a TV ad for Watson for the first time: the ad showed a bunch of aeronautical engineers who instead of having to take all kinds of measurements of airflow around a hull or who knows what else (the TV was muted) now sat behind a single computer that would do the whole thing for them. It was such bullshit, I couldn't believe my eyes.
Imagine Amazon promoting SES as this amazing tool that'd automatically send marketing emails, create a marketing funnel, and drive user acquisition cost down by a factor of 100, without you having to do anything. That's the level of vaporware they are selling.
My manager was expecting that after a week we'd have something akin to Google's "hello google!" at hand, through Watson. It took a couple of coworkers coming up with even more impressively broken examples to finally convince him that the technology was bad.
I wonder how often that happens.
EDIT: I have to say, I bet this happens quite often in fact, especially with software.
At first I was annoyed by your comment, but then I realized how true it is and sulked instead.
The good consultants are leaving as soon as they hit the end of their first promotion cycle. The duds stick around and assume the upper-level manager roles. The group is in bad shape now and poised to get worse in the coming years.
Services Profit = Bid - Employee Cost + Upsales
Bids are only so flexible. Employee cost is much easier to minimize. So inevitably, all consultancies trend towards providing the cheapest employees that will satisfy the customer. And if the customer isn't constantly vigilant, the quality can get pretty dire indeed (BAs who just learned to "code" yesterday).
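A worked toy example of that incentive, with all numbers hypothetical:

    # The bid is fixed by the market, so employee cost is the lever:
    bid = 1_000_000
    upsales = 100_000
    for employee_cost in (700_000, 500_000):
        profit = bid - employee_cost + upsales
        print(f"employee cost {employee_cost:,} -> profit {profit:,}")
    # employee cost 700,000 -> profit 400,000
    # employee cost 500,000 -> profit 600,000

Every dollar shaved off staffing drops straight to profit, which is why the pressure toward the cheapest acceptable employees is structural rather than a matter of any one firm's culture.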
Depending on who you get, this could be an upgrade. I've met some damn good Indian programmers who work for contracting shops (and, sadly, make way less than they would in the States). But I've also met some who literally don't know how to code.
Fundamentally, the problem is that these firms are used for work that is not valued by either the customer or the implementor. The primary concern is cost. If you are a developer for a tech company working on core products, you drive revenue and are important. If a feature you implement is marginally better than the competition, your firm can win the market and, due to economies of scale, the revenue gain will be huge. Because revenue is sensitive to small differences in quality, you want the best possible developers that you can afford.
As a result, developers viewed as part of the firm's comparative advantage are paid well and treated well. But if you are a DBA working at Walgreens, then you are not a part of the comparative advantage of the firm. If you are marginally better at your job, revenues will not go up. You are a cost that will be outsourced in a heartbeat if the firm could save a dollar. It's like doing landscaping at Google. Regardless of how awesome Google is, they are going to look at their landscaping expenditures purely as a cost. Revenues will not go up if you do a better job. Thus they don't care about getting the best possible landscaping; they want adequate landscaping, like water running out of a tap.
That's the type of work that is passed on to these outsourcers. It may be technical in nature, but there is little to no perceived marginal gain in the work being of a higher quality than the minimum amount necessary to be fit for use: maintenance of products that companies no longer want to invest in, but they have service contracts in place that require maintenance. The goal is to minimize cost while not breaching the contract. Or bespoke work in the B2B space where you don't get the type of economies of scale that result in a meaningful revenue bump when one particular piece of code exceeds expectations. In all these things, the driver is the cheapest talent such that the output is adequate.
That means that these jobs are not rewarding to the developers who do them, and so these firms have huge problems with turnover and keeping good developers motivated. Those developers who remain are the ones who don't mind working on code that no one cares about, being paid poorly to do it, and being treated as expendable. In other words, the least motivated, passionate developers are the ones who remain. It is no wonder that they will tend to be the least talented.
Truth. I've noticed many contractors take specs _very_ literally. While that might be good in that the result won't be _wrong_, the product usually becomes quite brittle and can't "scale" well.
BPO / offshoring is a whole other can of rotten worms.
This is sometimes referred to as the 'Dead Sea Effect'.
Water carries salts and other minerals to the Dead Sea; the water then evaporates while the salts accumulate.
Hiring works similarly with a mix of good and bad people: over time the good ones leave because they have options, while the bad ones stay and accumulate.
There's so much I could say about the crap I found (and if someone asks, I'll be happy to!), but I can summarize it by saying that it was bad even compared to what I've come to expect from plugin- and WYSIWYG-layout-builder-laden WP sites, to the point of being utterly unmanageable from the WP backend interface, even on a local setup, because merely exploring the ridiculous custom interface pages takes ages.
I truly cannot imagine this code being written by anyone but a lowly, unsupervised intern at this company. And yet, based on my experience, it was probably just some 'regular' employee, unnoticed by his manager and his manager's manager, with the client none the wiser, because how would the client be able to assess competence?
That being said, if they weren't trying to eke every microsecond of performance out of the system, then that's pretty poor design, maintainability wise.
She wasn't trying to eke every microsecond. There were some smart guys on our team, and we were all afraid of that code! The programmer who wrote it would fire back about her Mathematics PhD if you ever tried to bring up Object Oriented design. When I was there, she would sit in the cafe downstairs eating a sandwich, until something broke. She wrote herself fantastic job security!
After two full weeks of him quietly 'working' at his new MacBook, I got a bit suspicious about the fact that he hadn't asked for help once. So I decided to check up on him and ask how he was settling in with the codebase and whether he could use any help.
He opened up his editor to show me something he was struggling with. I gave some suggestions (look at this code here, do something similar).
After some back and forth it became clear that:
1. he was entirely unfamiliar with Ruby On Rails
2. he had no idea how to use a Mac
3. he had not bothered to learn how to use a Mac while sitting at his desk for eight hours a day for two full weeks
4. his approach to copying some existing code and then changing it for the new use case involved mostly clicking through the Edit menu. He was less familiar with keyboard shortcuts than my mom and her mom. CMD-C was not in his vocabulary.
He was let go shortly after, but of course he did get a full month's pay, an amount that would comfortably support the frugal freelancer for at least another three months.
Honestly, I'm mostly thankful for this situation. It cured a decent amount of my 'impostor syndrome'. It also left me wondering how these kinds of things happen.
This company was one that lots of good developers would want to work for, in one of the most desirable cities in the world, and they were constantly in need of good developers. I was baffled by the mismatch between what the market offered (plenty of good developers, more than most cities) and their constant need for developers, to the point of hiring this guy and not realizing that he didn't know the basic copy and paste keyboard shortcuts until two weeks in.
When I got into the group, I was asked to do stuff I had never done. Okay, I start to learn it, but slowly.
Then I was reorged into a different group, doing stuff I had never done. Okay, I start to learn it, but slowly.
Then I was reorged into a different group, now I'm a Developer, when I've never programmed professionally before.
I am far, far worse than my colleagues at even simple tasks. It's not that I'm not trying; I'm working tons to learn data structures and programming so I can do my job.
But it would be entirely valid for my coworkers to feel the same way about me, as you describe about this guy.
It's likely going to take another 4-6 months before I'm competent at my job. Will my colleagues still care at that point? Will I be reorged into another position by then? Who knows?
And I'm an expert at many fields, just none of the ones I'm being asked to do.
I'd leave, except my colleagues are great, the topics are great, and if someone is going to pay me to learn how to be a proper developer, I'd be stupid not to do this.
But it's hard as fuck, and I'm certainly not pulling my weight.
I do think that he might've had a chance (perhaps downgraded to a junior position) if he'd shown willingness to learn though. The fact that he just sat there for two weeks pretending to be busy was the primary reason he got let go.
Hell, I spent time stuck on a problem because I had never used code folding before, and couldn't find the import statements I needed to edit.
Would you agree that not understanding code folding is a bit much?
I'm not defending the particular individual, or the situation, only giving some context to my comment.
Not knowing the copy and paste keyboard commands is a completely different realm of incompetence!
But you're right that lots of developers don't know many keyboard commands. I was being a bit too broad. Personally I can't imagine how any programmer would not bother learning keyboard shortcuts (or even better vim keybindings), but I've met plenty who didn't.
That being said, while using the 'edit' menu to copy/paste is clumsy, I don't see that it would actually have a significant effect on someone's productivity. So I would not go so far as to say that someone was incompetent just because they did that.
The ones that do stay in that environment either have golden handcuffs or are goddamned patriots and love the challenge. I like to think many are the latter as government is an amazing place to do impactful work that is too important to screw up.
Update: I’m not saying this is a good thing, just an industry observation.
It's somehow really counter-intuitive and uncomfortable to challenge someone's honesty openly, maybe?
For example, I can be relatively rude and direct (without meaning to be about half the time), but when someone tells me a story that strikes me as unlikely or heavily embellished, I feel really uncomfortable challenging the story, because it's tantamount to saying "you're lying".
In the situations where I was 'conned', in hindsight I could have probably found ways to test the person involved surreptitiously. It just didn't enter my mind to do so. But openly challenging them just felt wrong to me, and it still does, because if you're wrong you just accused someone of lying.
So these days I make a game out of finding ways to surreptitiously test the people around me. I often forget how useful that approach is (trust, but test), and I guess your comment reminded me to apply it more.
Whether it's a good plan to challenge someone's honesty actually has nothing to do with whether or not that person is lying, even if I have certainty and evidence. The question is: what is the best thing I can do for my life? If my accountant lies to me, I will call the police. But if someone turns out not to be as skilled as claimed? It depends on context.
It doesn't advance my life to tear down someone else; the best thing I can do is try to build my own understanding and take action. My approach is to be curious, actively listen, and ask clarifying questions. This can be as simple as saying "tell me more about that." Once I have enough information and context, I can judge claims and ideas directly.
It's not about tearing down someone else, it's about assessing the validity of someone's statements.
I might have presented it as more suspicious/antagonistic than I meant. My actual approach is pretty much as you describe in your last paragraph.
Basically, my approach to 'new humans' is the biblical adage: "be as shrewd as snakes and as innocent as doves", because I do believe, as in the biblical context, that good people are sheep among wolves. I try to be a good sheep, but I know for a fact that I can't handle wolves without being clever.
So far it works pretty well for me, but I've sometimes found it difficult to explain how I try to be both a sheep and shrewd: calculating, but not manipulative.
That said I’ve witnessed plenty of cringeworthy moments watching members of the MIT community deal with guest speakers during Q&A.
In corporate environments that I've experienced, the otherwise very argumentative developers would never actually challenge Employee No. 9610 in their basic skills. There's a formal distance in every interaction, pros and cons.
But in the few startup environments (<15 people) the culture was much closer, and there's no way that they would not have been 'caught' very early on, probably even in the first interview.
Even the nicest manager and teammates in the world don't have to proactively call someone out on being incompetent if every sprint their tickets are going unfinished, etc. And with the tooling, it's an automatic paper trail.
I think everyone is going to get a lot quieter, as you don't know who's listening.
You can't realistically look at engineering culture and say, wow, here's a sympathetic bunch who stay quiet when they see something wrong. Just read what industry leaders Richard Stallman or Linus Torvalds write.
Exactly. Between my freelancing and full-time experience, I've been involved with dozens of teams. Very few of them react well when you question their abilities, no matter how fair and justified the question.
However he's no longer at IBM...
The brilliance of the team... priceless.
The cost of everything, the value of nothing: this is what will be written on the tombstone of capitalism.
disclosure: I haven't read the article but wanted to share a related story.
In recent years, as news articles heralded the future of Watson for various industries (including healthcare and supply chain), I predicted a similar path: an amazing product in a very narrow environment, designed specifically for marketing and selling purposes, and not very adaptable.
FTA: “And everybody’s very happy to claim to work with Watson,” Perlich said. “So I think right now Watson is monetizing primarily on the brand perception.”
This is painfully obvious, as this has been IBM for a very long time.
"Such "autonomous vehicles" will be a reality for "ordinary people" in less than five years, Google co-founder Sergey Brin said Tuesday."
How's that working out for you Sergey?
Self-driving tech hasn't moved appreciably since the 90s. Computers are better. More people are trying it. The reality is, it's just collision avoidance with GPS. They rely on maps; what happens when the maps change? What happens when a kid runs out in front of one of these things?
This is basically what all driving is, no? Go on the road, don't hit other cars, obstacles or people and navigate to a destination.
Computer programming is just typing words!!!
I mean, the fact that they've been using funds from advertising to research this problem is admirable in some sense, but they are actually taking someone's money to do this.
Hype from marketing departments on technical matters is not to be trusted.
Until last week I was on a 6 month contract as a senior DevOps engineer for IBM/Watson. I was responsible for one of the huge real-time data ingestion pipelines that Watson receives. I left to work elsewhere in spite of being offered an excellent position. (If you guys are reading this, hi.)
I went to IBM not expecting much more than working as a cog in a lumbering giant.
Watson is the fastest growing part of IBM. If IBM has all of those eggs in one basket, it is the Watson basket. There were lots of jokes about cognitive in the office pool.
That said, it was by far one of the best-managed companies I've seen. They have some fantastic data engineers and scientists. They are backing most of the open-source projects related to AI and next-generation tech: Spark, VoltDB...
The ads might seem sensational, but the concept of a black box that orders preemptive maintenance for an elevator isn't far fetched...
Moreover, Watson has so many current customers because it is valuable. The technical advisors that buy products don't put faith in ads any more than we do.
IBM would be ridin' high!
BTW, this is still on the table for IBM if they are prepared to invest 18 months and focus gung-ho on Oracle. In 18 months it won't be, as "other players" will have moved, like Sauron, to hit this market.
Watson is a case study in this, but I know Google has big plans for applying the tech behind AlphaGo in medicine. I wish them every success, but I'm concerned they will hit similar specialisation issues.
The Manhattan project is often given as an example, but that project was based on solid theoretical models and repeatable experiments. They knew exactly what they were doing and how they were going to do it.
The problem in AI at the moment is mainly developing talent and expertise, not capital funding. I'm all for advanced research into technologies. However we need to advance that research on as many different fronts as possible in a flexible way simply because we don't know how we are going to solve the problem of general AI. Guessing on an approach and spending billions on it would be a complete waste. We're simply not at a point where that would do any good.
I think that there's a business-case issue here: MBAs see this as the mega win, and the research staff can't say "it won't work" (because you are immediately flagged as an obstacle to progress, and the execs either cease to listen because they don't want to hear it, or you get bullied with an avalanche of attacks on your team, or sacked).
Sadly I think it is being used wrong...
IBM is focusing on using Watson to cure very specific diseases, like certain types of cancer.
I think a far better use for Watson would be initial diagnosis. For example, my life got massively delayed because I got hypothyroidism as a teenager, and only using internet data could I self-diagnose and self-treat (because doctors are still unwilling to help, not trusting the data; and before someone comes to berate me for self-treatment, it is working...). As an adult I could finally get my life 'started' (hypothyroidism affects physical and mental development, and slows down metabolism and the brain).
During my quest I met many, many, many people on the internet who had self-diagnosed with something using the internet as a tool. All of us would have been diagnosed properly if Watson were being used in the doctor's office, with its data-crunching capabilities taking symptoms as input to find out what problem we had. (In my case: Hashimoto's disease.)
No, unfortunately, I don't think there's any basis to believe that you could have been diagnosed properly by "Watson". Maybe by better data management in medicine. Not by an IBM brand name.
From the article:
> “Why confuse people and make them think it’s going to find something that a physician couldn’t possibly find?... Then you’ve moved into what strikes me as unethical territory when you’re potentially giving hope to people who should never have placed hope in that kind of a system because it’s not a magical box that does that stuff.”
Think about the human condition, think about human biology. Be clear: these are systems beyond current science... and doctors deal with them every day!
That's an odd question. People in tech are always saying "teach everyone to code!!". There's your why...
They are the only company that charges you to sample their APIs. They are the absolute worst, an infection that needs to be cured.
Seems like a good long-term investment, in an era where people are constantly criticizing CEOs for short-term cash grabs. Why shackle yourself to a vendor if you plan to be around for a while?
I mean, all the datasets, dozens of libraries, stunning NN demos and training sets, and TPUs (multiple versions at that!) could've come out of the company.
Think if Keras and TensorFlow were from IBM. Or all those cars now running Nvidia Jetson, or mega datacenters running Nvidia V100s or Google TPUs.
Shoot they even had a chance to enhance PowerPC ICs for NNs.
Alas but nope.
I'm pretty sure he thought Watson was a person.
And that the over hyping of it today will negatively impact its adoption when ready for general use?
It will happen again. The use of the phrase "artificial intelligence" itself ought to be forbidden until such a thing actually exists in the corporeal world.
Maybe an 'AI recession' is plausible, but a large-scale loss of interest is unlikely given that AI / ML / deep learning (pick your favourite word) is already too embedded in commercial applications.
Like always the hype in the field is strong, but this time we actually have the data growth we need to produce palpable results (and already have).
Imagine analyzing product reviews to determine whether they are positive or negative.
Type "I like it", and see the inaccurate targeted sentiment (neutral sentiment instead of positive).
If you change your example from 'I like it' to 'I like puppies', the 'targeted sentiment' changes from neutral to 'puppies positive 0.550431'.
While the first example you found does seem odd, the rest seems to work quite well. Maybe it just has problems attaching 'it' to a 'target'.
I don't seem to hate it.
I don't seem to like it.
Edit: In fact it can even be fooled just by increasing the linear distance between a positive term and negation. The following is rated +0.6!
I don't in any way shape or form like it.
Edit: My suspicion was confirmed when I tried "I like it a lot", which had a positive sentiment of 0.711.
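That pattern is exactly what you'd expect if the "model" leans on a sentiment lexicon with a short negation window. A deliberately naive sketch (the word weights and the one-word lookback are invented) reproduces the same failure:

    # Naive lexicon scorer: polarity only flips when a negator is
    # immediately before the sentiment word, so distant negation is
    # missed. Weights and the adjacency rule are invented.
    WEIGHTS = {"like": 0.6, "hate": -0.6}
    NEGATORS = {"don't", "not", "never"}

    def score(text: str) -> float:
        words = text.lower().split()
        total = 0.0
        for i, w in enumerate(words):
            if w in WEIGHTS:
                # only looks one word back for a negator
                flip = -1.0 if i > 0 and words[i - 1] in NEGATORS else 1.0
                total += flip * WEIGHTS[w]
        return total

    print(score("I don't like it"))                            # -0.6, negation caught
    print(score("I don't in any way shape or form like it"))   # +0.6, negation missed

Whatever Watson is actually doing under the hood is surely fancier than this, but the identical failure mode suggests it isn't doing much more with negation scope.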
Quite a few seemingly similar AI/natural language/etc problems are actually very separate.
Personally, I'd be happy to see the paragraphs/minutes at the beginning of far too many interviews about "intelligent" machines exchanged: instead of straightening out the misconception that Watson is an example of this hot new "Deep Learning" thing and one of the pinnacles of achievement in the field, we could get some more valuable commentary from leading researchers.
If you think about it, even with 90% accuracy they are worth it, since they could almost fully automate some cognitively difficult tasks, and a human can intervene in the instances they can't handle properly. So instead of 10 humans watching over something, you end up with a single one. It will take some time until everything we have in research makes it into production, and that will change a lot of things. After that there might be another AI winter (who knows if we can do general intelligence?), but with the current pipeline it might take a while to reach a plateau.
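A minimal sketch of that human-in-the-loop triage (the model interface, threshold, and numbers are all hypothetical): accept confident predictions, queue the rest for a person.

    # Route low-confidence predictions to a human reviewer. With a
    # well-calibrated ~90%-accurate model, most items never need one,
    # so roughly one reviewer replaces ten. Threshold is hypothetical.
    CONFIDENCE_THRESHOLD = 0.9

    def triage(items, model):
        auto, needs_human = [], []
        for item in items:
            label, confidence = model(item)  # assumed (label, probability) interface
            if confidence >= CONFIDENCE_THRESHOLD:
                auto.append((item, label))   # accept the model's answer
            else:
                needs_human.append(item)     # a person reviews only these
        return auto, needs_human

The catch, of course, is that this only works if the model's confidence is actually calibrated; a model that is confidently wrong sends its mistakes straight past the human.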
I've still got great shiny magazines from that time, though.
I had researched Watson when it won Jeopardy, as much as I could find publicly, and concluded at the time that it wouldn't stand up to the hype IBM was creating.
Some of the cognitive services they are offering today are not half bad; also I can say their salespeople are doing a gangbusters job in places.