I work in the public sector of Denmark, and we’re targeted by a lot of the hype. Which is worrisome, because it might actually lead to stupid projects if it becomes a political focus. So far it hasn’t though, and blockchain never did, so who knows.
The thing about ML is that all the BI is worse than what we are already doing. Because it’s hyped we’ve naturally done proofs of concept, with universities and with big tech, and no one has been capable of providing ML-based analytics or BI that is even remotely close to what we already do. The simple truth is that we have been working with data for four decades. We have full-time analysts who do nothing else, and they are simply light-years ahead of anything ML we’ve seen. And they can actually explain their results to our politicians and decision makers.
Where ML does work, and the fact that it does separates it from blockchain, is recognition. We have a lot of data, often of poor quality, and ML can trawl through it faster and with higher quality than our human workers. We had to go through every casefile in a specific area and identify which ones were missing a specific form. A casefile can be 500 A4 pages long, sometimes scanned in really terrible quality, and we had 500,000 of them. It took 12 people six months to do so; simultaneously, we ran an ML proof of concept. It took three months to train the algorithm, but only one employee, and once it was trained, it took around five hours with a lot of Azure iron to trawl through the data. ML had better results than our human effort, and it was obviously way cheaper.
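The commenter doesn’t describe the model, but the task above is a classic document-classification setup. A minimal sketch, assuming the scanned pages have already been OCR’d to text; all the data, labels, and form names here are invented for illustration:

```python
# Hypothetical sketch of a classifier that flags casefiles whose OCR'd
# text suggests a required form is missing. Everything here is toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in training set: casefile text labelled by whether the
# required form was present (1) or missing (0).
train_texts = [
    "application received form A-12 attached signature verified",
    "hearing notes form A-12 enclosed with appendix",
    "correspondence only no attachments filed",
    "case summary appendix maps and photos",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features plus a linear classifier: fast, explainable, and a
# common baseline for this kind of document triage.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score new casefiles; anything predicted 0 goes on the "missing form"
# list for human review.
new_texts = [
    "form A-12 attached as page three signature verified",
    "correspondence filed no attachments",
]
predictions = model.predict(new_texts)
missing = [t for t, p in zip(new_texts, predictions) if p == 0]
```

In practice the hard part is the one the commenter mentions: three months of labelling and training against terrible scans, not the model itself.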
So ML and AI may be a lot of hype, but it’s also more than that.
This is an awesome opportunity to offload busy work & improve your overall "product". What if, for example, these lawyers and paralegals could now spend more time with their clients for less money, or handle more clients, making top quality legal services more affordable for more people? That would be an awesome scenario to see play out, and possibly a major help to those who never had access to legal help.
AI and ML are just two of the things coming for that subset. Digitalization is also, if not a bigger, then at least a huge factor.
The entire ecosystem of companies and people who supported the music industry was more or less wiped out over the last 20 years, leaving a thin layer of really successful people and then a huge group of people who make no money.
And the more things can be digitalized, the more they are subject to pattern analysis, which means the subsets of your work that make you valuable are also easier to replace.
Ironically, a cleaning lady is probably the last person to lose her job, because her work is really hard to replace, but that just means the supply of such workers will increase too.
So sure humans are better at BI but that's based on the assumption that how we do BI is the only way to do BI. I am not so sure it is. We will see.
Prediction simply isn’t good enough. It may work well for google and Facebook, but that’s because failure is relatively harmless in advertising. No one dies just because you see a commercial for something you just bought. The failure rate is simply too high for us in the public sector, maybe that’ll change, but probably not. I say that because we’re severely limiting the access to data these years over privacy concerns. You could probably do some interesting things with medical data, but to get there, you’ll need to look at medical data and that’s just not happening in the current political climate.
Then there is the automation. IBM wanted to sell us Watson analytics on the premise that it could recognise patterns and build the BI models our dedicated team does. So we let them try, and none of the models they came up with was even remotely useful. I can see this changing, but when? And what will our analytics department look like by then? It’s hard to say.
You can make a prediction, and it might be true, but that doesn't mean you will be successful with it. I wouldn't be surprised if 90% of all BI predictions don't actually lead to more successful outcomes, even in the public sector, even in Denmark (I'm Danish too :) )
Time to rebrand your analysts as your in-house "big data machine learning AI system"!
On a related note: I have no idea what Denmark is like for this, but I have seen other public sector analytics where the analysts themselves were good at what they did, but the whole system wasn’t very good at managing the data they consumed. This can introduce impressive delays and missed opportunities.
Would it be possible to give more details about how you evaluated the machine learning system and the human effort?
e.g. did you have some idea about which, or how many, cases were missing the form of interest, etc?
This of course presented us with unique data on the results. The things we measured were quality, speed, economy and employee satisfaction.
Quality was measured by keeping track of the casefiles that were flagged as missing the document by each process. That gave us two lists: one with the casefiles found by the manual team and one with those found by the ML team. We then made a few random checks of casefiles that appeared on both lists, and we checked every casefile that was only on one list. The ML team flagged more casefiles correctly; it both found more and made fewer errors. Of course this doesn’t tell us how many casefiles we didn’t find, but it does show us that ML was better.
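The cross-check described above is essentially set arithmetic on the two flagged lists. A small sketch with invented casefile IDs:

```python
# Sketch of the quality cross-check described above, with made-up IDs.
# Each set holds the casefile IDs one process flagged as missing the form.
manual_flagged = {"case-001", "case-002", "case-005", "case-009"}
ml_flagged = {"case-001", "case-002", "case-003", "case-005"}

# Flagged by both processes: spot-check a random sample of these.
both = manual_flagged & ml_flagged

# Flagged by only one process: check every one of these by hand,
# since each disagreement means one of the two processes erred.
only_manual = manual_flagged - ml_flagged
only_ml = ml_flagged - manual_flagged

print(sorted(both))         # ['case-001', 'case-002', 'case-005']
print(sorted(only_manual))  # ['case-009']
print(sorted(only_ml))      # ['case-003']
```

As the commenter notes, this measures relative precision between the two processes, not recall: casefiles missed by both never show up on either list.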
Speed was relatively simple: we measured start to finish, and ML was faster. We did rent a lot of iron in Azure to achieve this; we could never have done it without a major enterprise cloud agreement. We needed Microsoft to allocate the capacity we needed; it wasn’t even a simple matter of using the automated systems.
Economically it’s a bit of a touchy subject. I won’t go into details on that, but basically we know what work costs. Renting iron in Azure wasn’t expensive compared to having that many full time workers dedicated to the task.
Employee satisfaction is hard to measure, but we don’t have people on staff whose job it is to go through half a million casefiles and look at millions of documents. We had to pull people away from their regular jobs to do so, and even with 7000 people on staff, it’s really hard to find people who actually want to do this kind of work. HR did a bunch of HR magic, and basically people would prefer that ML do this sort of thing in the future.
If the sample data is representative of the population, then it is unbiased in machine-learning terms, even if the population as a whole is prejudiced, or biased. That prejudice will be duly reflected in the learned model, which shouldn't surprise anyone with even a rudimentary knowledge of machine learning.
"The reason is not only because race is used as a factor in assigning risk (not unlike Amazon’s system picking “female” as a proxy for “inadequate”), but also because lenders are obviously incentivized to find new ways to make money on loans and insurance rates. "
This is complete horse shit. I've done this sort of risk assessment for banks. Race is absolutely not used as a feature in assigning risk, and you have to work REALLY HARD to remove things such as zip codes, which are simple proxies for race, AND you have to have an explainable model (GLM, decision tree, a 2-3 layer neural net, etc.) which you can defend as not being racist. This is a legal requirement, and I assume it is possible that banks have done overtly racist things in the past, but they certainly don't do this now: it's something that could cost them money.
The overall point is legit: AI isn't real and anyone who thinks it is should have their head examined, machine learning is gratuitously overhyped, and its use by powerful groups is what you should fear. But how he got there is not legit.
Whooohaaa! ARTIFICIAL INTELLIGENCE is not real, but there is a lot of AI TECHNOLOGY that is real and useful - for example the knowledge graph that Google uses, Bayesian modelling, economic and social simulation and so on.
Both AI and ML are overhyped at the moment, data quality and knowledge engineering are killer problems, but there will be incremental improvements in both of these continually from now on, and this will enable both AI and ML tech to be much more easily applied than now.
Those are not "AI TECHNOLOGY"; they're just math (the examples you list are basically all linear regression). Treating "AI" as if it were an actual subject, outside of very, very obscure research projects, is about 90% of the hype problem here.
The use of the term is very much history repeating itself: people actually thought "AI" was a real thing back in the 1980s. The hype back then was even more ridiculous, and it was based on stuff that amounted to "we will use interpreters and parse trees."
General or "conscious" AI, the stuff we imagine in sci-fi movies, isn't anywhere close to being realized. And it may never be created for all we know. Conscious AI is on the level of free fusion energy. We don't even know if it is possible.
As the poster said, most of them are just statistics; the term has been used in industry and academia for _centuries_.
Predictive models based on accumulated data are just what statistics is applied to do. Calling it AI academically is a big stretch; calling it AI in industry is a marketing technique.
> Who gave you the authority to decide on behalf of the world what the phrase AI means?
No, they absolutely are not: this is why we have a separate term and subject known as "machine learning." It is only pointy headed managers and regurgitators of public relations press releases (aka modern journalists) who refer to machine learning as "AI." Oh yeah, I guess I should include people who think the latter are some kind of authority on the topic.
From the damn Merriam-Webster dictionary of the English language:
artificial intelligence noun
Definition of artificial intelligence
1 : a branch of computer science dealing with the simulation of intelligent behavior in computers
2 : the capability of a machine to imitate intelligent human behavior
machine learning noun
Definition of machine learning
: the process by which a computer is able to improve its own performance (as in analyzing image files) by continuously incorporating new data into an existing statistical model
But the complaint is that "computer intelligence" is the same sort of god-of-the-gaps concept as "human intelligence". Is tool use strictly human? Tool invention? Meta-tool use? Observational learning? Language? Coining new words? In the same way that the list of human-only behaviors has spent a century in retreat, the tasks which constitute "intelligent behavior" in computers are fundamentally undefined, and outside of the Turing Test they mostly consist of whatever problems appear at least 5-10 years out of reach.
"ML techniques" is literally synonymous with AI, obviously. One is a group of algorithms, the other's an abstract concept. But a great many tasks currently handled under the label ML would have been widely accepted as "AI problems" very recently. A computer winning Jeopardy would have sounded like a shocking AI advance in 1997 when Deep Blue won, let alone in 1964 when the show debuted. In 2011, people scrambled to be fastest and most scathing in labelling it "not real intelligence" and mocking it for mistakes smaller than many human errors. The difference is usually semantic, and some responsibility for the sliding definition certainly lies with hype-happy journalists and over-optimistic AI researchers. (Had Watson won Jeopardy back in 1980, it would have roughly matched Minsky and Simon's predicted timelines.)
But that distinction is increasingly used for the very specific equivocation which enables this article. "AI isn't doing that", meaning "it's ML instead", is conflated with "AI isn't doing that" meaning "that isn't happening". If AI doesn't take your job, but a deep-learning network developed by AI researchers does, it seems pretty fair to complain that the two terms aren't actually distinct.
“All the impressive achievements of deep learning amount to just curve fitting,” - Judea Pearl
I think the goalpost moving that's gone on with AI (Deep Blue, AlphaGo, etc.) is as ridiculous as the 'it's overhyped and just curve fitting' line, when no one's settled into the difficult task of rigorously defining what 'intelligence' is.
So consider the possibility that all forms of intelligence are some flavor of curve fitting. Is that such a problem? If that were the case, what would be so wrong with calling non-biological systems which can optimize curve fitting 'intelligent, artificial'?
For a man of his experience, I am surprised that he is forgetting the incremental nature of science. Most high level researchers are not visionaries but are really damn good at what they do. Their work will provide the building blocks for game-changing advances.
His goal is to draw attention to the loftier problems in AI but doing so by belittling the work of others is immature and ineffective as a motivational tool. Who’s going to switch gears to exploring radically different approaches because what we’re doing now is so boring and lame? I would have loved to read more about his proposed avenues of research instead of how unimpressed he is with curve fitting.
Note how these have concrete descriptions and you don't have to call it AI—which, again, is meaningless. Correspondingly, referring to "AI" as a real thing (including calling search and modeling algorithms AI) is a red flag.
> economic and social simulation
What relation does this have to AI?
One might claim (I presume the author does, and this is perfectly OK) that it is a bad thing that there are "bad neighborhoods" and we should fight that, but what does it have to do with proper calculation of an insurance fee? Should we expect AI to apply some "social justice" concepts while calculating an insurance price? I am not sure what the author proposes.
It's kind of odd to think this would happen automatically. There is some kind of wishful thinking going on that technology can't have bad effects if you don't have bad intentions.
Then again, if AI's aim is to perform activities until now restricted to humans, just like humans, how could it not be prejudiced?
And who decides how AI should make decisions that are politically loaded?
I don't think this author laid it out very well, but the problem of bias in machine learning is real and difficult to handle. The Amazon resume fiasco is a textbook example of what can go wrong with practices I see all the time. In my experience ML practitioners rarely stop and ask themselves the hard questions about this. They either don't think of it at all or they don't think we should be asking those questions.
"The model is accurate" shouldn't be sufficient.
The thing is, if the training data was generated by human behaviour, warts and all, then it is going to represent our prejudices. If this is a concern, then it seems to me that ML might not be the right technology to make decisions that have a political dimension.
If we want the system's predictions to be PC, then a rules-based system might be more appropriate (a blast from the past), unless we start labeling samples, with their output as an extra feature, as "PC" and "Not PC".
I'll tell you exactly what it means:
It means that the software's own vendor is so uncertain of the actual value of what the product does that they've decided to promote how it does what it does - in other words, implementation details - instead.
And nobody who works there actually knows that, either.
Sadly, there's a whole new crop of students being sold this hype. Eventually the money is going to dry up for this line of work IMO, and they're going to find themselves behind the curve.
Some of them even had the ability to self-learn and adjust to new data.
At the end of the day, AIs and MLs are just new labels for slightly more advanced "algorithms". I think everyone would be a bit better off if we tempered our expectations around the technology to "better algorithms".
I think most people, and definitely the author of the piece, miss the whole core impact of AI. AI is basically a new way to process and calculate data, a new type of algorithm.
We can now identify objects in pictures using computer vision with deep learning. This was NOT POSSIBLE with the tech before. A computer can now beat a team of human players in Dota. This was definitely NOT POSSIBLE before.
At the end of the day this is the impact, and it's not a small impact. It opens up a lot of possibilities for what can now be done. To me it's far more impactful than blockchain will ever be. To some degree I'm actually quite happy with all the misleading hype, because the people who understand and are capable of using the technology are quietly making real differences behind all the smoke and clouds.
After all, deep learning is fundamentally a continuation of much older techniques. And we absolutely could identify objects in pictures, we’ve been doing that for 40 years now - deep learning techniques have given a very nice jump in accuracy for some tasks but they didn’t come out of nowhere.
I agree the whole labelling “AI” is problematic. That is also a many decades old problem though...
But 10% growth YOY compounds very quickly.
Except for the systems that play DOTA, Go, and Starcraft - those are something else.
They're not something else, they're still just really, really impressive lookup tables.
But there are so many that try to shoehorn the full trustless model onto an idea that obviously still has a central authority. Argh.
Is there a good resource I can hand to non-technical people that explains why "full blockchain" only makes sense for a handful of use cases?
> This is a tired trope, that “business people” are brainless and gullible. Almost as tired as the “VCs are so dumb they’ll throw money at anything-AI” idea repeated in the article.
> Maybe, just maybe, the people running multi-billion-dollar companies, and multi-billion-dollar investment funds, are not stupid?
> It would be much more interesting and productive to discuss what they see in AI and why they feel so much urgency, rather than dismissing them as fools falling for magic.
VCs only need one big winner out of every ten investments. You are looking at their nine failures and calling them stupid and naive.
I can answer that question without asking. People can see the potential to do more work with less people and make truckloads of cash. Everyone is scared of being the stupid one and be left holding their dick whilst everyone else gets rich. When you have cash to spend it's a small risk to take.
I don’t think anybody would argue with an article that said some percentage of any population makes stupid decisions for stupid reasons. But then again, nobody would read that article, either.
Last year my working group within our company secured a major contract; largest for our company for the year. We expected we would have to hire 20+ entry level positions to execute on the contract. While the process was fundamentally based on ML, we knew there would be a large quantity of human labor as well to execute on time.
My colleagues and I, instead of going on a hiring spree, asked the overlords for 3 months of overhead time to develop 'intelligent automation' to reduce the number of new / temporary hires. We were able to make substantial enough gains to not have to bring on any new hires and complete the work, ahead of schedule and way under budget with our existing crew.
It was adjusting our thinking about how we do our work and incorporating ML at multiple levels in our system to intelligently guide our process that allowed us to eliminate 60% of the new positions that previously would have been generated by this work.
100% ML is coming for you.
Maybe, just maybe, the people running multi-billion-dollar companies, and multi-billion-dollar investment funds, are not stupid?
It would be much more interesting and productive to discuss what they see in AI and why they feel so much urgency, rather than dismissing them as fools falling for magic.
So perhaps not all fools but somewhere between con artist, willfully ignorant and fool.
(Note: this is all from “traditional” industry, i.e. the manager at the hammer factory proudly launching initiatives to “use more AI” in the factory. Not tech industry. Not plausible or concrete use of AI.)
Does it really have to be those things just because you don’t understand it? Is it possible those people running multi-billion-dollar organizations just know something you don’t, or have a perspective that you don’t?
While it is possible, I'm kind of tired of hearing people defending the strong and powerful. And I'm kind of tired of hearing the perspective of CEOs. It's pretty much all we hear in fact.
People 'running multi-billion-dollar organizations' are like the greek gods to us: capricious, arbitrary, and powerful, but also subject to the same flaws as normal people.
However, they do have one great privilege: that of being completely above the effects of their actions.
They don't need people to defend them. The whole damn system is designed to adulate them. We 'pray' to these people every day without realizing it.
Baking off-the-shelf anomaly detection into the hammer QA process might seem easy and “not true AI” to us, but it solves their problem. Maybe you can even educate them in the process, and explain the differences between AI, ML, DL, different use cases and methods, libraries, etc. I suspect, though, they won’t care because they just want to hit their business goals.
I think FOMO is a real part of the cycle that causes a lot of bets to be made on untried technologies. Technologists might well reflect on how many of those bets are hedges, and what the true cost of failures are.
It’s also worth noting that a significant chunk of the most egregious bullshit I see flung around in the area is coming from technologist driven startups in the space...
People with money (inherited or won gambling) fall on the same intelligence spectrum as everyone else. There is a shallow end of the pool there.
There are plenty of smart business people who understand exactly what "AI" is and how to exploit it. That is the whole point of this article.
Supporting a generalization by saying “there’s a spectrum” is an oxymoron, though.
"AI? Well, I'm still trying just to reach human intelligence!"
I’ve been in R&D at both Google and IBM TJ Watson on projects that were never even tangentially revenue generating. A company with 80,000 employees can easily afford to let hundreds or even thousands of employees operate on non-product focused research.
In fact on orientation day at IBM we were explicitly told that nothing we do will ever be a product, that there’s a firewall between TJ Watson and the rest of IBM and that any products will be rewritten by product teams.
It seems like the author is speculating about how things work, not writing from experience.
I think his premise is that AI is bad and vague, and it's bad because it's biased. But he just picks a few examples with no counterargument and spitballs from there.
AI in my opinion is technology that outperforms humans, often done with learning algorithms instead of explicit rule based algorithms.
Except the AI I have been following is wildly successful in a variety of topics and probably is coming for you. The author is right it could be biased but people are working on these problems and the author is acting as if no progress will ever be made in this relatively young field.
Excuse me? Translation is possibly impossible for interesting work, but certainly nothing we're doing now is even trying. What we're doing right now is the equivalent of "translating" road signs and dinner menus.
It’s not even that the current trend of “machine learning” (ie data processing) is at best a tiny subset of what anyone would consider to be actual intelligence (ability to act rationally in a wide range of situations, including novel ones, ie common sense).
It’s that the very idea of “artificial intelligence” is impossible. Terry Winograd, one of the earliest AI researchers in the 60s/70s, wrote a great book about this called Understanding Computers and Cognition, in which he lays out a coherent theory, grounded in philosophy and biology, for why the pursuit of machines that think like humans is doomed to fail. The crux of the argument is that the essence of intelligence/common sense has to do with something other than symbolic representation, i.e. the kind of thinking one does while playing chess or using language. It has to do with “being-in-the-world”, which sounds weird but is a really useful concept from Heidegger.
Winograd gives the example of a man using a hammer to illustrate this. A man using a hammer has no mental model of a hammer while he is pounding in a nail. The hammer simply becomes an extension of his arm. The only time a representation or mental construct of “hammer” becomes relevant is if there is a breakdown, like if the hammer slips. But otherwise, the intelligent act of hammering occurs without any “hammer” objects in the mind of the hammerer.
I’m vastly oversimplifying here, but the book (first published 1986) is quite well argued and frankly persuasive. Highly recommended to all those who seek to separate AI fact from fiction in these buzzy times.
James Mickens is awesome BTW.
The data science team at a major bank I used to work for historically had a team that sourced and cleaned the data, a team that explored the data and built the models, and a team that ran and maintained the models going forward. In my opinion the final team was the most important, and the most tragic to see condensed: they noticed when the models were missing the mark and needed to go back to the model builders for adjustment.
The "lost jobs" in this case is that when I was on the team we had to learn the entire bank database structure and source our own data, build the models and then automate them in such ways to "catch" when they were missing the mark.
The team will be further shrunk as the software tools provided offer better auto-SQL for sourcing the data, automated model-building functions, and automated visualizations, thus removing even that as a special skill.
In the most simple terms it works like this:
1. Workers create Automated systems.
2. Automated systems create wealth.
3. Wealth creates workers.
And the cycle repeats, ad infinitum.
So long as there are people with money, there will be jobs to be done. Perhaps in the future, AI employers will hire human workers that are trained by AI educators to do human tasks. There may come a time where AI/Robotics are better than humans at every task, and when that day comes, humans will compete on price, and when they're priced out of the market, they'll either merge with the machines or simply be left behind. But that time is many hundreds of years away, and I'm not convinced that machines would even want to stay on Earth. Space is much more conducive to a machine society.
Lives can be ruined during that eventuality, and some people don’t make it.
Later we collectively have always recovered and found new ways to trade time for money but much like how evolution works, individuals don’t always fare well when the environment changes.
When AI starts really affecting the human job market, we will see advances in AI and VR education that will fill the gap by teaching humans with methods that will be more effective than any in history.
Imagine one-to-one instruction with an intelligent AI that has access to all of human knowledge with infinite patience and works for peanuts. Then pair that with total immersion VR and you have a compelling education platform for training just about anything.
Yah, they're not doing AR. The guy explained it was a marketing gimmick for VCs (though he made it sound nicer than that).
Function approximation for classification tasks is slowly but surely "eating the world."
Planning and sequential decision making based on RL are being hybridized with classical control methods for demonstrated utility.
Reasoning, rapid learning, and useful adaptation are being attacked in hundreds of research papers a month.
Remember that technologies don't arrive; they go through a phase transition from "That's not real X" to "That's obvious and boring," with almost nothing in the middle.
Whether you want to call all the above "AI" is irrelevant. Tasks which were recently assumed to require humans have been shown to be tractable with other methods.
Am I being a paranoid biophysicist? This isn't my field. If I use AI, am I not, by definition, abdicating responsibility? At least with a regular computer program, I understand every step of the algorithm and can judge whether it's doing the right thing. But with neural networks I'm just feeding data into a black box and hoping the ends justify the means. I don't actually know what patterns it is finding and utilizing.
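The black box isn’t entirely opaque, though. One simple, model-agnostic probe is permutation importance: shuffle one input column at a time and see how much the model’s accuracy drops. A hand-rolled sketch, with a stand-in function playing the role of the opaque model and purely synthetic data:

```python
# Minimal permutation-importance sketch: shuffle each feature column and
# measure the accuracy drop. A big drop means the model leans on that
# feature. The "model" here is a stand-in function, not a real network.
import random

random.seed(0)

# Synthetic data: the label depends only on feature 0; feature 1 is noise.
X = [[i % 2, random.random()] for i in range(200)]
y = [row[0] for row in X]

def black_box(rows):
    # Stand-in for any opaque model; it happens to read only feature 0.
    return [1 if r[0] > 0.5 else 0 for r in rows]

def accuracy(rows, labels):
    preds = black_box(rows)
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

baseline = accuracy(X, y)

importances = []
for col in (0, 1):
    # Shuffle one column while leaving the others intact.
    shuffled_col = [row[col] for row in X]
    random.shuffle(shuffled_col)
    X_perm = [list(row) for row in X]
    for row, v in zip(X_perm, shuffled_col):
        row[col] = v
    importances.append(baseline - accuracy(X_perm, y))

print(baseline)        # 1.0: the stand-in model is perfect on this data
print(importances[1])  # 0.0: shuffling the noise feature changes nothing
```

This doesn’t tell you *why* the model uses a feature, but it does tell you *which* inputs it depends on, which is a start on the responsibility question.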
Well, many companies will throw some money willy nilly at AI 'because' HBR said to, or consultants are pushing it, or they 'know it's coming' and need to start somewhere.
It might be rational to do some experimenting, but many companies don't really have a clue; they're just throwing money at it. FB and G obviously have a clue, but even they have so much money and talent that they can afford to experiment even when the ROI might be way, way off.
I'd say they're a lot savvier than anyone in this thread is giving them credit for.
Usually AI refers to solving real-world problems that humans (or smart animals) are good at that aren't easily solved using traditional algorithms such as sorting/searching, graph algorithms, numerical methods, combinatorial optimization, etc..
Sometimes it also refers to a computer opponent in a game, or the algorithms and strategies used by such an opponent.
There is something to the statement though in that AI seems to be a bit of a moving target: once we have a good algorithm (and input data) to solve a problem (e.g. winning at checkers) conclusively, then it may no longer qualify as "AI" - even if it did for years up to that point!
"As Suncor Energy prepares to shed 400 jobs to prepare for the implementation of driverless ore-hauling trucks, the union representing workers at the company is publicly condemning the decision."
The author forgets here that AI is not static. It’s a moving target.
The examples of AI that exist today may be easy to brush off, but let’s not assume that extends to examples of AI in the future.
In the meantime it is super annoying that simple techniques get marketed as “AI”.
This is a troubling statement. Is this just the author speaking, or is there data to back it up?