AI is not coming for you (blairreeves.me)
120 points by ntang 35 days ago | 110 comments

I think ML has potential that Blockchain doesn’t, but not in all the areas that are being hyped.

I work in the public sector of Denmark, and we’re targeted by a lot of the hype. Which is worrisome, because it might actually lead to stupid projects if it becomes a political focus. So far it hasn’t though, and blockchain never did, so who knows.

The thing about ML is that all the BI is worse than what we are already doing. Because it’s hyped, we’ve naturally done proofs of concept, with universities and with big tech, and no one has been capable of providing ML-based analytics or BI that is even remotely close to what we already do. The simple truth is that we have been working with data for four decades. We have full-time analysts who do nothing else, and they are simply light-years ahead of anything ML we’ve seen, and they can actually explain their results to our politicians and decision makers.

Where ML does work, and the fact that it does separates it from blockchain, is recognition. We have a lot of data, often in poor quality, and ML can trawl through it faster and with higher quality than our human workers. We had to go through every casefile in a specific area and identify which ones were missing a specific form. A casefile can be 500 A4 pages long, sometimes scanned in really terrible quality, and we had 500,000 of them. It took 12 people 6 months to do it. Simultaneously, we ran an ML proof of concept. It took 3 months to train the model, but only one employee, and once it was trained, it took around five hours with a lot of Azure iron to trawl through the data. ML had better results than our human effort, and it was obviously way cheaper.
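A minimal sketch of the kind of check described above. Everything here is invented (keywords, thresholds, data); the real system presumably used OCR output and a trained classifier rather than keyword rules, but the flag-if-no-page-matches structure would be the same:

```python
# Hypothetical sketch: flag casefiles whose text lacks any page that
# looks like the required form. All names and thresholds are invented.

def page_matches_form(page_text, keywords, min_hits=2):
    """Crude stand-in for a classifier: a page 'looks like' the form
    if it contains at least min_hits of its characteristic keywords."""
    text = page_text.lower()
    return sum(kw in text for kw in keywords) >= min_hits

def casefile_missing_form(pages, keywords):
    """A casefile is flagged if no page in it matches the form."""
    return not any(page_matches_form(p, keywords) for p in pages)

# Example with made-up data:
FORM_KEYWORDS = ["consent form", "signature", "case number"]
casefile = ["unrelated letter ...", "consent form ... signature: ..."]
print(casefile_missing_form(casefile, FORM_KEYWORDS))  # False: form found
```

In practice the gain would come from replacing `page_matches_form` with a model trained on labeled pages, which is presumably what the three months of training went into.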

So ML and AI may be a lot of hype, but it’s also more than that.

This is the example I use when knowledge workers lose their minds over ML. Do you _enjoy_ menial tasks? If so, I'm sorry; they are probably going to be sucked up by machines.

This is an awesome opportunity to offload busy work & improve your overall "product". What if, for example, these lawyers and paralegals could now spend more time with their clients for less money, or handle more clients, making top quality legal services more affordable for more people? That would be an awesome scenario to see play out, and possibly a major help to those who never had access to legal help.

The first thing to realize is that AI is not coming for you or me or anyone else; it's coming for a subset of us: the subset of skills that can be rented out to an employer.

AI and ML are just two things that are coming for that subset. Digitalization is, if not a bigger, then at least a comparably huge factor.

The entire ecosystem of companies and people who supported the music industry was more or less wiped out over the last 20 years, leaving a thin layer of really successful people and then a huge group of people who make no money.

And the more things can be digitalized, the more they are subject to pattern analysis, which means the subsets of you that make you valuable are also easier to replace.

Ironically, a cleaning lady is probably the last person to lose her job, because her work is really hard to automate, but that just means the supply of such workers will increase too.

So sure, humans are better at BI, but that's based on the assumption that how we do BI is the only way to do BI. I am not so sure it is. We will see.

We’ve seen a couple of examples with BI and analytics; I think you can put them into two categories: prediction and automation.

Prediction simply isn’t good enough. It may work well for Google and Facebook, but that’s because failure is relatively harmless in advertising. No one dies just because you see a commercial for something you just bought. The failure rate is simply too high for us in the public sector. Maybe that’ll change, but probably not. I say that because we’re severely limiting access to data these years over privacy concerns. You could probably do some interesting things with medical data, but to get there you’d need to look at medical data, and that’s just not happening in the current political climate.

Then there is the automation. IBM wanted to sell us Watson analytics on the premise that it could recognise patterns and build the BI models our dedicated team does. So we let them try, and none of the models they came up with was even remotely useful. I can see this changing, but when? And what will our analytics department look like by then? It’s hard to say.

Yes, but you also have to think about predictions in a little more nuanced way.

You can make a prediction, and it might be true, but that doesn't mean you will be successful with it. I wouldn't be surprised if 90% of all BI predictions don't actually lead to more successful outcomes, in the public sector and in Denmark too (I'm Danish too :) ).

I think what the post was going for is that things are often mislabeled as AI when really they're much simpler algorithms.

"We have full time analysts who do nothing else, and they are simply lightyears ahead of anything ML we’ve seen, and, they can actually explain their results to our politicians and decision makers."

Time to rebrand your analysts as your in-house "big data machine learning AI system"!

This is a good point, often missed or buried in the noise. It goes beyond recognition, though (e.g. more general classification). Another area: when you have fairly large data sets with both breadth and depth, ML algorithms can sometimes pull out connections that are very difficult for analysts to even tackle.

On a related note: I have no idea what Denmark is like for this, but I have seen other public sector analytics where the analysts themselves were good at what they did, but the whole system wasn’t very good at managing the data they consumed. This can introduce impressive delays and missed opportunities.

>> ML had better results than our human effort and it was obviously way cheaper.

Would it be possible to give more details about how you evaluated the machine learning system and the human effort?

e.g. did you have some idea about which, or how many, cases were missing the form of interest, etc?

We knew upfront that ML was going to be a proof of concept and that the job was going to be done manually either way. We are many things in the public sector, but we’re not big risk takers, and this was a job that had to be done as fast as possible, because the missing documents are required by law. So what we did was clone the data and let the business do its thing while we did ours.

This of course presented us with unique data on the results. The things we measured were quality, speed, economy and employee satisfaction.

Quality was measured by keeping track of the casefiles that each process flagged as missing the document. That gave us two lists: one with the casefiles found by the manual team and one with those found by the ML run. We then made a few random checks of casefiles that appeared on both lists, and we checked every casefile that was only on one list. The ML run flagged more casefiles correctly; it both found more and made fewer errors. Of course this doesn’t tell us how many casefiles neither process found, but it does show us that ML was better.
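The bookkeeping described here amounts to set operations over flagged IDs. A sketch with entirely invented IDs:

```python
# Two lists of flagged casefile IDs, one per process (IDs invented).
manual = {"A101", "A205", "A317"}
ml     = {"A101", "A205", "A317", "A422"}

both        = manual & ml   # spot-check a random sample of these
manual_only = manual - ml   # review every one of these
ml_only     = ml - manual   # review every one of these

print(sorted(ml_only))      # ['A422'] -> candidates the humans missed
print(sorted(manual_only))  # []       -> candidates the model missed
```

As the comment notes, this measures relative quality only; casefiles missed by both processes stay invisible.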

Speed was relatively simple: we measured start to finish, and ML was faster. We did rent a lot of iron in Azure to achieve this; we could never have done it without a major enterprise cloud agreement. We needed Microsoft to allocate the capacity we needed; it wasn’t even a simple matter of using the automatic systems.

Economically it’s a bit of a touchy subject. I won’t go into details on that, but basically we know what work costs. Renting iron in Azure wasn’t expensive compared to having that many full time workers dedicated to the task.

Employee satisfaction is hard, but we don’t have people on staff whose job it is to go through half a million casefiles and look at millions of documents. We had to pull people away from their regular jobs to do so, and even with 7000 people on staff, it’s really hard to find people who actually want to do this kind of work. HR did a bunch of HR magic, and basically people would prefer that ML do this sort of thing in the future.

Thank you for the thorough reply.

Just came back from the IOHK Summit in Miami, with Stephen Wolfram and Caitlin Long as guest speakers. An incredible array of scientists (cryptography, theoretical computer science, mathematics, functional programming languages) presented their research and the results of peer-reviewed papers from the past few years. It was definitely the most substantial blockchain conference I have had a chance to attend in years. Fascinating talks on standardized financial contracts (ARCTUS) expressed in functional languages and gathered as a library for smart-contract developers blew my mind. The supply-chain pros and cons presented by Wyoming’s ranchers, and their USDA-approved beef-tracking standard, showed the other end of the spectrum: real-world applications that already have an impact on industry. While there is a lot of hype and mania any time money is involved, there is also an incredible universe of academic and tangible private progress being made with these distributed, immutable, trustless (to higher degrees, at least) ledgers.

if by "blockchain" people mean "git for database records", that has obvious uses, but it existed before the craze. See the flowchart from NIST: https://twitter.com/arnabdotorg/status/1049116699927171077

Maybe it's just me, but it seems to me that the author doesn't have a clue what he is writing about. For instance, when he mentions the Amazon hiring AI embarrassment, it seems to me that he mixes the very different concepts of bias in machine learning and the everyday concept of bias as prejudice.

If the sample data is representative of the population, then it is unbiased in machine learning terms, even if the population as a whole is prejudiced, or biased; that prejudice will be duly reflected in the learned model, which shouldn't surprise anyone with at least a rudimentary knowledge of machine learning.

It's absolutely not just you: the author is completely clueless and is basically making shit up. For example:

"The reason is not only because race is used as a factor in assigning risk (not unlike Amazon’s system picking “female” as a proxy for “inadequate”), but also because lenders are obviously incentivized to find new ways to make money on loans and insurance rates. "

This is complete horse shit. I've done this sort of risk assessment for banks. Race is absolutely not used as a feature in assigning risk, and you have to work REALLY HARD to remove things such as zip codes which are simple proxies for race, AND you have to have an explainable model (GLM, decision tree, 2-3 layer neurons, etc) which you can defend as not being racist. This is a legal requirement, and I assume it is possible that banks have done overtly racist things in the past, but they certainly don't do this now: it's something that could cost them money.
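A toy illustration, with entirely invented data, of what "zip code as a proxy" means: if each zip is dominated by one group, a model that is given the zip effectively sees the protected attribute, which is why such features have to be hunted down and removed.

```python
# Invented data: (zip_code, protected_group) pairs.
from collections import Counter

rows = [
    ("10001", "A"), ("10001", "A"), ("10001", "A"),
    ("20002", "B"), ("20002", "B"), ("20002", "A"),
]

def group_purity_by_zip(rows):
    """For each zip, the share of its most common group. Values near
    1.0 mean the zip almost determines the group - a red-flag proxy."""
    by_zip = {}
    for z, g in rows:
        by_zip.setdefault(z, []).append(g)
    return {z: Counter(gs).most_common(1)[0][1] / len(gs)
            for z, gs in by_zip.items()}

print(group_purity_by_zip(rows))  # {'10001': 1.0, '20002': 0.666...}
```

Real feature audits are more involved (mutual information, trained-probe tests, etc.), but this is the shape of the question being asked.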

The overall point is legit: AI isn't real and anyone who thinks it is should have their head examined, machine learning is gratuitously overhyped, and its use by powerful groups is what you should fear. How he got there is not legit.

>AI isn't real and anyone who thinks it is should have their head examined, machine learning is gratuitously overhyped

Whooohaaa! ARTIFICIAL INTELLIGENCE is not real, but there is a lot of AI TECHNOLOGY that is real and useful - for example the knowledge graph that Google uses, Bayesian modelling, economic and social simulation and so on.

Both AI and ML are overhyped at the moment, data quality and knowledge engineering are killer problems, but there will be incremental improvements in both of these continually from now on, and this will enable both AI and ML tech to be much more easily applied than now.

> the knowledge graph that Google uses, Bayesian modelling, economic and social simulation and so on...

Those are not "AI TECHNOLOGY": they're just math (the examples you list are basically all linear regression). Treating "AI"-anything as if it were an actual subject outside of very, very obscure research projects is about 90% of the hype problem here.

The use of the term is very much history repeating itself: people actually thought "AI" was a real thing back in the 1980s. The hype back then was even more ridiculous, and it was based on stuff which amounted to "we will use interpreters and parse trees."

I think you are confusing "general AI" (or conscious AI) with AI. AI is most certainly real and is used everywhere, from trading software to text expansion to mortgage qualification to chess to self-driving cars and beyond. AI is everywhere.

General or "conscious" AI, the stuff we imagine in sci-fi movies, isn't anywhere close to being realized. And it may never be created for all we know. Conscious AI is on the level of free fusion energy. We don't even know if it is possible.

Who gave you the authority to decide on behalf of the world what the phrase AI means? You don’t get to arbitrarily redefine common words and phrases. Machine learning techniques are AI, as the term has been used in industry and academia for decades.

It's well established usage that "real AI" is whatever doesn't exist yet, and "today's AI" is actually just math

As a rather frustrated AI/ML professor once described it to me: "AI just means 'unsolved'. Everyone agrees we're working on AI problems as long as we're stuck, but once we're not they decide it was actually just statistics. I wonder if they'll ever notice it was always statistics?"

"AI is whatever hasn't been done yet." - Larry Tesler (sometimes quoted as Tesler's Theorem.)

> Machine learning techniques are AI, as the term has been used in industry and academia for decades.

As the poster said, most of them are just statistics; the term has been used in industry and academia for _centuries_.

Building predictive models from accumulated data is just what applied statistics does. Calling it AI academically is a big stretch; calling it AI in industry is a marketing technique.

I repeat:

> Who gave you the authority to decide on behalf of the world what the phrase AI means?

> Machine learning techniques are AI ...

No, they absolutely are not: this is why we have a separate term and subject known as "machine learning." It is only pointy headed managers and regurgitators of public relations press releases (aka modern journalists) who refer to machine learning as "AI." Oh yeah, I guess I should include people who think the latter are some kind of authority on the topic.

From the damn Merriam-Webster dictionary of the English language:

"artificial intelligence noun
1 : a branch of computer science dealing with the simulation of intelligent behavior in computers
2 : the capability of a machine to imitate intelligent human behavior

machine learning noun
: the process by which a computer is able to improve its own performance (as in analyzing image files) by continuously incorporating new data into an existing statistical model"
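That dictionary definition of machine learning, "continuously incorporating new data into an existing statistical model", is satisfied by something as humble as an online running mean:

```python
# The most minimal "statistical model that improves as data arrives":
# an incrementally updated mean.
class RunningMean:
    def __init__(self):
        self.n, self.mean = 0, 0.0

    def update(self, x):
        """Incorporate one new observation into the model."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        return self.mean

m = RunningMean()
for x in [2, 4, 6]:
    m.update(x)
print(m.mean)  # 4.0
```

Which rather supports the point several commenters make: by the dictionary's letter, plain statistics already qualifies.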

There's a distinction to be drawn, certainly. If you walk into a university CS department, entry-level AI and ML are likely to be separate entries on the class roster, and you can make useful predictions about what you'll find in each course.

But the complaint is that "computer intelligence" is the same sort of god-of-the-gaps concept as "human intelligence". Is tool use strictly human? Tool invention? Meta-tool use? Observational learning? Language? Coining new words? In the same way that the list of human-only behaviors has spent a century in retreat, the tasks which constitute "intelligent behavior" in computers are fundamentally undefined, and outside of the Turing Test they mostly consist of whatever problems appear at least 5-10 years out of reach.

"ML techniques" is not literally synonymous with AI, obviously: one is a group of algorithms, the other's an abstract concept. But a great many tasks currently handled under the label ML would have been widely accepted as "AI problems" very recently. A computer winning Jeopardy would have sounded like a shocking AI advance in 1997 when Deep Blue won, let alone in 1964 when the show debuted. In 2011, people scrambled to be fastest and most scathing in labelling it "not real intelligence" and mocking it for mistakes smaller than many human errors. The difference is usually semantic, and some responsibility for the sliding definition certainly lies with hype-happy journalists and over-optimistic AI researchers. (Had Watson won Jeopardy back in 1980, it would have roughly matched Minsky and Simon's predicted timelines.)

But that distinction is increasingly used for the very specific equivocation which enables this article. "AI isn't doing that", meaning "it's ML instead", is conflated with "AI isn't doing that" meaning "that isn't happening". If AI doesn't take your job, but a deep-learning network developed by AI researchers does, it seems pretty fair to complain that the two terms aren't actually distinct.


“All the impressive achievements of deep learning amount to just curve fitting,” - Judea Pearl

Ok. So can you make an argument that any kind of intelligence or learning is anything more than curve fitting?

I think the goalpost moving that's gone on with AI (Deep Blue, AlphaGo, etc.) is as ridiculous as the "it's overhyped and just curve fitting" line, when no one's settled into the difficult task of rigorously defining what "intelligence" is.

So consider the possibility that all forms of intelligence are some flavor of curve fitting. Is that such a problem? If that were the case, what would be so wrong with calling non-biological systems that are good at curve fitting "artificially intelligent"?
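For concreteness, "curve fitting" in its most basic form is an ordinary least-squares line fit. Deep networks fit vastly more flexible curves, but the objective (minimize error against data) has the same shape:

```python
# Closed-form least-squares fit of a line y = slope*x + intercept.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (slope, intercept)

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]  # points on y = 2x + 1
print(fit_line(xs, ys))              # (2.0, 1.0)
```

The open question the thread is circling is whether scaling this idea up ever amounts to intelligence, or only to better fits.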

I had the identical sentiment after reading that dismissive oversimplification. Here’s the source interview: https://www.quantamagazine.org/to-build-truly-intelligent-ma....

For a man of his experience, I am surprised that he is forgetting the incremental nature of science. Most high level researchers are not visionaries but are really damn good at what they do. Their work will provide the building blocks for game-changing advances.

His goal is to draw attention to the loftier problems in AI but doing so by belittling the work of others is immature and ineffective as a motivational tool. Who’s going to switch gears to exploring radically different approaches because what we’re doing now is so boring and lame? I would have loved to read more about his proposed avenues of research instead of how unimpressed he is with curve fitting.

I would cite Professor Pearl's work as some of the most interesting artefacts derived from the program of Artificial Intelligence. Causal modelling is a big deal! Causal models are one of the things I call AI technology.

I describe them as AI technology because they are spinoffs from the (ongoing, nascent) program of research into Artificial Intelligence. Reasoning over knowledge graphs is not linear regression. Bayesian modelling is not linear regression (although you can do Bayesian linear regression). Simulation systems are not linear regression. Google Translate is not linear regression. AlphaZero is not linear regression. None of the above are Artificial Intelligence, but they are not trivial either.

> for example the knowledge graph that Google uses, Bayesian modelling

Note how these have concrete descriptions and you don't have to call it AI—which, again, is meaningless. Correspondingly, referring to "AI" as a real thing (including calling search and modeling algorithms AI) is a red flag.

> economic and social simulation

What relation does this have to AI?

I have the same impression. If an AI system figured out that the risk of a car being stolen is bigger when the owner lives in a certain place, and adjusted the fee upward accordingly, then the AI worked correctly.

One might claim (I presume the author does, and this is perfectly ok) that it is a bad thing that there are "bad neighborhoods" and that we should fight that, but what does it have to do with the proper calculation of an insurance fee? Should we expect AI to apply some "social justice" concepts while calculating an insurance price? I am not sure what the author proposes.

I'm not impressed with the article, but the idea here is that some forms of bias are okay, others are not, and it's up to the people working on the system to build it properly. There are laws about these things and you can't expect some algorithm to know the law automatically and avoid prohibited discrimination without going through the effort to build it in.

It's kind of odd to think this would happen automatically. There is some kind of wishful thinking going on that technology can't have bad effects if you don't have bad intentions.

Or maybe I am wrong and the author is right. After all, he claims that the problem is not AI but humans, and maybe we shouldn't blame systems trained on human data, which reflects all of humanity's prejudices.

Then again, if AI's aim is to perform activities until now restricted to humans, just like humans, how could it not be prejudiced?

And who decides how AI should make decisions that are politically loaded?

Remember that the prediction is made through the rear-view mirror: it's based on data gathered in the past, and you are now fixing risk for the future.

I don't think he was mixing his terms. He was clearly talking about the everyday concept of bias, not statistical bias.

I don't think this author laid it out very well, but the problem of bias in machine learning is real and difficult to handle. The Amazon resume fiasco is a textbook example of what can go wrong with practices I see all the time. In my experience ML practitioners rarely stop and ask themselves the hard questions about this. They either don't think of it at all or they don't think we should be asking those questions.

"The model is accurate" shouldn't be sufficient.

Well, maybe I misinterpreted what the author meant. In fact, I was a bit casual myself: "statistical bias", which is what I meant, would have been more accurate than what I wrote, "machine learning bias" (which might be read as the bias in "bias and variance", which has nothing to do with this discussion).

The thing is, if the training data was generated by human behaviour, warts and all, then it is going to represent our prejudices. If this is a concern, then it seems to me that ML might not be the right technology to make decisions that have a political dimension.

If we want the system's predictions to be PC, then a rule-based system might be more appropriate (a blast from the past), unless we start labeling samples, with their output as an extra feature, as "PC" and "not PC".

> No one has any idea what “artificial intelligence” even means.

I'll tell you exactly what it means:

It means that the software's own vendor is so uncertain of the actual value of what the product does that they've decided to promote how it does what it does - in other words, implementation details - instead.

> promote how it does

And nobody who works there actually knows that, either.

Succinctly put.

Sadly, there's a whole new crop of students being sold this hype. Eventually the money is going to dry up for this line of work, IMO, and they're going to find themselves behind the curve.

Do we all remember "algorithms"? Like, when every company had an "algorithm" as part of the pitch?

Some of them even had the ability to self learn and adjust to new data.

At the end of the day, "AI" and "ML" are just new labels for slightly more advanced "algorithms". I think everyone would be a bit better off if we tempered our expectations around the technology to "better algorithms".

Do we all remember when it was called “AI” in the 70’s? To some degree this stuff is just cyclical.

I think the term AI has been completely abused. In fact, I try to avoid using the word AI if I can help it and use "deep learning" instead, because that's really the whole crux of this new tech.

I think most people, definitely the author of the piece, miss the whole core impact of AI. AI is basically a new way to process and calculate data, a new type of algorithm.

We can now identify objects in pictures using computer vision with deep learning. This was NOT POSSIBLE with the tech before. A computer can now beat a team of human players in Dota. This was definitely NOT POSSIBLE before.

At the end of the day this is the impact, and it's not a small impact. It opens up a lot of possibilities for what can now be done. To me it's far more impactful than blockchain will ever be. To some degree I'm actually quite happy with all the misleading hype, because the people who understand and are capable of using the technology are quietly making real differences behind all the smoke.

While there have been some good incremental improvements, I think you are overstating the place of deep learning vis-à-vis everything in machine learning that came before it. The same thing happened with SVMs in the early 2000s, but with much less industry noise. NB I'm not trying to compare the impact of the two, just noting it was also overstated.

After all, deep learning is fundamentally a continuation of much older techniques. And we absolutely could identify objects in pictures; we've been doing that for 40 years now. Deep learning techniques have given a very nice jump in accuracy for some tasks, but they didn't come out of nowhere.

I agree the whole labelling “AI” is problematic. That is also a many decades old problem though...

AI developments are more like battery storage improvements. Just 10% improvements year over year, in hardware, in architecture, in algorithms, in data capturing and labeling ... they've just really started to add up.

But 10% growth YOY compounds very quickly.
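The arithmetic: 10% compounded yearly is about 1.6x after 5 years, 2.6x after 10, and 6.7x after 20.

```python
# Compounding 10% year-over-year improvement.
for years in (5, 10, 20):
    print(years, round(1.10 ** years, 2))  # 5 1.61 / 10 2.59 / 20 6.73
```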

10 percent would compound quickly if we were seeing that; I suspect most of the recent acceleration has far more to do with data availability than with any technical improvements, though.

I'm fond of the term 'data modelling' for most 'machine learning' tasks.

Except for the systems that play Dota, Go, and StarCraft - those are something else.

Those are just reinforcement learning. Again, standard systems, been around for ages, the whole movie WarGames (36 years old) hinges on an RL-based supercomputer gone wild.

They're not something else, they're still just really, really impressive lookup tables.
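For reference, the tabular core of reinforcement learning fits in a few lines. A toy Q-learning agent on a five-state corridor (a textbook toy, not how the Dota/Go systems are actually built; those add deep networks, self-play, and enormous scale):

```python
# Tabular Q-learning on a tiny corridor: states 0..4, reward at state 4.
import random

N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right
alpha, gamma = 0.5, 0.9
random.seed(0)

for _ in range(2000):                      # episodes
    s = 0
    while s != GOAL:
        a = random.randrange(2)            # off-policy: behave randomly
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update toward the bootstrapped target.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy heads right in every state:
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])  # [1, 1, 1, 1]
```

Whether the resulting Q-table counts as "something else" or as a very elaborate lookup table is exactly the dispute above.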

It mentions the blockchain hype mania as well. I happen to currently be the clearinghouse for lots of people's silly blockchain ideas right now. I mostly have no issue with the ones that are basic "chain of hashed blocks" git style ideas, intended to show some simple sequence of integrity, while still requiring trust. I don't get many of those.

But there are so many that try to shoehorn the full trustless model onto an idea that obviously still has a central authority. Argh.

Is there a good resource I can hand to non-technical people that explains why "full blockchain" only makes sense for a handful of use cases?

From where I am standing it seems as if any resource to explain anything to non-technical people will fall on deaf ears. The reason for this is money. People aren't looking for the best technical solution for their problem. They don't even have a problem. They are looking for the best way to get their hands on money quickly. If people will hand over cash they will do it. This goes for AI too. Many investors are stupid (naively optimistic if I'm being kind) and people are out to get their money.

Repeating my other comment in this thread:

> This is a tired trope, that “business people” are brainless and gullible. Almost as tired as the “VCs are so dumb they’ll throw money at anything-AI” idea repeated in the article.

> Maybe, just maybe, the people running multi-billion-dollar companies, and multi-billion-dollar investment funds, are not stupid?

> It would be much more interesting and productive to discuss what they see in AI and why they feel so much urgency, rather than dismissing them as fools falling for magic.

VCs only need one big winner out of every ten investments. You are looking at their nine failures and calling them stupid and naive.

I agree that saying all VCs are dumb, all the time, is silly. On the other hand, things like Theranos and uBeam prove that some investments some VCs make are spectacularly dumb. One winner out of every ten still requires basic due diligence, or at least questioning business plans that require setting aside physics or other basic knowns.

I'm only saying what I saw during the hype bubble. People were looking for a reason to use blockchain (or just be able to say blockchain in relation to their company) rather than letting the problem drive the technology choice. This behaviour suddenly disappeared when the bubble burst. There is lots of smart money in VC and the dumb money follows it. Mostly it seems to be from laziness and herd mentality rather than actually being stupid. If you invest a little in everything you're bound to pick a winner. The ratio is skewed way out past 1 in 10 in this scenario.

> It would be much more interesting and productive to discuss what they see in AI and why they feel so much urgency, rather than dismissing them as fools falling for magic.

I can answer that question without asking. People can see the potential to do more work with fewer people and make truckloads of cash. Everyone is scared of being the stupid one, left holding their dick whilst everyone else gets rich. When you have cash to spend, it's a small risk to take.

Experienced VCs are not the only business people who give out money.

My problem is with the generalization. The article and some comments in this thread paint all VCs and business people as dummies falling for shiny objects.

I don’t think anybody would argue with an article that said some percentage of any population makes stupid decisions for stupid reasons. But then again, nobody would read that article, either.

Ahh, yes, the motivation is somewhat similar here. It's internal projects at a big company. The people proposing them wouldn't get money per se, but they want the reputational boost of doing something cool and newsworthy. I'm the first person they have to get past with their (usually) bad idea.

You may want to pick one of the following flow charts:


The "Birch-Brown-Parulava" and Lewis models look very promising. I may try to combine them. Thanks!

My Open Source project focuses on educating mainstream users about blockchain and cryptocurrency with simple videos and content https://www.trycrypto.com

Bruce Schneier has a talk, which is probably online, and I think he has written a blog post on how there's always some trust needed in the system. Kevin Werbach has written a book on blockchain and probably has shorter pieces.

I want to strongly disagree with the conclusion, based on one-off personal anecdata.

Last year my working group within our company secured a major contract, the largest for our company that year. We expected we would have to hire 20+ entry-level positions to execute on the contract. While the process was fundamentally based on ML, we knew there would be a large quantity of human labor as well to deliver on time.

My colleagues and I, instead of going on a hiring spree, asked the overlords for 3 months of overhead time to develop "intelligent automation" to reduce the number of new/temporary hires. We made substantial enough gains to complete the work without bringing on any new hires, ahead of schedule and way under budget, with our existing crew.

It was adjusting our thinking about how we do our work, and incorporating ML at multiple levels in our system to intelligently guide our process, that allowed us to eliminate 60% of the new positions this work would previously have generated.

100% ML is coming for you.

Must not have been 20 entry-level tech workers, just 20 unskilled workers. 20 entry-level tech workers would be worth 2 senior workers; no management team would ever allow this.

I see a lot of similarities between "AI", as understood by business people, and the pursuit of alchemy and the philosopher's stone. It seems like a hustle to separate rich dupes from their money by promising them the keys to infinite wealth, immortality, Mars colonies, etc. In that respect it is mostly harmless, but it can be quite dangerous to the people who take it seriously.

I don't understand why people so seldom go straight for the knowledge. It seems pretty obvious by now that knowledge is the key itself. Though subjective sciences like Kabbalah help.

This is a tired trope, that “business people” are brainless and gullible. Almost as tired as the “VCs are so dumb they’ll throw money at anything-AI” idea repeated in the article.

Maybe, just maybe, the people running multi-billion-dollar companies, and multi-billion-dollar investment funds, are not stupid?

It would be much more interesting and productive to discuss what they see in AI and why they feel so much urgency, rather than dismissing them as fools falling for magic.

I can only speak for managers I have met: what they “see” is a promise of cost cutting and an opportunity to tell higher managers that they are ahead of the hype. But never, ever, have I seen one of these people say anything that suggests they have even a vague idea of what AI or ML is or what it can realistically achieve. And I’m not sure they are interested. Because as you say - they aren’t stupid - I just think they are part of a game of BS I don’t understand, involving higher managers, investors etc. I don’t think it’s so much about producing anything using AI; it’s AI for the sake of saying you are using it.

So perhaps not all fools but somewhere between con artist, willfully ignorant and fool.

(Note: this is all from “traditional” industry, i.e. the manager at the hammer factory proudly launching initiatives to “use more AI” in the factory. Not the tech industry. Not plausible or concrete use of AI.)

You are repeating the trope, just substituting “stupid” with other deriding terms: BS, con artist, willfully ignorant, fool...

Does it really have to be those things just because you don’t understand it? Is it possible those people running multi-billion-dollar organizations just know something you don’t, or have a perspective that you don’t?

> Is it possible those people running multi-billion-dollar organizations just know something you don’t, or have a perspective that you don’t?

While it is possible, I'm kind of tired of hearing people defending the strong and powerful. And I'm kind of tired of hearing the perspective of CEOs. It's pretty much all we hear in fact.

People 'running multi-billion-dollar organizations' are like the greek gods to us: capricious, arbitrary, and powerful, but also subject to the same flaws as normal people.

However, they do have one great privilege: that of being completely above the effects of their actions.

They don't need people to defend them. The whole damn system is designed to adulate them. We 'pray' to these people every day without realizing it.

Updated my answer: these are middle managers, such as division or site managers, not the top managers of the billion-dollar corporations (who might well have great visions, but it’s not on them to implement them). As I clarified, these projects are always vague initiatives such as “use more AI in our process” or downright marketing stupidity such as “Joe, I need you to work AI into the description of our latest hammer model”. Again, this is old, traditional industry.

Sounds like a great opportunity to provide value and charge accordingly.

Baking off-the-shelf anomaly detection into the hammer QA process might seem easy and “not true AI” to us, but it solves their problem. Maybe you can even educate them in the process, and explain the differences between AI, ML, DL, different use cases and methods, libraries, etc. I suspect, though, they won’t care because they just want to hit their business goals.
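For what it's worth, "off-the-shelf" here can be as basic as a z-score outlier check. A minimal sketch of the idea, with entirely invented measurements for the hypothetical hammer QA line:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated in-spec units: (head weight in grams, handle length in mm).
# Both the features and the spec values are made up for illustration.
normal = rng.normal(loc=[450.0, 330.0], scale=[5.0, 2.0], size=(500, 2))

# Learn the "normal" range from known-good units
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def is_anomalous(unit, threshold=4.0):
    """Flag a unit if any measurement is more than `threshold` sigmas off."""
    z = np.abs((np.asarray(unit) - mu) / sigma)
    return bool((z > threshold).any())

print(is_anomalous([480.0, 330.0]))  # head weight roughly 6 sigma out -> True
print(is_anomalous([451.0, 329.5]))  # within spec -> False
```

A real deployment would likely reach for something like scikit-learn's IsolationForest instead, but the point stands: it solves the factory's problem without being "true AI".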

You are right that successful business people are on the whole pretty intelligent. This isn’t always true of investors as individuals, but investment management tends to be reasonably smart too; some very much so.

I think FOMO is a real part of the cycle that causes a lot of bets to be made on untried technologies. Technologists might well reflect on how many of those bets are hedges, and what the true cost of failures are.

It’s also worth noting that a significant chunk of the most egregious bullshit I see flung around in the area is coming from technologist driven startups in the space...

sorry to s--t on your profession bro, but it is what it is.

People with money (inherited or won gambling) fall on the same intelligence spectrum as everyone else. There is a shallow end of the pool there.

There are plenty of smart business people who understand exactly what "AI" is and how to exploit it. That is the whole point of this article.

You don’t have to apologize for generalizations, since they’re so obviously untrue.

Supporting a generalization by saying “there’s a spectrum” is an oxymoron, though.

Ironically, the alchemists were trying to create a human being, known as a homunculus.


Inner alchemy feels like that as well!

"AI? Well, I'm still trying just to reach human intelligence!"

The claims in this article that corporations don’t do pure research and every project is targeted at revenue generation is false.

I’ve been in R&D at both Google and IBM TJ Watson on projects that were never even tangentially revenue generating. A company with 80,000 employees can easily afford to let hundreds or even thousands of employees operate on non-product focused research.

In fact on orientation day at IBM we were explicitly told that nothing we do will ever be a product, that there’s a firewall between TJ Watson and the rest of IBM and that any products will be rewritten by product teams.

It seems like the author is speculating about how things work, not writing from experience.

I don't know what this author is talking about and I'm not sure he does either.

I think his premise is that AI is bad and vague, and that it's bad because it's biased. But he just picks a few examples with no counterargument and spitballs from there.

AI, in my opinion, is technology that outperforms humans, often built with learning algorithms instead of explicit rule-based algorithms.

Except the AI I have been following is wildly successful in a variety of domains and probably is coming for you. The author is right that it could be biased, but people are working on these problems, and the author is acting as if no progress will ever be made in this relatively young field.

>Translation software was once considered by serious people to be “AI” – until it became easy.

Excuse me? Translation is possibly impossible for interesting work, but certainly nothing we're doing now is even trying[0]. What we're doing right now is the equivalent of "translating" road signs and dinner menus.

[0] https://www.theatlantic.com/technology/archive/2018/01/the-s...

Good piece. In some ways the hype is even worse than OP suggests.

It’s not even that the current trend of “machine learning” (i.e., data processing) is at best a tiny subset of what anyone would consider to be actual intelligence (the ability to act rationally in a wide range of situations, including novel ones, i.e., common sense).

It’s that the very idea of “artificial intelligence” is impossible. Terry Winograd, one of the earliest AI researchers in the 60s/70s, wrote a great book about this called Understanding Computers and Cognition, in which he lays out a coherent theory, grounded in philosophy and biology, for why the pursuit of machines that think like humans is doomed to fail. The crux of the argument is that the essence of intelligence/common sense has to do with something other than symbolic representation, i.e., the kind of thinking one does while playing chess or using language. It has to do with “being-in-the-world”, which sounds weird but is a really useful concept from Heidegger.

Winograd gives the example of a man using a hammer to illustrate this. A man using a hammer has no mental model of a hammer while he is pounding in a nail. The hammer simply becomes an extension of his arm. The only time a representation or mental construct of “hammer” becomes relevant is if there is a breakdown, like if the hammer slips. But otherwise, the intelligent act of hammering occurs without any “hammer” objects in the mind of the hammerer.

I’m vastly oversimplifying here, but the book (first published 1985) is quite well argued and frankly persuasive. Highly recommended to all those who seek to separate AI fact from fiction in these buzzy times.

Current AI is a marketing term for trained algorithms performing analysis using machine-derived statistical models (that's the learning part). The marketing term was created to help companies raise funding, and then taken over by journalists who are tired of eating dirt and will write about any possible fear for click revenue. That's where this really took off: in the public media, where any idiot can write anything and somewhere an entire sub-culture believes them.

So is Machine Learning. It can be dangerous too:


James Mickens is awesome BTW.

A lot of the discussions of how AI is coming for your jobs follows the same reasoning of how automation is/was, even before it was called machine learning.

The data science team at a major bank I used to work for was historically split into a team that sourced and cleaned the data, a team that explored the data and built the models, and a team that learned and ran the models going forward. In my opinion the final team was the most important, and the most tragic to see condensed: they noticed when the models were missing the mark and needed to go back to the model builders for adjustment.

The "lost jobs" in this case is that when I was on the team we had to learn the entire bank database structure and source our own data, build the models and then automate them in such ways to "catch" when they were missing the mark.

The team will be further shrunk as the software tools provided have better auto-sql for sourcing the data, automated model building functions, and automated visualizations thus removing even that as a special skills.
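The "catch when they were missing the mark" part can be sketched as a simple rolling-error monitor. This is my own toy illustration of the idea, not the bank's actual system; all the numbers are invented:

```python
from collections import deque

class DriftMonitor:
    """Flag a deployed model for retraining when its recent error rate
    drifts meaningfully above its error rate at deployment time."""

    def __init__(self, baseline_error, window=1000, tolerance=0.05):
        self.baseline = baseline_error       # error rate when first deployed
        self.window = deque(maxlen=window)   # rolling record of hits/misses
        self.tolerance = tolerance           # allowed degradation

    def record(self, predicted, actual):
        # Store 1 for a miss, 0 for a hit
        self.window.append(predicted != actual)

    def needs_retraining(self):
        if len(self.window) < self.window.maxlen:
            return False                     # not enough recent data yet
        recent_error = sum(self.window) / len(self.window)
        return recent_error > self.baseline + self.tolerance

# A model deployed at 10% error that now misses 20% of the time:
monitor = DriftMonitor(baseline_error=0.10, window=100)
for predicted, actual in [(1, 1)] * 80 + [(1, 0)] * 20:
    monitor.record(predicted, actual)
print(monitor.needs_retraining())  # 0.20 > 0.10 + 0.05 -> True
```

The tooling can automate the check itself, but someone still has to pick the baseline and the tolerance, and decide what to do when the flag goes up.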

Like automation, the ideal is that workers who are no longer doing the jobs that “AI” (aka automated systems) can now do will move on to higher-level or more interesting work. But I agree that there’s often far too much trust that a system can now run itself with little to no checks or humans to tune the system.

So long as any form of automation has existed, people have been claiming that the sky is falling and that automation will put people out of work. And in some respects it does, but in the grand scheme of things, quality of life improves and people find work.

In the most simple terms it works like this:

1. Workers create automated systems.

2. Automated systems create wealth.

3. Wealth creates workers.

And the cycle repeats, ad infinitum.

So long as there are people with money, there will be jobs to be done. Perhaps in the future, AI employers will hire human workers that are trained by AI educators to do human tasks. There may come a time where AI/Robotics are better than humans at every task, and when that day comes, humans will compete on price, and when they're priced out of the market, they'll either merge with the machines or simply be left behind. But that time is many hundreds of years away, and I'm not convinced that machines would even want to stay on Earth. Space is much more conducive to a machine society.

Well said. Job creation is not the hard part. Eliminating jobs is. Every time a job becomes automated, the workers eventually find other industries/professions to work in, boost aggregate productivity in those other sectors, and thus end up creating more wealth for society as a whole.


“Eventually” is the key word.

Lives can be ruined during that eventually, and some don’t make it.

The same AI that puts people out of work will be used to make complex tasks simple enough for unskilled workers, as well as to train those unskilled workers to be more skilled.

Automation does put individuals out of work. The last industrial revolution had violent protests and resulted in a slew of new legislation and social programs.

Later we collectively have always recovered and found new ways to trade time for money but much like how evolution works, individuals don’t always fare well when the environment changes.

The primary way we adapted to automation in the Industrial Revolution is with public education.

When AI starts really affecting the human job market, we will see advances in AI and VR education that will fill the gap by teaching humans with methods that will be more effective than any in history.

Imagine one-to-one instruction with an intelligent AI that has access to all of human knowledge with infinite patience and works for peanuts. Then pair that with total immersion VR and you have a compelling education platform for training just about anything.

'“AI” is not something anyone needs to be worried about. A world mediated by unaccountable corporate software platforms is.'


Got a recruiting email about a company working in Augmented Reality, remote. Normally I'd ignore recruiting emails, but the idea of sitting at home wearing a hololens sounded too cool to not at least LISTEN to the pitch.

Yah, they're not doing AR. The guy explained it was a marketing gimmick for VCs (though he made it sound nicer than that).

The author is uninformed and wrong. The hype wave propagates faster than the utility wave, but if you believe there was no lightning just because you haven't heard the thunder yet you will be sorely mistaken.

Function approximation for classification tasks is slowly but surely "eating the world."

Planning and sequential decision making based on RL are being hybridized with classical control methods for demonstrated utility.

Reasoning, rapid learning, and useful adaptation are being attacked in hundreds of research papers a month.

Remember that technologies don't arrive; they go through a phase transition from "That's not real X" to "That's obvious and boring," with almost nothing in the middle.

Whether you want to call all the above "AI" is irrelevant. Tasks which were recently assumed to require humans have been shown to be tractable with other methods.

I don't know if I agree with the author's argument that AI is bad only because people are bad. I'm aware that garbage in yields garbage out. But I also find myself squirming at the idea of accomplishing company goals by identifying nonlinear patterns in data with neural networks, especially if those goals involve modifying user behavior, à la watch time.

Am I being a paranoid biophysicist? This isn't my field. If I use AI, am I not, by definition, ignoring responsibility? At least with a regular computer program, I understand every step of the algorithm and can judge whether it's doing the right thing. But with neural networks I'm just feeding data into a black box and hoping the ends justify the means. I don't actually know what patterns it is finding and utilizing.
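The box isn't totally opaque, for what it's worth. One common (if crude) probe is permutation importance: shuffle one input feature and measure how much performance drops. A toy sketch, where the "black box" is an invented stand-in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": in reality this would be a trained network we can't inspect.
def black_box(X):
    return (3.0 * X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

X = rng.normal(size=(2000, 2))
y = black_box(X)  # labels the box predicts perfectly, for the sake of the demo

def permutation_importance(model, X, y, feature, n_rounds=10):
    """Accuracy drop when one feature is shuffled: bigger drop = more important."""
    base_acc = (model(X) == y).mean()
    drops = []
    for _ in range(n_rounds):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        drops.append(base_acc - (model(Xp) == y).mean())
    return float(np.mean(drops))

print(permutation_importance(black_box, X, y, feature=0))  # large: dominant feature
print(permutation_importance(black_box, X, y, feature=1))  # small: nearly ignored
```

It tells you which inputs the model leans on, not why, so it only partially answers the responsibility question, but it's a lot better than nothing.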

"Global, monopolistic platforms like Google, Facebook and Amazon do not pursue “AI” as a science project. Nor do hospitals, insurance companies, banks, airlines or governments. They do so for specific, strategic purposes, which in corporate settings are aimed at generating new revenue. "

Well, many companies will throw some money willy nilly at AI 'because' HBR said to, or consultants are pushing it, or they 'know it's coming' and need to start somewhere.

It might be rational to do some experimenting, but many companies don't really have a clue; they're just throwing money at it. FB and G obviously have a clue, but even they have so much money and talent that they can afford to experiment even when the ROI might be way, way off.

I sell AI and ML service-based solutions to large enterprises and if anything they're highly skeptical of AI and ML adding value and are always comparing its ROI to the alternative of basic data management, mastering, out-of-the-box analytics tools.

I'd say they're a lot savvier than anyone in this thread is giving them credit for.

this article is of course reasonable, but what’s depressing is we genuinely have made really incredible strides in machine learning. like enough that it shouldn’t be possible to overhype it. of course people manage though.

"No one has any idea what “artificial intelligence” even means."

Usually AI refers to solving real-world problems that humans (or smart animals) are good at that aren't easily solved using traditional algorithms such as sorting/searching, graph algorithms, numerical methods, combinatorial optimization, etc..

Sometimes it also refers to a computer opponent in a game, or the algorithms and strategies used by such an opponent.

There is something to the statement though in that AI seems to be a bit of a moving target: once we have a good algorithm (and input data) to solve a problem (e.g. winning at checkers) conclusively, then it may no longer qualify as "AI" - even if it did for years up to that point!

Companies try to replace people with machines every time they think it's even half possible. A recent example from Suncor:

"As Suncor Energy prepares to shed 400 jobs to prepare for the implementation of driverless ore-hauling trucks, the union representing workers at the company is publicly condemning the decision."


I liked the article. It mentions the fallacy of Objectivity by Indirection: "I didn't say that - the system (I designed) said it!"

But we've been doing that with mathematical models for years. Just look at the controversy over books like The Bell Curve or Freakonomics.

I think that’s the future. AI will give people something to hide behind. Imagine a US health insurer that declines you because of “AI”. It’s already bad enough the way it is, but this will take it to a whole new level.

>But we don’t need to “regulate AI.” As we’ve seen, “artificial intelligence” is mostly a constructed catch-all term for lots of different types of technology

The author forgets here that AI is not static. It’s a moving target.

The examples of AI that exist today may be easy to brush off, but let’s not assume that extends to examples of AI in the future.

In the meantime it is super annoying that simple techniques get marketed as “AI”.

> And it turns out that racially discriminatory lending and coverage can be quite profitable – particularly when enabled with technological precision.

This is a troubling statement. Is this just the author speaking, or is there data to back it up?

Not sure what data you expect to see. People generally don't publish their discriminatory lending and insurance policies. Or do you think that the statement that it's profitable might be incorrect?

There’s decades of history about the practice of redlining in the US (a good term to Google), as well as more recent court cases related to subprime lending.

More recently, Facebook was charged by HUD for allowing people to buy ads and filter out by race, sex, and other protected statuses under the Fair Housing Act https://cdn.theatlantic.com/assets/media/files/hud_v_faceboo...
