
Why is AI so useless for business? - mebassett
https://mebassett.info/ai-useless-for-business
======
bo1024
AI gets such a bad rap. People only think of unsolved or sort-of-solved
problems as AI, and don't give AI any credit for the problems it has solved, I
guess because by definition those problems seem easy.

Think how much Microsoft Office and competitors have amplified business
productivity over the last 20 years (yeah yeah, make your jokes too). Word and
Powerpoint and Excel are full of AI whether it's spellcheck or auto-fill,
drawing algorithms like "fill with color", etc. So many things that were AI
research papers of the 70s, 80s, or 90s. And those innovations continue today.

Logistics companies rely on huge amounts of optimization and problem-solving.
Finding routes for drivers and deliveries, planning schedules, optimizing
store layouts, etc. -- that's AI.

Employees use AI tools to improve their lives and productivity whether it's a
rideshare or maps app to get to work, speech-to-text, asking Siri for answers,
translating web pages, etc. All of this comes out of research in AI or related
fields.

How many office jobs _don't_ require someone to use a search engine to find
and access information related to a query? Information retrieval is one
million percent AI.

Robotics and automated manufacturing have been huge for a long time -- robotics
is closely connected to AI and related problems like control theory.

The best applications of AI have almost always been to support and enhance
human decision-making, not replace it.

~~~
chrisfosterelli
The AI effect: AI gets a bad rap because once something exists and is
practical it's no longer called AI.

[0]:
[https://en.wikipedia.org/wiki/AI_effect](https://en.wikipedia.org/wiki/AI_effect)

~~~
VBprogrammer
This reminds me of the Tim Minchin song Storm. "Do you know what they call
alternative medicine that has been proven to work?...Medicine."

I've never thought of it that way before but alternative medicine and AI have
a lot in common.

~~~
mehrdadn
What're some examples of things that were called "alternative medicine" but
that were later proven to work and regarded as "medicine"?

Edit: Since people are bringing up ancient examples and kind of missing the
point of the question: I'm not looking for examples from the Roman Empire
here. Let's stick with the past < 50 years. Maybe something your parents might
actually remember being dismissed as "alternative medicine" (whatever that
meant at the time) but which now is clearly just accepted "medicine".
Basically, try to find something that's in the spirit of the question. The
goal is obviously to find things that modern medicine actually previously
dismissed and later accepted, not to find a loophole in the question.

~~~
kevin_thibedeau
The Australian doctor who proved ulcers and stomach cancer could be caused by
bacterial infection was dismissed until he inoculated himself and cured it
with antibiotics.

[https://www.discovermagazine.com/health/the-doctor-who-
drank...](https://www.discovermagazine.com/health/the-doctor-who-drank-
infectious-broth-gave-himself-an-ulcer-and-solved-a-medical-mystery)

~~~
mehrdadn
That is incredible and terrifying. Thanks for sharing!

~~~
VBprogrammer
I'm personally very grateful to that doctor. I used to suffer from quite
painful stomach ulcers in my teens. I was scheduled for an endoscopy but the
doctor I was referred to had obviously heard of this research. One week of
antibiotics later and I have never suffered from it again.

------
NalNezumi
I'm going to go against the flow of most comments here and say that it's not
always business misunderstanding AI. Badly labeled data and unclear
goals/expectations, sure, but the latter should be identifiable by a good
ML/data scientist, if you have any insight into what you can actually deliver.

But most ML/data science people have no proper understanding of AI/ML, or of
when plain traditional "coding" can solve the problem better than throwing
fancy statistical models and buzzwords at it.

I'm not in business nor AI/ML, but in Robotics. And as a person in Robotics
it's always the same experience working with AI/ML engineers: they first say
they require large amounts of data, then make great promises (but never
specific metrics, except maybe a percentage of success). Then they deliver a
module that fails outside the _perfect_ scope of deployment (works only in the
lab at 1pm). This is of course never specified in the delivery. Also crucially,
it does not give a good indication of failure. The amount of ad-hoc patching
you need to add _after_ the thing is delivered is just staggering.

On top of this reality, most ML/data science people's response to this entire
process is to point and blame the data, or to say "well, you guys are expecting
too much from this!", when they had ample time to outline the scope, limitations
and requirements _before_ they even started collecting the data.

~~~
nnq
> have any insight to what you can actually deliver

Maybe that's the problem. In lots of AI/ML problems _you just CAN'T know
ahead of time what can be delivered_; you need to spend the time and resources
to do it and _then_ see how well it works...

The problem imo is on the business side: most businesses _don't know how to
transform unpredictable progress into profit_ (even if, averaged over a long
timeframe, that progress might be HUGE).

So ML/DS people need to overpromise in order to get anything approved,
otherwise they'd just have to sit around and do nothing, and overall everyone
would be worse off too, because that real but unreliable progress would never
happen.

~~~
Jommi
I would love to learn how unpredictability can be leveraged as a positive. Any
tips?

~~~
conjectures
How does insurance work?

~~~
Jommi
Is this not exactly the opposite? They bet that they can predict the
occurrence of something better than you can.

------
ethanbond
I’ve been working in the “real world business processes that companies are
trying to AI-ify” realm for quite a while now. Pharma, cyber security, oil and
gas production, etc.

This article doesn’t mention a really, really straightforward factor for why
AI hasn’t invaded these domains despite billions of dollars being dumped into
them.

An automated process only has to be wrong _once_ to compel human operators to
double or triple check every other result it gives. This immediately destroys
the upside as now you’re 1) doing the process manually anyway and 2) fighting
the automated system in order to do so.

99% isn’t good enough for truly critical applications, especially when you
don’t know for sure that it’s actually 99%; there’s no way to detect which 1%
might be wrong; there’s no real path to 100%; and critically: there’s no one
to hold responsible for getting it wrong.

~~~
ChuckNorris89
Huh, N26, a major online bank in Europe, is famous for some of its customers
getting their accounts blocked every time they tweak their ML model, and yet it
is doing great financially despite the shitstorm it generates each time.

It's not like Google blocking your email or YouTube account; we're talking
about your friggin bank account here.

I don't know how they're still in business and growing with such a process in
place.

~~~
bsaul
Not working directly in the field, but I believe ML for fraud detection is
very common nowadays...

~~~
saiya-jin
Yeah, but usually the bank just buys a packaged solution with some customization
for accessing the data & reporting, so they don't have to keep an army of
very narrowly specialized folks catching up with ever-changing laws etc.

------
Macuyiko
In my opinion, most of the issues around AI "failing" in traditional
organizations are due to the following:

(1) Inflated expectations from higher/middle management which trickle down the
organization. AI is seen as a high-profile case which has to lead to success
(and a larger budget next year for my dept.)

(2) Data quality issues. The data itself has issues, but the key issue is lack
of metadata and dispersed sources. Lack of historical labels (or them being
stuck in Excel or on paper) is part of this as well. Big data without any
labels is mostly useless, contrary to expectations.

(3) Most AI or ML projects are not about ML. In fact, they're mostly about
automation or rethinking an internal or customer-facing process. In many
cases, such projects could be solved much better without a predictive
component at all, or by simply sourcing a 1 cent per call API. AI is somehow
seen as necessary, however, without which our CX can never be improved. ("We
need a chatbot" vs. "No, you just need to think about your process flow")

(4) Deployment issues and no clean ways to measure ROI lead to projects being
in development indefinitely without someone daring to stop them early. This is
also related to orgs starting 30 projects in parallel (2-month lead times with
one to two data scientists for each), which all end up doing kind of the same
preprocessing and all lead to kind of the same propensity model. No one dares
to invest in long-term deeply-impacting projects as "we want to go for the low-
hanging fruit first".

~~~
leto_ii
I pretty much agree with all of your points, but I also think there may be a
more fundamental issue at play here. ML doesn't actually "understand" things -
it can do very sophisticated and accurate pattern matching without actually
"knowing" the logic of the patterns it's matching.

This in turn means that it may fail catastrophically when faced with
adversarial examples or with examples that are drawn from a different
distribution than that of the training set.

~~~
falcor84
How is that different from a typical human given a routine boring role?
Specifically in regards to adversarial input, humans are often the weakest
link in terms of process security.

~~~
leto_ii
I wasn't thinking that there are no roles that can be automated, rather that
there are some that can't be.

> Specifically in regards to adversarial input, humans are often the weakest
> link in terms of process security

I have yet to encounter a (healthy) person who looks at a photo of static and
mistakes it for a cat.

~~~
duckmysick
How about a photo of a dress where some people say it's blue and some say it's
gold?

~~~
leto_ii
Optical illusions indeed highlight limitations in human perception. However,
the dress illusion seems to me far less of a problem than mistaking noise for
an object.

More relevant however is that we humans can understand that we're faced with
an optical illusion and we can make adjustments accordingly. We have formed
the concept of an "optical illusion" and we just place "The dress" in that
category. A machine needs to be specifically trained on adversarial examples
in order to be able to predict them. Once you come up with a different class
of adversarial examples it will continue to fail to detect them. There is no
understanding there, just more and more refined pattern matching.

Does a machine that can match any pattern actually "understand"? I would say
no. But these are already philosophical considerations :D

~~~
duckmysick
> More relevant however is that we humans can understand that we're faced with
> an optical illusion and we can make adjustments accordingly.

Broadly speaking, yes. At the same time, that's not what happened with the
dress in 2015. It produced so much polarizing content, with people deeply
entrenched in their beliefs. They might have recognized it as an optical
illusion, but they refused to make adjustments.

> A machine needs to be specifically trained on adversarial examples in order
> to be able to predict them. Once you come up with a different class of
> adversarial examples it will continue to fail to detect them. There is no
> understanding there, just more and more refined pattern matching.

Moving away from image recognition examples, isn't that exactly what happens
with humans predicting whether an email is a phishing attempt? I remember
reading here on Hacker News this week about phishing tests at GitLab. It had a
lot of comments about tests and training employees to spot adversarial emails.
Some companies are more successful than others. It is a complicated
problem; otherwise we would have solved it already. But it's the same
principle because phishers come up with different ways of tricking people. And
some people will fail to detect them.

~~~
leto_ii
I would say that there are indeed many examples of things that are hard to
categorize for humans. Sometimes there isn't even a way to categorize things
perfectly (there may be a fuzzy boundary between categories). It is for
example really hard to train people to figure out if a certain stock is going
to be profitable or not - there are many other such examples.

This doesn't mean that the kind of thinking that goes on in the human mind is
the same as the pattern-matching that goes on in an ANN (for example). Think
about how people learn to talk. It's not like we expose infants to the Wikipedia
corpus and then test them on it repeatedly until they learn. There are
structures in the brain that have a capacity for language - not a specific
language, but language as an abstract thing. These structures are not the same
as a pre-trained model.

The truth is I don't know enough about cognitive science to properly express
what I'm thinking, but I'm pretty sure it's not just pattern matching :D

------
euix
I have been in this space in the financial sector for two years. I think this
article is mostly spot on. There is one other piece: typically the places that
can most benefit from innovation can get most of it just through automation,
or RPA as it is now called. Basically, some guy filling in a spreadsheet and
copying it to someone else: replace that with a bot.

But even that and other processes are difficult, because a lot of these
corporate enterprises have a bazillion different systems that don't talk to
one another. Forget data science or ML; you really just need a unified data
view. Typically the workflow is: some use case comes in, and an analyst
manually pulls data from some system via the GUI (because that is all they
interact with). A model is built based on that data set, and the project stops
dead in its tracks from there on because it's impossible to get an API to
query for that data from its source system. That is a technology and business
process project and will rapidly blow up into a clusterfuck.

The key competitive advantage of these so-called "technology" companies is
really this: the ability to expose any part of your data storage and pipeline
to any other part of the organization as an API. Every piece of software is
built with that concept in mind.

~~~
keenmaster
Microsoft recently developed a modular form of Microsoft Office which allows
you to integrate a modifiable spreadsheet into things other than .xlsx files.
Automation may be easier if disparate systems all used the same modules. This
is especially true in the finance sector where everyone uses Excel.

~~~
mkl
Recently? That sounds like OLE [1], which has been around since the 1990s.
Have they redone it?

[1]
[https://en.wikipedia.org/wiki/Object_Linking_and_Embedding](https://en.wikipedia.org/wiki/Object_Linking_and_Embedding)

~~~
keenmaster
Microsoft’s Fluid Framework was announced at this year’s Build conference.
It’s like OLE + Google Docs “on steroids.” Imagine sending an email with an
embedded spreadsheet to multiple recipients who then collaborate and edit in-
line without opening the Excel file. Or PowerPoint, Word, etc... Everyone
would see the edits in real time. The modules are drag and droppable. The
whole framework is also open source, so Microsoft envisions developers making
new applications out of the core concept.

[https://www.theverge.com/2020/5/19/21260005/microsoft-
office...](https://www.theverge.com/2020/5/19/21260005/microsoft-office-fluid-
web-document-features-build)

------
normalnorm
Because "Artificial Intelligence" is a label forever applied to the effort of
replicating some human cognitive ability on machines. A well-known lament goes
something like: "once it's possible, it's no longer AI".

Business is about exploiting what exists. This is why the buzzword is
"innovation", not "invention". Incremental improvements, not qualitative
jumps. So nothing will ever be really considered "Artificial Intelligence"
once it is boring enough for business.

Scheduling algorithms are incredibly useful for business. There was a time
when this was considered AI, but that was the time when they didn't work well
enough to be useful.

------
laichzeit0
> why can't it read a PDF document and transform it into a machine-readable
> format?

> why can't I get a computer to translate my colleague's financial spreadsheet
> into the format my SAP software wants?

Because you probably expect it to be 100% or maybe 99.999% accurate, and we
can't do that. Imagine "AI" translating someone's financial spreadsheet into a
different format and dropping a zero somewhere. Oops... but your test set
accuracy is 99.8984%. Still not good enough. Just getting 1 thing wrong breaks
everything. This is fundamentally different from clicking on image search and
ignoring the false positives.

~~~
thewarrior
Why exactly do you need an AI that reads a PDF? Unless you're dealing with
ancient data, wouldn't it be easier to have whatever is generating the PDF
return machine-readable data?

This suggests to me that a lot of office jobs will be lost just by modernizing
systems and making them spit out JSON.

~~~
NikolaeVarius
PDFs do not solely consist of computer-generated files. PDFs also come in the
form of scans of paper that has been handwritten on.

~~~
thewarrior
Those should be replaced by web apps or mobile apps. It's 2020.

------
tragomaskhalos
Our immediate goal should be to set our sights lower; forget ML, instead
improve and expand technologies like RPA
([https://en.wikipedia.org/wiki/Robotic_process_automation](https://en.wikipedia.org/wiki/Robotic_process_automation)),
which is only "AI" in the narrowest sense.

Example: my wife is an admin in a school office, and a ludicrous amount of her
and her colleagues' time is spent on replicating data entry between a
multiplicity of different incompatible systems. The Rolls Royce / engineer's
solution to this would be to provide APIs for all these disparate systems and
have some orchestration propagating the data between them, except of course
that's never going to be remotely practical; instead, dumbly spoofing the
typing that the workers do into the existing UIs is a far more tractable
approach. My (admittedly not 1st person based) experience of these things is
that they currently still require significant technical input in the form of
programming and "training", but this fruit has got to be hanging a lot lower
than any ML-based approach.

~~~
tyingq
RPA can be a dangerous band-aid. It often uses screen scraping or similar
brittle interfaces that are known to change. Or it doesn't know about certain
error conditions, etc.

Also, if it's been running for months before it breaks, the humans that used
to do the work are gone, or have forgotten how to do it.

~~~
afarviral
At my work we are leaning heavily on RPA to automate away drudgery and
ultimately reduce expenditure. However, it has been immensely frustrating,
prone to errors, and has garnered endless suspicion. The experience has been
that bots written by the service desk staff doing the job function better and
are much more under our team's governance, whereas the official "automation"
teams within our org are painful to deal with due to their lack of availability
and not having first-hand knowledge of the things being automated. I think the
RPA approach requires dedicated people developing and monitoring the bots who
have an active part in the process being automated. The difference in approach
determines the result.

~~~
tyingq
I've experienced the same. Centralized RPA teams tend to, for example, do web
scraping when they could easily use an existing REST API, because they either
don't know it exists or don't have that skill set.

Similarly, I've seen things like using an email as a trigger when the source
application has configurable webhooks.

Feels like there's an RPA culture of sorts that assumes the things being
automated only have human-based interfaces.

------
jerzyt
A better question would be why AI works great for some businesses (e.g. Netflix,
Airbnb, Uber, Waze, Amazon) yet fails miserably for others (JC Penney, Sears).
In my view, the older companies are trying to strap AI on top of a
traditional dataset which never collected any useful signals. The new
companies designed their entire business concepts around data, and collected
what's needed from the get-go. Sears may have 100 years' worth of useless
data; Airbnb has about 13, but it's so much more informative. Amazon applies
A/B testing all the time - would anyone at Sears even know what it is?

A secondary issue with business data is that the vast majority of the features
are categorical, for example: vendor id, client id, shipper id, etc. These
usually get one-hot encoded, and you end up with hundreds of features with no
meaningful distance metric. Random Forest and XGB are about the only ones that
produce somewhat rational models, but in reality they are good because they
approximate reverse engineering of the business process.
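To make that explosion concrete, here's a minimal sketch with made-up IDs
(pandas for illustration; nothing here comes from any real dataset):

```python
# Made-up order data: every feature is an opaque categorical ID.
import pandas as pd

orders = pd.DataFrame({
    "vendor_id":  ["V017", "V512", "V017", "V238"],
    "shipper_id": ["S03",  "S03",  "S11",  "S42"],
})

# One-hot encoding turns each ID column into a block of 0/1 indicator
# columns; with real business data this easily becomes hundreds of features.
encoded = pd.get_dummies(orders, columns=["vendor_id", "shipper_id"])
print(encoded)

# In this space every pair of distinct IDs is "equally far apart":
# V017 is no closer to V512 than to V238, so distance-based models get
# no geometry to exploit. Tree ensembles (Random Forest, XGBoost) cope
# better because they only ever split on individual indicator columns.
```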

And lastly, the hype far outweighs the possibilities, at least until the
businesses are ready to re-engineer their processes, if it's not too late.

~~~
MattGaiser
AI works for online businesses where you have millions going to a single
interface and thus any testing is on a random selection of the population.

You couldn't do that as well with different stores, simply because malls have
different demographics (meaning any conclusions could be noise) and the costs
of shuffling where things are located are high, so you can't just test 50
different setups and see what works.

~~~
jerzyt
Sears has been around for about 100 years, and they invented the
catalog sales model - they were the Amazon of that era. I don't blame them for
not doing things like A/B testing 100 years ago, but 10 or 20? They were
asleep at the wheel. That's when Walmart took off like a space rocket.
Essentially the same offering to the same demographic. Yet Sears collapsed.

~~~
arbitrage
There were other factors at play in the downfall of Sears beyond just a failure
to take advantage of its position at the time. Sears, in a sense, became a
victim of its own success. In the heyday of parasite capitalism, it became
more profitable for a small group of bad actors to make Sears fail.

Xerox is a more apt example of the 'missed opportunity' narrative.

------
saalweachter
I like to focus on the need for a spec.

The hardest part about programming is that you have to say what you want to
happen clearly and precisely. You can't just say "I want a text editor", you
need to say all sorts of specific things about how the cursor moves through
the text and how you decide what text is displayed on the screen when there is
too much to show all at once and how line-wraps work and whether you
acknowledge the existence of "fonts", and what happens when you click randomly
on every pixel of the display.

The program usually shouldn't _be_ the spec, but you can't write the program
without actually specifying everything that can possibly happen over the
course of that program executing.

One of the things that makes AI/ML so hard is that we don't want to write a
spec, most of the time. If we could write a precise spec that a computer could
understand, we've typically already written the program we want. There are
some cases where we can, like games or math, but most of the time, what we
want to do is provide our AI/ML with a bunch of data and say "you figure out
what I mean". "Label the pictures of dogs", "identify the high-risk loan
applicants", and so forth.

Our AI/ML is actually solving two problems: first, it has to come up with a
spec on its own, and then, it has to create a solution to it.

And here is where things get rough: we generally don't know what spec our
AI/ML came up with. Did we train a model to identify dogs, or to identify dog
collars? Does this model find high-risk loan applicants, or people of certain
ethnic backgrounds?

The problem with many real-world and business applications for AI/ML is that
the spec is really, really important.

~~~
dgb23
One of the core features of a spec is semantics.

We're just getting "42" and are stumped, because we didn't describe the
meaning of the question nor the answer in a formal way.

To apply meaning we need "us" or the real world. And we need to decide its
form.

------
syllogism
> why can't it read a PDF document and transform it into a machine-readable
> format?

It can definitely do that, but you might not like the cost/benefit analysis,
depending on how many such documents you want to process. The costs are coming
down steadily though as the tech improves. If you need to do millions of such
documents, yeah a model will probably be worth it. But if you need to do a few
hundred you probably should just do them manually.

The thing is, reality has surprisingly high resolution. When you give out a
task like this to a person, they will likely come back to you for
clarification about how you want to deal with some of the examples. Your
initial requirements will be underspecified, or incorrect, in some details.
When you are dealing with a person, these minor adjustments are pretty
inconsequential, and so you don't really notice it happening. The worker might
also have enough context to guess what you want and not ask, and just tell you
the summary when they deliver the work.

If you're training a model, you need to work through all these annoying
details about what you want, just as you would when you're creating any other
sort of program. This adds some overhead, and places a lower bound on how many
examples you'll need to have annotated -- you'll always need enough examples
of annotation to actually specify your requirements, including various corner-
cases. You need enough contact with the data to realise which of your initial
expectations about the task were wrong.

So there will always be a lower scaling limit, where the automation isn't
worthwhile for some small volume of work. The threshold is getting lower but
there will always be a trade-off.

~~~
dwighttk
> The thing is, reality has surprisingly high resolution. When you give out a
> task like this to a person, they will likely come back to you for
> clarification about how you want to deal with some of the examples.

Why isn’t this generally solved, though?

~~~
syllogism
What could that even look like? In the extreme case this is like asking, "Why
do I have to write programs, can't the computer just do what I tell it?".
Writing the program _is_ telling it what you want. In supervised machine
learning, annotating the data is, instead.

If you tell me, "Go to ebay and get me a list of the prices of washing
machines", that sounds simple, but then I'm faced with some washer/dryer combo
or some hand-crank contraption. These are things you didn't think of. I can
either ask you, or take a guess and hand you something that needs to be
cleaned up later.

If I'm instead training a model, I need to encounter these tricky examples in
order to ask you for a policy on them. If I'm collecting an unbiased sample of
the training data, this could take a very long time. If there's some sampling
strategy maybe it's faster, but there's still a minimum number of examples we
need to think about, no matter what.

------
raghava
In fact, the author could actually dig further and look at the potential
losses an "AI-fied" solution could bring forth.

1\. Unexplainable algorithms that cannot demonstrate fairness, and biased
algorithms - causing firms to be dragged to court for discrimination where
AI was used for decision-making that impacted lives/careers (lending, credit,
recruitment, medical procedure suggestions, financial modeling, etc. - just to
name a few).

2\. Biased algorithms resulting in small tainted outputs that snowball into a
larger loss built up through slow leaks over time. (A few AI-based cloud
app/infra monitoring systems end up deciding the wrong scale-out factor/sizing
- based on past history but not considering the real situational context/need -
resulting in a net loss over a longer time.)

3\. Some AI-fied solutions just outright deny users the level of control
that's really warranted ("full automatic, no manual" mode). This mostly
happens where the buyer never uses it firsthand but buys based on brochure/ppt
walkthroughs, and real users are disconnected from the decision-making ivory
towers. The risk being that these systems get in the way; instead of aiding
productivity, they end up being another JIRA - a hassle one could really do
without.

------
mark_l_watson
I have been an AI practitioner since the 1980s, sort of a fan! That said, I
like this article on several levels most particularly for calling out possible
AI products for business.

I lived through the first AI winter. As effective as deep learning can be,
problems like model drift, lack of explainability, and getting government
regulators to sign off on financial, medical, etc. models are very real
problems.

Two years ago I was at the US Go Open and during a social break I was talking
to a lawyer for the Justice Department and he was telling me how concerned
they were about the legal problems of black box models.

------
TulliusCicero
> We've taught computers to beat the most advanced players in the most complex
> games. We've taught them to drive cars and create photo-realistic videos and
> images of people.

No, we haven't. I mean, we've made progress in those areas, but there's still
a long way to go.

The best AI in Starcraft, AlphaStar, still can't beat the strongest players
without relying on simply out-clicking them.

Driverless cars are still in the testing and development phase, none of them
are smart enough yet for widespread deployment.

------
tempodox
You might just as well ask why that miraculous cure for baldness is so
useless. You let some used-car salesman talk you into believing that it
actually works, but it doesn't — no matter how much you want it to.

------
dave_sullivan
In my opinion, it's because business operations isn't that complicated and
people don't know what AI is.

By "not that complicated", I mean a decent CRM system to track information
about the organization is approaching peak operational efficiency for most
businesses. Most inefficiency I see after that is people/political problems.

By "people don't know what AI is", I mean that business owners are unable to
describe their business problem as a supervised learning problem. If you can
formulate your business problem as a supervised learning problem, then you can
probably solve it with AI (which, yes, is really just a marketing term for
supervised ML).

But most business problems are really "order taking" or "production/delivery"
or "moving things through a funnel" problems and thus AI isn't the solution,
CRM or CRUD apps are the solution.

~~~
dade_
"AI is a 'brain' that you plop down in the organization and it does
'unsupervised learning' to automate your business and delight your clients
with optimized outcomes." \- As explained to me

I now see Google sales dragging these people out to 'AI meetings', which are
about Dialog Flow. (The meeting pitch email gets around; business managers
reply all: 'WTF is this?') Later they summarized the meeting, as they
understood it: "You just need to drag and drop the CRM on the 'Brain' and then
you can book haircut appointments and open a bank account." We were like, um,
we develop and support software; they were like, just drag and drop your
knowledge management API on it too. Uh... what KM system?

The complicated part is that in small and midmarket business, knowledge is
tribal and processes are folklore. There are processes that are followed though
they don't exist on paper, and business rules are violated because no one is
aware of them.

Meanwhile, no one in sales puts useful information in a CRM system if they can
avoid it; the least-effort principle on administrivia is important if you are
ever going to hit your ever-growing number.

Finally, management must still be spooging for SPOGs; I still see marketing
for them as some sort of nirvana. Meanwhile, they have been collecting data
randomly about everything, all over the place: Excel documents, FTP servers,
and of course databases. Here, no one in IS/IT will admit that the servers were
here when they started their job and they have no idea what is on them or if
they are even needed anymore. They certainly aren't mentioning that to the
incoming CIO, who gets focused on improving efficiency as per his executive
mandate. So he/she gets to work on cloud migrations (more hilarity ensues).

I need to stop now. It's a rat hole...

------
leto_ii
One thing I haven't seen explicitly mentioned is the (probably) intrinsic
limitations of AI/ML in classifying/predicting human behavior. For a number of
years I have worked on fraud prevention tasks where the goal was to take in
some information about a payment and decide whether it was fraudulent or not.

Even though what we were doing was primitive and could have benefited from a
lot more ML, I suspect that even then the best that you could have gotten
would have been some sort of anomaly detection system that can catch a good
share of the kind of fraud that you have seen in the past, but will never be
very good at detecting an intrinsic change in fraudster behavior.

On top of this, especially when dealing with humans, you often are expected to
be able to explain why a certain decision was made. Setting payments aside,
think of predictive policing or sentencing decisions. In those cases ML is
essentially guaranteed to build in all sorts of biases regarding somebody's
race, gender, place of living etc.

------
dade_
I forget the name of that silly robot in the picture, Ginger or something.
Designed to take food orders and possibly deliver them to tables, they get
dragged out to bank branches and need to be supervised by employees the whole
time. It is bad enough that they could only provide the most trivial of
information (worse than a web search); IT also struggled to keep the things
connected to WiFi. Multi-billion dollar corps 'doing AI'.

------
s1t5
The article is just vaguely complaining about undefined problems which makes
it very difficult to defend or argue against. Are we talking about ML?
Optimization problems? Operational research in general? Automation? And which
tasks in which businesses? There's tonnes of useful stuff in each of those
categories and vaguely saying that it's all useless doesn't get you anywhere.

~~~
cerved
Seems pretty exclusively focused on ML.

------
rscho
I'm at a hospital where someone at long last got authorized to try ML on the
clinical database.

The ethics committee required that prior to using any data, you have to make
a static copy in another database. Their arguments are:

1\. They don't want Excel files flying around (which will happen regardless), and
and

2\. To perform any analysis, you "obviously" have to have "structured data",
which "obviously" means that you have to extract a CSV from the base system
(MongoDB) and put that into an RDBMS (REDCap).

Go figure...

~~~
ilaksh
Why is the ethics committee allowed to make technical architectural decisions
if they aren't on the technical team? I would resign.

~~~
danieltillett
Why are the ethics committees allowed to make any decisions?

When I was on one I would by default let everything through unless I came
across something where I thought I could help improve the experimental design.
My colleagues on the other hand would go through them like little emperors and
cause grief for the applicant just because they could.

------
andrewmutz
I disagree with the headline of the article, but I agree with the conclusion
of the article.

AI is in the process of having a huge effect in business software, it's just
not the type of business software most of us think of when we think of
business software.

Many people think of horizontal business software like MS Word, Excel,
QuickBooks, Salesforce, etc. Products like this will be hard to automate
significantly with AI, since every company is using these products slightly
differently. The products are intentionally designed to have as wide a TAM as
possible, and so they are general-purpose enough to do anything business-
related.

There is another very large group of business software that people don't as
readily know about, and that is vertical-specific business software. These
products are not designed to have a wide TAM, but instead to be tailored to
specific industries, and provide a ton of value as a result. These vertical
products are a perfect fit for automation with AI. The author says "Each
business process is a chance for automation", and in these products, these
business processes (and all their inputs and outputs) are structured and
represented in software.

I am building AI systems at a vertical software company right now and am a big
believer in the future of AI in these products. If you have ML expertise and
are interested in working on such systems, feel free to email me your resume.

------
rb808
Most AI guys I know are just interested in playing around in pilot projects
and learning stuff while they train to get a job at Google. It'll only be a
few years before managers figure out there will be very little delivered.

------
valine
The problem goes much deeper than researchers not having enough PDFs. If you
look at where machine learning is successful, it's usually processing spatially
related data. The pixels in a photograph have a spatial component, where points
near each other are more related than points far apart. Same goes for audio,
and even text: words in a paragraph that are close to each other are usually
more related than words far apart.

A spreadsheet has very little or no spatial component for a neural network to
learn, and the location of an important number in a PDF probably has little to
do with the number's significance. Without a spatial component to do pattern
recognition on, a lot of the recent advancements in machine learning, like
transformers or convolution, get thrown out the window.
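A toy sketch of why that locality matters (one-dimensional made-up signal,
NumPy only): a convolution only ever combines neighboring values, which is
exactly the assumption a spreadsheet breaks.

```python
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])  # a step "edge"
kernel = np.array([-1.0, 0.0, 1.0])                     # local difference filter

response = np.convolve(signal, kernel, mode="valid")
print(response)  # [-1. -1.  0.  1.  1.]

# Nonzero responses appear only around the edges, where neighboring
# values change; the flat interior maps to zero. If the meaningful
# cells of a document were scattered into arbitrary positions, no such
# local filter could recover anything.
```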

There are some machine learning problems that can be solved with more or
better data, but I don’t think PDF to JSON is one of them.

~~~
mrjin
IMO machine learning has nothing to do with learning. It might be able to find
patterns in the data set, which can be very useful for recognizing things like
license plates etc. But unfortunately, finding patterns does not mean
understanding them, which is fundamental to learning. Then the result is quite
obvious and simple: you don't know what you don't know, and you cannot
transform something you don't understand.

------
MattGaiser
AI is terrible at dealing with unexpected events. The games AI is good at are
relatively deterministic, i.e. all possible outcomes are known. Replicating
art and images is the same way.

If you could script new combat units in a video game on the fly or tweak the
rules slightly, the human would slaughter the AI when an equally skilled human
opponent would not lose so easily or even necessarily lose at all. You can see
this in games like Galactic Civilizations where you can build your own units
and unusual combos confuse the heck out of the computer opponent.

Same with cars. The entire approach is currently based around exposing the AI
to every possible outcome. I remember a seminar on AI Safety where the vehicle
AI had a problem with plastic bags in the air and it would swerve to avoid
them. No human would have an issue with that.

I worked in innovation for a bank, looking at automating all these kinds of
things, and I even spent a few days doing the jobs (and this was eye-opening).
I was a developer, so not a manager looking at a job spec, but someone who
would have actually done the work. 90% of the job could be automated, but 10%
was dealing with wacky exceptions, many of which they had never specifically
seen before. We had someone whose job was taking PDFs and extracting
tables of income and expenses. They were generally standardized PDFs, so that
seems like something good to automate, right?

Well, no, because tons of the financial advisors had added custom rows which
the person doing the input had to interpret into another column. It was quite
eye-opening: while the jobs were menial data entry, they were nowhere near as
mundane as one might imagine, as the guy was still making a judgment call on
whether to classify "farm income" under a person's investment income category
or whether to classify it as regular income for the purpose of investment
advice.

I have a friend currently on a robotic process automation internship with
another bank. Same issue. When the RPA dev people actually go and do these
jobs, they realize that the work frequently deviates from the approved job
spec, with the people in them making small but significant judgment calls.

It is not a lack of knowledge about what AI can do in either of those cases.
It is not a lack of data, as both banks have armies of people doing it and
millions of clients. It is that for AI to do the job, all manner of other
things would need to be standardized and reformed - and if that were done, why
use AI to solve the problem in the first place, as a lot of it would simply be
computational?

~~~
mywittyname
> You can see this in games like Galactic Civilizations where you can build
> your own units and unusual combos confuse the heck out of the computer
> opponent.

AIs will do the same to humans when trained against other machines instead of
being trained on human match data, since the AIs will try out things most
humans would think are illegal and thus never use in regular matches. Like
when Chamley-Watson first struck an opponent with his foil from behind his
back.

------
tarsinge
The essay makes a very good point about the availability of documents/data,
but from what I have seen working on ERP projects and business processes I
don't think attacking this problem top-down, from how businesses work, is a
good idea: a lot of the documents produced are mostly useless artifacts of
human interactions, and most businesses have highly inefficient processes.
Would you train an AI on current HR recruiting practices, for example? At its
worst you have businesses with so much sales power (a purely human process)
that their internal processes can be an absolute mess and things still go fine.
What would an AI trained on this data output? Meeting recaps that are never
read, from meetings of already questionable value in the first place? Random
reports and strategies meant to praise egos?

A huge part of the business world is service over service over human
interactions that are far removed from the core value production and are side
effects of these very interactions. Sure, at their core businesses must produce
something, but apart from industrial or software processes, the enterprise
world is mostly a giant social game, with success linked to the execution of
sales, marketing and sometimes lobbying - so there's not much to AI-ify from
that angle.

Edit: just to be clear on the tone, it's not meant to be bitter; in fact, after
a hard time learning this world, it can even be fun.

------
afeller08
AI is useful for business, and it's used in business. Hiring people to do
menial mental tasks that would be particularly easy to automate is cheap.
Hiring programmers and AI developers to automate those tasks is expensive. I
know people who lacked a CS degree and could barely program who transitioned
to work as programmers by getting hired to do a menial task and writing
mundane old-fashioned code to automate away their job, because that was the
only way they could find to transition from menial tasks to a highly skilled
occupation.

You don't even need AI to automate a lot of these tasks. Good old fashioned
programming can automate anything truly menial better than AI can, but if
you're going to solve a real problem through code there are only two ways to
do it: 1) write the code yourself, or 2) spend millions of dollars hiring
other people to do it.

The same is true for AI. In contrast, you can very often hire people to solve
the same tasks for minimum wage, or if it's a sufficiently digital task, even
less than that, through a service like mechanical chimp.

AI isn't used to automate away menial tasks because the economics of it
doesn't make sense. None of the problems raised by that article are difficult
to overcome; it's just expensive to hire people who can solve them well.

This has nothing to do with technology and everything to do with the current
organization of society and its economy.

------
paulus_magnus2
A rather lazy / uninformed article by someone who grossly underestimates the
complexity of what it would take to completely automate the business processes
of a company.

The title question is equivalent to asking: a robot cannot build a car by
itself, so is robotics useless for manufacturing?

Robots can build cars but we need to arrange them in an "alien dreadnought".

Businesses are not prepared to pay the price of full automation; what they
expect is to plug in some open-source AI run by a fresh graduate and fire all
the office clerks the next day.

------
poulsbohemian
When I think about companies I've encountered over the past few years, it
seems to me like the AI problem has been two-fold: 1) They didn't need AI. 2)
They would have been better off listening to the human experts they already
employed.

That is to say - when you look at business case studies of the kinds of
problems that businesses perceive they are going to solve, it's things like
supply-chain "We figured out when it's going to snow, we should have snow
shovels in stock!" Well, of course, and there are a whole lot of humans in
your company that already know this, but they aren't being heard.

A lot of the places where AI has worked out, like spell checkers and various
in-app automations - as the article and people in this thread indicate - are
exactly the kind of problems where more companies should probably focus their
energy. For example, I think about the various gyrations I've watched people
do in order to format their data the way they needed for presentations. Not AI
in the theoretical sense, but definitely time-consuming tasks that exist in
every business and that would save gobs of time and money if they could be
automated away. But so long as there isn't a clear profit motive, good luck
getting your project green-lighted.

------
roenxi
The obvious answer to me is that the hardware is only available to a handful of
players and the libraries aren't mature yet. PyTorch has been around for about
4 years; that isn't enough time for a lot of people to have gotten comfortable
with it.

The people who do have access to the software and hardware have found a lot of
uses for the tech - I assume AI basically is Google Image Search.

------
michaelbuckbee
It's sneaking in, just not announcing itself.

I used to work for a really well-known medical dictation/transcription,
documentation, and coding (in the medical billing sense) company.

They're using ML models all over for speech to text, document analysis, etc.

It enables some very real efficiency gains, but it's not positioned the same as
something like IBM's Watson and its somewhat ridiculous AI claims.

~~~
shermanmccoy
I'm surprised to hear this organisation is successfully doing ML speech-to-
text. Is it running 100% of volume in production? Or is it more of a pilot-
type thing? I know of a French multinational bank that tried for 2 years to
get ML speech transcription up and running, for transcribing conversations
with customers, but due to unreliable results recently put the project on ice.
Their experience was much along the lines of everything discussed in this
comment thread.

~~~
michaelbuckbee
I think the expectation of "drop some AI in" and do XYZ job is off in the same
way that "we're going to drop a humanoid robot in" and do XYZ job is off.

Where I've seen it used well is as a piece in a larger system of automation.
In the healthcare case, it's doing a first pass at transcribing an audio
dictation so that a transcriptionist can then start with a 90%+ accurate
document.

This is tough, their role shifts some (more editor/correctionist than true
transcriber) and not everyone makes that transition well, but the end result
is 2x+ efficiency gains.

------
pjc50
Reminds me of the more general paradox:
[https://en.wikipedia.org/wiki/Productivity_paradox](https://en.wikipedia.org/wiki/Productivity_paradox)

As one of the other comments points out, technology can only change
productivity when you change the _process_ , sometimes radically. And that may
mean restructuring the business.

------
sabujp
Because humans aren't very good at doing what they think they do, and people
who have little idea how to do it are going to make a good choice. So, it's a
good choice to have something that works on a large scale, even if it is just
an AI but that doesn't require that you use it for a lot of other things, you
will probably end up with lots of other things just to make the machine go
into something less complicated.

If the AI is so good that it can actually be useful for anything, people will
probably stop giving it advice on how to do what they want from a simple and
simple and simple task; this is the key difference between the AI and humans
at work.

In fact, when I asked about what people really want from a simple task and it
wasn't really a good one, I was very disappointed because it was so complex.
That is why I've come to believe it's really not that useful for anything
other than a little bit more automation.

------
StandardFuture
> No group of researchers can train a "document-understanding" model simply
> because they don't have access to the relevant documents or appropriate
> training labels for them

This is because you could rename deep learning as "over-parameterized
statistics". And statistics is just about building some model of the data.
That is the only thing "training" a model is for: discovering/optimizing a
statistical distribution (a distribution is a generalization of a function).
This means the entirety of deep learning is simply building highly complicated
statistical models of a bunch of data.

It is unlikely that this is equivalent to general intelligence found in
biology.

We could probably solve the AI problem if the _entirety_ of research were not
directed at deep learning. And it would also likely be far more valuable to
any organization or individual.

But, that is just the 2 cents of some random HN commenter. So, I will keep
dreaming.

------
nostrademons
AI has plenty of useful applications for business. Fraud detection,
forecasting, enterprise resource planning, logistics - all of these were
considered AI at one point.

The specific problem the author cites - digitizing data in PDFs - shouldn't
even require AI. It can be solved better by just inputting data in digital
form to begin with (like with a web form), and passing it around in a
standardized machine-readable data format. But the commercial real estate
industry is pretty backwards, and can continue to be pretty backwards because
its core competency is _ownership of the real estate_ , and digitization labor
is round-off error compared to the profits generated by it. It'll take a major
recession (and this current coronapocalypse may qualify) to create selection
pressures to weed out inefficient firms, and until that happens there's no
incentive for them to upgrade their processes.

------
aSplash0fDerp
With AI, the production of a result is not the same as an answer/solution.

Instead of a know-it-all, AI comes across as a guess-it-all (narrowing all
results by the criteria programmed), and as other comments stated, AI is good
at producing 99% maybes.

Looking past the hype, AI has thus far just added to the cacophony of noise
among so-called experts. (AI is not an expert.)

------
thrower123
Mostly because business isn't really that hard, and we're still struggling to
even define the rules it operates by.

AI always looks cool, but isn't very useful in practice. The past ten years
feel like everything has been keynote-driven development: get something nifty-
looking that demos well and try to shoehorn it into a business case.

------
sadmann1
The hardest things to automate are always the things that are so easy for us
they don't even register consciously

~~~
magicalhippo
While true for some things, it's not for others.

In our application, one key operation is that the user is required to classify
a line item based on the text description. There's a huge code list of
possible classifications, and the user has to pick one that is the most
correct.

This is definitely a task that registers consciously. And, while most of the
time it's fairly easy for trained users, there are often cases which require
extra thought or research.

For example, a T-shirt of mostly cotton vs a T-shirt of mostly synthetic
fibers should be classified differently. How would you know based on the
description "Small Womens V neck Short"?

~~~
ilaksh
It's impossible to know the fabric there. So humans would just guess and often
be wrong. That system needs an "unknown material" code.

~~~
magicalhippo
Right, so the user would have to do further research like contacting their
customer and ask.

Of course, an AI/ML system that could reliably classify a majority, and
reliably classify the rest as "unknown", would be interesting. Not sure how
close we are to that though.
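One common pattern for the "reliably classify the rest as unknown" part is a
reject option. A minimal sketch, assuming a scikit-learn-style classifier (the
model and threshold here are hypothetical, not a claim about any real system):

```python
# Reject-option wrapper: accept the model's label only when its
# confidence clears a threshold, otherwise route the item to a human.
def classify_with_reject(model, descriptions, threshold=0.9):
    proba = model.predict_proba(descriptions)  # shape: (n_samples, n_classes)
    results = []
    for row in proba:
        best = row.argmax()
        label = model.classes_[best] if row[best] >= threshold else "unknown"
        results.append(label)
    return results
```

The hard part is of course the one described above: on a description like
"Small Womens V neck Short", a well-calibrated model should land below the
threshold.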

------
jqpabc123
Because at this stage of the game, "artificial intelligence" is still an
oxymoron.

It's really just a database developed through trial and error (aka "training")
that we "hope" contains enough differentiating data points to produce a
reasonable, weighted "best guess".

------
dcolkitt
I forget who first made this argument, but it basically was a response to
critics of philosophy. Critics would challenge the field by asking if
philosophy has made any real contributions to human knowledge. Has it actually
discovered anything that's both non-obvious and conclusively true?

And apologists would respond that philosophy has made huge, unambiguous
contributions; it's just that once this happens, those fields tend to no longer
be considered "philosophy". Astronomy, physics, economics, and logic were all
sub-domains of
philosophy originally. Once they were formalized, with rigorous, specialized
methods they moved into their own standalone fields. But it was philosophers
who laid the foundation. Consequently when we think of "philosophy", there's a
lot of selection bias, because it's basically the subset of open unsolved
problems that remain.

I think there's a close analogy here with what we think of as generalized
"business problems". There are many specialized sub-fields like finance,
logistics, marketing, industrial psychology, and accounting. All of those
things used to be thought of as a generic part of business. But eventually
domain-specific methods and technologies led to the point where specialized
practitioners unambiguously out-performed generalist C-suite executives.

Think of techniques like Markowitz portfolio optimization, or five-factor
personality testing, or applying Benford's law to profile for accounting
fraud. Those are all examples where something like AI/ML solved what at the
time was a generic business problem. But afterwards it was considered just a
success of the respective sub-field that those techniques helped create.
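To make the Benford's law example concrete, a minimal sketch in plain Python
(the amounts are invented): leading digits of naturally occurring financial
figures tend to follow P(d) = log10(1 + 1/d), so persistent deviation from
that curve is a screening signal.

```python
# Benford screen: compare observed first-digit frequencies of reported
# amounts against the logarithmic distribution. Amounts are made up.
import math
from collections import Counter

amounts = [4213.50, 1875.00, 1204.99, 987.10, 1540.25,
           2999.00, 1100.00, 3420.75, 1012.00, 8125.40]

first_digits = [int(str(a).lstrip("0.")[0]) for a in amounts]
observed = Counter(first_digits)
n = len(amounts)

for d in range(1, 10):
    expected = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed[d] / n:.2f}, expected {expected:.2f}")

# Large, persistent deviations (far too few leading 1s, say) are a
# red flag worth an audit, not proof of fraud by themselves.
```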

The point I'm making is that formal rules-based processes (I won't use AI/ML
here because it's so ambiguous, especially in a historical context) have had a
long history of success in business. We just don't recognize it because we're
begging the question. What we think of as "generalized business issues" is
mainly those open problems that haven't yet succumbed to specialized formal
techniques.

------
sabujp
The author suggests that AI is _something_.

That is, if you think you have a better idea of what you want to do, that
isn't the right question. I would say that if you can actually do better than
an AI-based version of a product, then it's still a useful tool and it can be
improved to be used.

For example, if you say this is AI because a particular useful feature does
not exist, then you are actually talking about AI because someone is using
it; it's just that it's useful.

But that statement captures only part of what you mean. That is, if your idea
isn't useless, in the sense that it is useful in a specific way, you are
simply asking how you did it.

So, to answer the question: why is AI useless for business?

------
yamrzou
Many real world business processes assume a certain knowledge of the world and
the relationships between entities, and not just a limited set of data points
about the task at hand. Such kind of knowledge is not yet incorporated into
today's ML systems.

------
gwern
Being stuck in a human-shaped local minimum seems to be part of it. AI and
software _could_ do far more than they do, but everything around them assumes
systems that evolved around humans, which destroys the potential for
software. So if you look only at where AI can be wedged into the cracks,
you'll conclude it's largely useless; but if you can replace whole systems,
you get much larger gains: [https://www.overcomingbias.com/2019/12/automation-as-coloniz...](https://www.overcomingbias.com/2019/12/automation-as-colonization-wave.html)

------
indymike
People try to apply AI to high-risk problems that smart people can't solve.
When AI is applied to lower-risk problems that are usually easy for people to
solve, we seem to get great results (e.g. recommendation engines).

~~~
h91wka
Never in my life have I encountered a good recommendation engine, let alone
a great one.

~~~
arethuza
I seem to emit some kind of anti-AI field: voice recognition works about 40%
of the time, so it's basically useless; recommendation algorithms recommend
stuff I have already watched, or seem completely random. As for the
mechanisms that select adverts, Youtube seems to be taking the approach of
showing me the same adverts again and again and again (and yes, I do click
thumbs down on them) until I passionately hate the products being advertised
to me (watching videos about cars does not mean I want to watch the _same_
BMW 8-series advert for a few months).

~~~
shermanmccoy
The recommendation algorithms seem to be just the most simplistic type of
pigeonholing. Take Netflix: just because I watched a European political
thriller last week, it is assumed I now want to watch that genre for
eternity.

------
synthc
Machine learning requires a large high-quality dataset, which a lot of
companies simply don't have. Building one takes a lot of time and money. The
gains don't outweigh the costs in many cases.

Another problem is that machine learning models are never 100% correct and
are not easily interpretable, so they cannot be used for some critical
processes. Good luck explaining to a customer why his account was blocked due
to a false positive made by the AI.

I think there is still a lot of potential for boring symbolic AI: in a lot of
domains you can get results quickly and reliably, and if the AI is wrong,
it's easy to debug.
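
As a sketch of how debuggable that kind of symbolic system can be, each rule
can carry its own explanation; the rules themselves are invented here purely
for illustration:

    # explicit rules that return both a decision and the rule that fired,
    # so a false positive can be traced and fixed directly
    RULES = [
        ("country mismatch", lambda t: t["ip_country"] != t["card_country"]),
        ("large amount",     lambda t: t["amount"] > 10_000),
    ]

    def review(transaction):
        for name, predicate in RULES:
            if predicate(transaction):
                return ("flag", name)  # the explanation comes for free
        return ("allow", None)

    decision, reason = review({"ip_country": "DE", "card_country": "US",
                               "amount": 50})
    print(decision, reason)  # flag, country mismatch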

~~~
aSplash0fDerp
By the time you read about it, it's old news.

They have been flooding many of the outlets with counterfeit emotional
capital (social/MSM) and counterfeit intellectual capital (pseudo-
science/education/politics) for quite some time now.

As long as the "I" or the "L" in those fancy acronyms know the difference,
the quality of the data will not be in dispute.

Even without AI/ML, it's been obvious how bad data integrity is in the most
basic sense of the meaning.

------
resiros
Here is a prediction: This article will not age well. Asking why "AI is so
useless for business" in 2020 is like asking why can't I easily order clothes
from the internet in 1994. The question is simply 5-10 years too early. The AI
startup rush started maybe 2 years ago. Pytorch (the Netscape of ML) was only
released 3 years ago! It's simply too early to make any judgment.

Let's wait 5 years and see. I predict that all the business processes he
mentioned will be automated (maybe with mechanical Turk oversight). In 10
years, most of the menial desk jobs will not exist.

------
sabujp
The best thing about technology is that it keeps getting more and more
sophisticated across the industry. It seems like there is some sort of big
disruptive force at work at this point, but the big innovation seems to be
the technology being used to create and manage the most value. For example,
for a video game, I imagine a team of people working on the game could create
games using some AI (maybe for 3D games) to generate the most value, get some
of the benefits of those games out there, and then get real value out of it.

------
Grimm1
Huh? Most major companies use a staggering amount of AI that makes them a butt
load of money. -- Oh, the title was practically unrelated to the content of
the article and was just to generate clicks, I see. Well, at least the article
raises a good point about AI being used to solve menial tasks that let people
focus on the larger creative aspects of their work as an assistive tool. That
said, seeing ML push the boundaries of what "menial" (sliding goal post) tasks
it solves is both massively cool and massively value generating.

------
plaidfuji
This article is really focused on the question “why is AI so bad at data
extraction from PDFs when it can beat humans at Go?” and it does answer its
own question toward the end. AI is very good at inverting simulations (chess,
Go) because you can generate an infinite corpus of perfectly labeled data. It
is bad at inverting document creation because there is no exhaustive MS Word
simulator. Soon people will realize that applied AI is really an exercise in
simulation design.
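
A toy version of the "inverting a simulation" point: when you own the forward
simulator, perfectly labeled data is free in any quantity. Here the forward
model is projectile range as a function of launch angle, purely as an
illustrative stand-in:

    import math
    import random

    def simulate_range(angle_rad, v=20.0, g=9.81):
        # forward model: range of a projectile launched over flat ground
        return v * v * math.sin(2 * angle_rad) / g

    def make_dataset(n):
        # perfectly labeled (input, label) pairs, as many as we want
        data = []
        for _ in range(n):
            angle = random.uniform(0.0, math.pi / 4)
            data.append((simulate_range(angle), angle))
        return data

    train = make_dataset(100_000)  # the "infinite corpus", on demand

There is no analogous generator for arbitrary business documents, which is
the article's PDF problem in a nutshell.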

------
perculis
The article fails to address the bigger issue: if enough data is provided and
the information becomes clear, what then is the result?

Without a general A.I. (which we are a long way from), how can the work be
done, and what will it mean if it can be done? We’d like to think that we’d
be more productive... that we’d create new and better “things”... but if
history is any guide, we’d simply focus on straightforward profit-generating
bullshit...

------
metreo
A lot of this has to do with a move away from solid statistical workflows
within an already trend-prone field. Computer Science departments have only
widened the cultural gap between their work and that of the statisticians
(you know, the ones you need around to explain what your model is doing).
Hiring for ML/AI sounds better than hiring a bunch of statisticians, who
cannot be expected to deliver product.

------
CaptainActuary
There is an implicit, but, in my opinion, wrong assumption here: that AI
should be able to do tasks like extracting data from PDFs or converting Excel
spreadsheets into some other format. Nothing about these tasks requires
intelligence; a fixed process solves the problem. Asking AI to extract data
from a PDF is akin to asking it to develop a process from vague inputs - a
far cry from what even the most advanced AI systems can do today.

------
otabdeveloper4
Because the so-called "AI" is only good for solving classification problems.
Classification problems are great for art, but useless for business.

Business needs to solve the prediction (i.e., regression) problem, which is a
completely different kettle of fish.

P.S. Of course by "AI" I mean the 2020 definition of "multilayered neural
network".

------
StonyRhetoric
This is clickbait (1) to promote his startup, Proda.

ML is used in business workflows all the time - to date, I have built several
solutions that are in use by 53 clients, internal and external.

Here is what makes B2B ML hard: People have to trust it.

This isn't some movie-recommendation engine, which spams you with more bank
heist movies after you watch one. B2C ML systems can get it wrong, and
customers are generally forgiving, because it's a low stakes game. B2B
applications are generally higher-stakes, because they impact business
workflows, and if someone has decided to automate it, it's probably a high-
volume, critical workflow. It has to be extremely accurate, and demonstrably
better than the equivalent human system.

The problem has to be well-defined enough that an ML system can act with high-
accuracy, but not well-defined enough that a rule-system could replace it.
Don't use ML if a rule-system will do a better job. (For those scenarios, you
can still put an ML anomaly-detection system in place to make sure the rule-
system is still valid, and to guard against changes in the input data.) As
just mentioned, the
problem also has to be important enough and high-volume enough to warrant an
ML solution. The percentage of problems that fulfill these criteria is not
very large.
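
A minimal sketch of that rule-system-plus-anomaly-guard pattern; the field
meanings, thresholds, and the choice of detector are all illustrative
assumptions on my part:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def rule_system(record):
        # the actual business logic stays simple and auditable
        return "approve" if record[0] < 5000 else "review"

    # historical inputs the rules were designed against (made-up numbers)
    historical = np.random.RandomState(0).normal(1000, 300, size=(500, 1))
    guard = IsolationForest(random_state=0).fit(historical)

    def process(record):
        if guard.predict([record])[0] == -1:  # -1 marks an anomalous input
            return "escalate: input unlike the data the rules assume"
        return rule_system(record)

    print(process([1200.0]))    # ordinary input, the rules apply
    print(process([250000.0]))  # out of distribution, escalate to a human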

Now to actual ML development and deployment - the model is the tip of the
iceberg. The rest of the iceberg is data acquisition, feature selection,
data/feature versioning, automated training, CI/CD, model performance
monitoring, et cetera. If ML is being developed inside a software development
organization, this isn't a problem, most people will understand this. If it is
being developed within an embedded BI team inside a business unit - they will
generally not have the support/runway needed to build the full system. The ML
model might make it to production, but it will probably run naked, be brittle,
and hard to retrain. A dramatic failure with business impact is just a matter
of time.
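
To make "the rest of the iceberg" concrete, even minimal performance
monitoring is its own moving part. A bare-bones sketch (the class, window,
and threshold are placeholders I invented, not anyone's production code):

    from collections import deque

    class AccuracyMonitor:
        def __init__(self, window=500, floor=0.90):
            self.outcomes = deque(maxlen=window)
            self.floor = floor

        def record(self, predicted, actual):
            # call as delayed ground truth arrives from the workflow
            self.outcomes.append(predicted == actual)
            if len(self.outcomes) == self.outcomes.maxlen:
                acc = sum(self.outcomes) / len(self.outcomes)
                if acc < self.floor:
                    self.alert(acc)

        def alert(self, acc):
            # wire this to paging/retraining; printing is a placeholder
            print(f"rolling accuracy {acc:.2%} below floor")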

There are a lot of low-code, no-code ML solutions that have been developed, or
are being developed, and some of the supporting infrastructure as well, but,
at the risk of sounding parochial/protectionist, you need a rock-solid, end-
to-end, integrated, data management system that is fully understood by
whoever needs to pick up the phone at 2AM. It's the interfaces that are hard,
and chaining together a bunch of third-party black-box systems just means more
interfaces and behavior you don't control. Choose and use these systems
wisely.

So yeah, B2B ML is hard. But it's generally not due to lack of data, and
transfer learning is generally not necessary. Understanding business processes
is important, I agree, but that's comparatively easy. It's what consultants
have been doing for decades. The hard part is choosing a problem where ML can
add value, and then executing on it with enough integrity that people will
actually trust it.

(1) Ok, clickbait might be harsh. But it is self-promotion, and the article
itself is a collection of generic banalities. I feel it falls on the wrong
side of the line.

------
throaway435912
That's easy: the business people don't understand "AI" and the "AI" people
don't understand the business.

Well, the business people actually do understand AI, but their understanding
of it is that it is a marketing tool they can use to sell to customers and/or
investors. And in terms of doing that, AI works very well.

------
sabujp
The reason seems fairly obvious to me. It's the same thing as how we do
things and do them well - and that's very different from other things in the
industry. For example, why would a doctor call you if he says you are not
making a diagnosis, when he isn't acting as a doctor: you have to be a
software engineer. A doctor could come to you, ask a question about the
symptoms you are experiencing, and find out whether the doctor is making a
diagnosis. He has to understand that doctors use the AI to do everything they
can to avoid that problem. His job seems to be to get insurance and
medication coverage.

You might think about this a few times - if you are able to make an AI out of
the box, maybe you have all of the necessary knowledge to get it out as well.
Just as a computer can make a database out of a document, a programmer could
apply all the necessary knowledge to get that system working. But most
software engineers are like carpenters - no amount of math or programming
will change that. They probably also have all of the knowledge necessary to
make a car, so how about a car that can take a picture, and a car that can
run the calculations of the wheels?

------
noxford1
This reminds me of DataRobot laying off people "because of covid" right after
finishing a 300 million dollar round. These AI companies have lied about
their valuations for a while and it's catching up to them.

Ironically enough, after DataRobot did the layoff they also completed a large
acquisition and hired more executives.

------
xondono
AI has become especially hard to define now that we see companies using
advanced ML techniques to solve issues that were perfectly solvable through
linear regression modeling, just because that’s the path to that sweet
investor money.
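
For reference, the baseline being skipped over is only a few lines (toy
numbers, purely illustrative):

    import numpy as np

    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([2.1, 3.9, 6.2, 8.0])

    # ordinary least squares via the normal equations
    Xb = np.hstack([X, np.ones_like(X)])  # add an intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    print(coef)  # slope and intercept; often that is all the problem needs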

------
dancemethis
Because the very concept of "business" is supposed to be boring and not very
intelligent (which doesn't mean it doesn't require knowledge in its field,
it's just not... alive).

~~~
libertine
What?

------
orionblastar
When I did business intelligence programming for a law firm, we used
statistics and Six Sigma to figure things out. It is all about crunching
numbers in spreadsheets, or linear algebra.
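
For flavor, the kind of number-crunching that involves - a basic 3-sigma
control check, the workhorse of Six Sigma-style monitoring (the numbers here
are made up for illustration):

    import statistics

    baseline = [102, 98, 101, 99, 100, 97, 103]  # made-up metric history
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)

    new_value = 140
    if abs(new_value - mean) > 3 * sd:
        print("out of control:", new_value)  # flag for investigation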

------
Dotnaught
As per the required SEC disclaimer, past performance is not indicative of
future results. AI is great at spotting patterns that conform to past
performance. Not so much when things change.

------
wmnwmn
Well, regular intelligence isn't all that useful in business either. There
are so many factors in business success; intelligence is just one, and
usually not a very big one.

------
ollo
I am an AI researcher, and I would love to investigate useful systems for
business, but I have no idea about the business processes that this article
mentions.

------
PaulHoule
Just ask a business person to get you a training set and it will be a while
before you hear from them.

------
baybal2
> Why Is Artificial Intelligence So Useless for Business?

Because it doesn't make money? A big enough revelation?

------
nnq
Unpopular opinion: AI will start being useful to business when it starts
being used to re-organize and re-architect core business processes... _Not as
part of existing business processes - the "augmentation" will never offer
that much!_

It will be when AI systems decide _who to hire_ and _who to fire_ and _who to
promote and demote_, or what other companies to acquire or to merge with -
profit will be increased, and almost everyone will _hate it!_

It will be when huge merged megacorps' AI systems gain monopolies and replace
free markets with centralized planning systems that actually outperform
markets ("socialist planning" can't work because it can't work with
_humans_... bringing in "other" types of intelligences will change the game,
and nobody will call it "socialism" because it will _not even try to benefit
the people_ this time around - and there will still be markets, just likely
HFT-style ones that block direct human actors from playing in them) _...and
most will hate it and likely wage war against the societies that embrace it
this way!_

You'll see AI stop being useless to business, don't worry... but it will come
with many consequences and side effects; our society as it is _can't_ handle
it!

~~~
JoeAltmaier
Silly conclusion. AI will also be able to answer the question "How can I
change to further my career?". An answer that human overseers have a terrible
time with.

And if you do what the AI suggests, it will work and you will prosper.

------
blickentwapft
New technologies are overestimated in the short term and underestimated in the
long term.

------
erfgh
Because AI is not really AI as it was meant in the 60's. The capabilities of
computers, algorithms and human researchers were vastly overestimated back
then.

But nowadays we find that the AI buzzword sells really well, so we have
lowered the bar further and further until almost any algorithm qualifies as
AI (and any machine qualifies as a robot).

------
sarthakjain
Not trying to just put in a baseless plug, but most of what you say can be
refuted if you try out our product. Go here: [https://nanonets.com/ocr-
api/](https://nanonets.com/ocr-api/)

------
6gvONxR4sf7o
The entire premise up front is false, and that's probably a primary culprit:
expecting ML to do things it can't yet do, by extrapolating from what it can
do today (after reading current capabilities through a filter of marketing
hype):

>Today's work in artificial intelligence is amazing. We've taught computers to
beat the most advanced players in the most complex games. We've taught them to
drive cars and create photo-realistic videos and images of people. They can
re-create works of fine-art and emulate the best writers.

Today's work in ML _is_ amazing.

> We've taught computers to beat the most advanced players in the most complex
> games.

Not true. You can spend a zillion dollars on self-play to get an AI
superhuman at games simple enough that you can simulate them at many, many
times real-life speed, but we're only now learning to handle games like
poker, which intuitively seems less intellectual than Go or chess - and so
does Starcraft, yet it fell after those other games. In ML, placing tasks in
order from achievable to currently impossible can be really unintuitive for
lay people.

> We've taught them to drive cars and create photo-realistic videos and images
> of people.

No again. We're getting there with cars, but it turns out that driving is
really, really hard. Harder than playing superhuman chess! People who play
chess worse than computers can still drive cars better than computers. Weird,
right? Again, in ML, placing tasks in order from achievable to currently
impossible can be really unintuitive for lay people.

We _can_ make photorealistic pictures of people, but we're sorta limited
(it's complicated) to faces at high resolutions, and only really, really
recently have we been getting them without weird artifacts. But the face is
the most complex part of the body, right? So the rest should be easy!

> They can re-create works of fine-art and emulate the best writers.

This is soooo much of a nope, and you know what I'm going to say anyway.

This xkcd is always relevant, even if the bar has moved. Maybe it's even
harder because the bar is moving quickly.
[https://xkcd.com/1425/](https://xkcd.com/1425/)

> In CS, it can be hard to explain the difference between the easy and the
> virtually impossible.

In ML, we're really good at some tasks, so it seems like we should be good at
adjacent tasks, but that's not how it works.

