
The new business of AI and how it’s different from traditional software - gk1
https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-its-different-from-traditional-software/
======
monkeydust
About 6 years ago I invested as an angel in an ML-as-a-service startup. The
proposition was that they had a very nice workflow tool to take data, scrub
it, partition it, build models against it, view performance, and deploy
through an API. At the time there was nothing else out there that was as
polished and understandable to a business person. Then, very rapidly, the
field got democratized. People were firing up notebooks, TensorFlow popped
into existence, and their edge eroded. They were pitching to teams who had just
hired data scientists who, of course, didn't want to buy something that made
their job look easier! The other reality, which ties to this article, is that
they had to do a lot of hand-holding with clients, so the service cost became
high. This wasn't custom model development, more training on the platform, and
they eventually worked out a way to package it into the sale as they got
established. The company in the end changed course: they built out the product
to focus on a single vertical that was getting traction in finance. This was a
hugely contentious change and they lost some people along the way, but it
ultimately saved them from being wiped out. In the end they sold out to a much
larger player in that vertical who had no AI strategy, and we all made a
decent if not mind-blowing return on our investment. Lots of lessons learned
along the way, and it was a fun ride overall.

~~~
thaumasiotes
> They were pitching to teams who had just hired data scientists who of course
> didn't want to buy something that made their job look easier!

I don't really follow. Nobody thinks a guy driving a forklift is working just
as hard as a guy lifting crates by hand. But he's doing more lifting! Why
wouldn't you want your job to look easier? The easier it is, the more you can
do.

~~~
mistrial9
Imagine for a minute that you are new on Earth, first day, and you are in a
car as a passenger on a highway. You watch the driver look, hold the wheel,
use the brake and drive.. It seems very important what the driver is doing, so
you notice many small details. However, you have no idea how much the car
weighs ! or on a hill that force changes ! You watch the driver so carefully
but have no idea about the basic weight of the car..

Now, you are a business person in a room with your thoughts. A presenter talks
and you listen, but you watch the posture, the haircut, the order of speech ..
very carefully But you have no idea about how the data moves, or the learning
curve to use the tools, or even more how to innovate against "ordinary" .. you
have no idea ! How could you.. so you watch the presenter carefully..

Basically, everyone has an idea of lifting a box.. so no matter how detached
your life in the office is, you can appreciate a fork-lift. However, you have
no idea about thirty years of *nix development, the toolkit evolution, the
language wars.. etc

Here is the hard part -- many small-minded people (who run money) think
NOTHING of learning.. it's not important.. they care about control of the
situation, and who gets the profit. Some leaders actually cultivate a smug
disdain for "workers" .. some of those leaders have money.. etc...

Know this well - it is hard to believe how true it is, but perhaps you will
find out over the years.

~~~
thwarted
That's another perspective on bike shedding.

[http://bikeshed.com/](http://bikeshed.com/)

------
mbesto
Ya know what, AI might be the most hand-wavy term ever adopted in the tech
world (and that's compared to big data, IoT, blockchain, etc.).

No one can really define what precisely AI is. In autonomous driving alone,
there are 6 different "levels" of AI. If I write a decision tree algorithm, is
that AI?
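
For reference, the decision tree in question is a handful of lines with scikit-learn (a toy sketch; whether fitting one counts as "AI" is exactly the definitional problem):

```python
from sklearn.tree import DecisionTreeClassifier

# A toy "decision tree algorithm": learn XOR from four labeled points.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict([[0, 1]])[0])  # → 1
```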

My point is, you can use the term AI to justify virtually anything when it
comes to the value of software, which makes articles like this not very
valuable. Why not just call it "this is what software is capable of doing for
a business"?

One thing is for sure about "AI", and really just advancements in software:
repeatable human tasks are and will be replaced by automated systems. The only
thing left for non-software business interaction will be dealing with other
human beings.

~~~
visarga
If you take a look at the papers in this field[1] you will see they almost
never mention the words 'artificial intelligence'. It's 99% in the press,
peddled by journalists trying to exploit every angle of sensationalism and
fear.

[1] [http://www.arxiv-sanity.com/](http://www.arxiv-sanity.com/)

~~~
sacado2
It's because "artificial intelligence" is a very broad expression that doesn't
say much when you're in the domain. You won't find the expression "computer
science" a lot either; that doesn't mean "computer science" is a
sensationalist expression only used by journalists.

------
codingslave
This is a really good article. One company that I always think of when I think
machine learning is the computer vision "startup" Clarifai. At one point in
time they were cutting edge, filling the need for large scale image
classification that enterprises had. This was when computer vision neural
network architectures were rudimentary and hard to train (they still kind of
are). Then in-house data science teams sprang up, tooling got better, better
network architectures came out, and Clarifai essentially lost their edge
overnight. Machine learning in itself is basically never the edge; it has to
be a unique data set or a sticky user base, something else that builds a moat.

~~~
m_ke
I was one of the first few employees at Clarifai so I can add to this.

When Matt and Adam started the company there was no TensorFlow, and outside of
the Hinton/Bengio/LeCun triangle nobody was doing deep learning yet. Matt had
just beaten Google on ImageNet (around the time he was doing an internship at
Google Brain under Jeff Dean) and was one of the few experts in the field, as
he was lucky enough to have Hinton as his master's thesis advisor at UofT and
did a PhD at NYU under Rob Fergus and Yann LeCun.

We had a clear technological advantage for about 2 years, but thanks to the
open nature of deep learning research (arXiv and a willingness to open source
code) the whole field caught up. Google giving out TensorFlow and pretrained
models for free made a bit of a dent as well.

A big problem with the "machine learning model as a service" business model is
that each customer has slightly different needs and a different source of
data, so an off-the-shelf solution is usually not good enough. Because of that
you end up forced into doing consulting for large companies that don't know
how to hire machine learning engineers. You spend months building them a model
that they won't even know how to use, then move on to the next customer.

IMHO there are only two viable AI business models right now:

1. Spin out your research lab into a company, keep churning out papers without
ever thinking about having real customers, and make enough noise to get
acquired by FAANG, à la DeepMind, MetaMind, etc.

2. Find a problem with a lot of repetitive manual labor and slowly wedge a
machine learning model into the process. Design a good feedback loop by first
augmenting the workers, then use their feedback to keep improving the model
until it's good enough to replace them. Doing so requires you to actually
build a real product in that domain, so you'll need a much more diverse team
than a paper mill. Most companies doing this shouldn't even call themselves
"AI" companies, since their customers don't care how the solution works as
long as it solves their problem.

I believe that companies pursuing option 2 will take over a lot of large
established players, because building machine-learning-driven products
requires buy-in from the whole organization, and that's not easy to get at
large enterprises. You need to design the product in a way that helps you
collect feedback, build the data infrastructure to collect it, and be willing
to accept a solution that won't always be right.
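
The augment-then-replace loop described in option 2 can be sketched roughly like this (a minimal sketch; `handle`, the toy model, and the review function are all illustrative stand-ins, not any real product's API):

```python
def handle(item, model, human_review, training_data, threshold=0.9):
    """Route one item: the model answers when confident, a human otherwise."""
    label, confidence = model(item)
    if confidence >= threshold:
        return label                         # model handles it outright
    corrected = human_review(item, label)    # worker, augmented by a suggestion
    training_data.append((item, corrected))  # feedback used to retrain later
    return corrected

# Illustrative stand-ins for a real model and a real review UI.
model = lambda item: ("spam", 0.95) if "$$$" in item else ("ham", 0.6)
review = lambda item, suggestion: "spam"     # the human's correction

data = []
print(handle("win $$$ now", model, review, data))  # → spam (no human needed)
print(handle("hi mom", model, review, data))       # → spam (human corrected)
print(data)                                        # → [('hi mom', 'spam')]
```

As the corrections in `training_data` accumulate and the model improves, the threshold routes fewer and fewer items to humans.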

~~~
allovernow
Awesome to hear from somebody close to the source. There's a third option.
There are entire industries where automation has not been practical because of
the need for many hardcoded rules, problems outside of bland image
recognition, including those addressed by novel architectures like GANs and
autoencoders. ML has finally gotten to the point where complex heuristics can
be learned, automating tasks that were impractical previously. Now you can
legitimately write and sell ML services that meaningfully analyze, catalogue,
and search data in ways that are currently human-intensive, but just unique
enough that regular programming won't cut it. There will be a proliferation of
such businesses in the near future; the first wave is in development now. I've
said it repeatedly, and I'll say it again: we are on the verge of an
internet-like change in society. ML is poised to take human endeavors to new
heights in the next decade, and if innovation and hardware continue to
progress, I think the recent proliferation of the ML model zoo and associated
theory has given us the foundational tools for true AI, which we may see in
our [distant] lifetimes.

~~~
wensheng
I am not sure what the difference is between this and "option 2" in the post
you replied to. Some examples would be helpful. It seems to me both options
replace human-intensive tasks.

------
AndrewKemendo
"AI companies simply don’t have the same economic construction as software
businesses"

Last I checked, "AI" is software.

Nowhere in here did I read anything about the fundamental truth of modern AI
in production:

[AI] is a feature inside a successful product.

There are no "AI" companies, or "AI" products. There are companies which
provide services to do inference on data, or in some cases tools/platforms so
you can do it on your own.

They also confuse me by using the term "services" to mean bespoke
non-recurring engineering efforts, instead of including something like a
REST-based API service, which reflects most "pure" AI companies. Amazon
Rekognition or MS Cognitive Services are perfect examples of SaaS-like AI
services, but they aren't exactly products, because they are used inside some
other product as a feature.

At the end of the day, if you look at successful uses of AI in products, they
are one tiny piece of a larger product, helping that product scale where it
couldn't before. That's pretty much the only place where it is proving very
successful.

Even then, more and more of that is being done in house with the rise of
things like Sagemaker and other turnkey ML inference tools.

~~~
calebkaiser
I agree that this is a common misconception people have about ML and how it
fits into our world. If you want to see successful ML products, you don't have
to find some AGI stealth mode startup—just look at your phone:

- Gmail's Smart Compose
- Netflix's recommendation engine
- Uber's ETA prediction

ML functionality is becoming a standard feature in software, and in my
opinion, that's the real "ML Revolution." As you mention, turnkey inference
tools like Cortex (full disclosure/shameless plug: I'm a contributor) are
making this accessible to virtually all eng teams.

~~~
ec109685
The AI companies the article is referencing are trying to provide exactly
those things (smart compose, recommendation engines, etc.) to customer
companies without an army of PhDs. However, the article makes it clear that
doing that in a general way is hard, much harder than a normal SaaS business.

~~~
AndrewKemendo
It's a distinction without a difference.

The services that ML-based SaaS companies are offering are currently harder
than the majority of other services.

However, you could have said the same about any number of services/products
over the years that were early on the adoption curve, which I will state
confidently is where we still are for ML.

At the end of the day it's a hard SaaS, just like VoIP and search and
geoservices once were.

~~~
ec109685
Yeah, the article doesn't say don't do an ML SaaS, just that it won't have 80%
margins like a typical SaaS business, and that you have to specialize to get
any sort of scale, or it's just a services business.

------
wildermuthn
I like the hard-nosed business evaluation of this article. However, the
difficulties of a successful “AI” company are the same difficulties a “web”
company had in the 90s. How many “World Wide Web” (back when we used that
phrase) products survived the bubble? And how many people made a lot of money
while creating software of very little value? People understood, correctly,
that the “Net” (sorry, can’t help my nostalgia) was a breakthrough technology
that would change everything. But very few people understood how to use it
effectively in that moment to provide business or consumer value.

The same is true today for deep learning models. The tech has advanced to the
point where there are clearly products waiting to be built upon advances that
are largely open-sourced (if not pretrained). But building a product that is
useful hasn’t changed from being a very difficult task to easy simply because
a new technology exists. Moreover, as the article points out, costs in terms
of dollars and manpower are currently higher than a making another web or
mobile app.

All this should make hackers salivate. The higher the challenge and the more
advanced the stack, the greater the opportunity becomes.

A useful way of thinking about this moment is that we have another platform to
launch products from: first desktop, then web, then mobile, now “AI”. Each
platform enabled a new kind of product. Unlike the other platforms, which were
essentially hardware mechanisms of software delivery, ML/DL is a software
mechanism of delivering software. It is a fundamentally different way of
creating software that produces fundamentally different kinds of software —
the kind you couldn’t make any other way.

The article is right to throw the cold-water of reality on the hype of AI. But
the backdrop is one of breakthrough technologies that are just waiting to be
leveraged.

~~~
tkgally
> the difficulties of a successful “AI” company are the same difficulties a
> “web” company had in the 90s.

I had a similar thought reading the comments here about AI hype. In the mid to
late nineties, there was a lot of hype about the Internet, and a lot of smart
people dismissed the Internet as nothing more than hype. In the end, though,
the Internet changed our world. I suspect that, eventually, AI will do the
same.

------
allovernow
It is increasingly clear to me that the vast majority of programmers do not
understand the profound difference between modern machine learning and
software development.

AI may just be a buzzword, but neural networks and ML are not. The tech has
just arrived, this is fresh, evergreen territory, and the first wave of
applied machine learning is quickly approaching, in quiet development across
industries. Things are about to get _really_ interesting with the academic
explosion of neural network architectures and theory, and I would strongly
recommend to any developer to spend a few months getting up to speed!

~~~
raducu
> I would strongly recommend to any developer to spend a few months getting up
> to speed!

As someone who spent more than a few months on a couple courses on various
platforms, a few months is NOTHING in this field.

ML is not something most people out of bootcamps can make a living from.

ML is the ultimate winner-take-all technology.

Sure, there might be some fast.ai examples of self-taught ML scientists, but
it is orders of magnitude harder than software development, mainly because of
the much longer learning feedback loop: you can't just printf or debug your
code and expect to learn something new every 5 minutes.

~~~
zelly
> ML is the ultimate winner-take all technology.

Hence the flood of ML grads which will only get bigger and bigger as time goes
on.

As a general observation of business, returns per unit of labor can range from
completely linear (factories, hospitals) to more nonlinear in things like
software or art. ML research is at the far end of this. Yes, Bengio and
Goodfellow get paid 7 figures, but that's because even if you pay 100 amateurs
1/100th of that, they will still produce worse work.

If you want to be a good ML researcher, the time to get in was 10 years ago.

~~~
ThePhysicist
For broad research, yes. I think there are still many niches, though, where if
you can combine domain expertise with good knowledge of ML you can achieve
great results, simply because there will not be many people with that skill
combination.

------
seibelj
I wrote an article I published today about how AI is the biggest misnomer in
tech history [https://medium.com/@seibelj/the-artificial-intelligence-
scam...](https://medium.com/@seibelj/the-artificial-intelligence-scam-is-
imploding-34b156c3537e)

I wrote it to be tongue-in-cheek in a ranting style, but essentially "AI"
businesses and the technology underpinning them are not the silver bullet the
media and marketing hype has made them out to be. The linked a16z article
shows how AI is the same story everywhere: enormous capital to get the data
and engineers to automate, but even the "good" AI still gets it wrong much of
the time, necessitating endless edge cases and human intervention, until
eventually it's a giant ball of poorly-understood and impossible-to-maintain
pipelines that don't even provide a better result than a few humans with a
spreadsheet.

~~~
blasphemous
Well said. The tech is not ready for what people are trying to use it for (for
the most part). The problems people are trying to solve are still best served
by clever algorithms + data.

------
denisvlr
This article really resonates with me: I'm currently bootstrapping an applied
AI service and have faced most of the challenges mentioned, to the point where
I was doubting myself because I was not as efficient or productive as other
SaaS/software founders. So this article is kind of reassuring to me.

A few more comments:

Cloud infra: For a traditional web app you can quickly deploy on a cheap AWS
or Heroku VM for a few dollars/mo and later scale as you get more traffic.
With AI you now need expensive training VMs. There are free options such as
Google Colab, but they don't scale beyond toy projects or prototyping. AWS's
entry-point GPU instance (p2.xlarge) is $0.90/hour, i.e. $648/mo, and a more
performant one (p3.2xlarge) is $2160/mo. Yes, you should shut them down when
you are done with training, but still. You can also use spot instances to
reduce cost, but they're not straightforward to set up.
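
Those monthly figures are just the hourly rate times the hours in a month; a quick helper makes the "shut them down when done" point concrete (rates are the on-demand prices quoted above and vary by region):

```python
def monthly_cost(hourly_rate, hours_per_day=24, days=30):
    """Back-of-envelope GPU bill for one instance, in dollars."""
    return hourly_rate * hours_per_day * days

print(monthly_cost(0.90))                   # p2.xlarge, always on: ≈ $648/mo
print(monthly_cost(3.00))                   # p3.2xlarge, always on: ≈ $2160/mo
print(monthly_cost(0.90, hours_per_day=8))  # training 8h/day only: ≈ $216/mo
```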

For inference, you also need a VM with enough memory for your model to fit in,
so again an expensive VM from day one.

Datasets: If you rely on a publicly available dataset, chances are there are
already 10 startups doing the same product. In order to have a somewhat unique
and differentiated product, you need a way to acquire and label a private
dataset.

Humans in the loop: The labeling part is very tedious and costly, in both time
and money. You can hire experts or do it yourself at great cost, or you can
hire cheap outsourced labor who will deliver low-quality annotations that you
will spend a lot of time controlling, filtering, sending back, etc.
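
One standard way to control those low-quality annotations is redundant labeling with a majority vote, sending low-agreement items back for re-labeling; a minimal sketch:

```python
from collections import Counter

def aggregate(labels, min_agreement=2/3):
    """Majority-vote several annotators; None means 'send back for re-labeling'."""
    winner, votes = Counter(labels).most_common(1)[0]
    return winner if votes / len(labels) >= min_agreement else None

print(aggregate(["cat", "cat", "dog"]))   # → cat (2 of 3 agree)
print(aggregate(["cat", "dog", "bird"]))  # → None (no agreement, re-label)
```

The trade-off is cost: every item is now labeled several times, which is exactly the margin pressure the article describes.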

For inference, depending on your domain, even with state-of-the-art
performance you may end up with, say, 90% accuracy, i.e. 1 angry customer out
of 10. That's probably not acceptable, but it gets worse: chances are you will
attract early-adopter customers faced with hard cases, whose current solution
doesn't work; that's why they want to use your fancy AI in the first place. In
that context, your accuracy for this kind of customer might actually be much
worse. So again you need significant human resources to control inferences in
production. It will be hard to offer real-time results, so you may have to
design your product to be async and introduce some delay in responses, which
is maybe not the UX you initially had in mind.

I still think there are tremendous opportunities in applied AI products and
services, but it's important to have these challenges in mind when planning a
new product or startup.

~~~
redisman
How does it compare with building your own GPU-heavy machines? I'm not too
familiar with how consumer-grade GPUs fare in these workloads, but training
sounds like it could, in theory, happen locally more easily than any
consumer-facing parts.

------
deepGem
My greatest takeaway from fiddling around with machine learning for the past
few years is this:

Most business use cases are deterministic in nature. It is really hard to
shoehorn a probabilistic response into such use cases, as illustrated in this
article. It's even harder to convert this probabilistic response into a
deterministic one (humans in the loop) at a cost comparable to just
maintaining the deterministic system.

There are a few business cases that are naturally probabilistic: ad bidding,
stock price prediction, recommendations. The giants have taken these over
already.

I am still scratching my head to find a use case where a startup can jump in,
use machine learning, and solve a growing business need. There are probably
some healthcare-related use cases, but I have no idea about that domain.

~~~
calebkaiser
To me, the more interesting question with ML isn't "How can I build an entire
business around a model," but rather "How can I exponentially increase a
product's value with machine learning?"

For example, I don't theoretically need Netflix/Spotify/YouTube to have a
great recommendation system. All I _need_ them to do is stream media to my
device. However, their recommendation systems add immense value to their
product for me.

A similar example would be navigation services (Uber/Google Map's ETA
prediction, for example). I don't necessarily need those apps to give me
predictions about traffic patterns—their core functionality is just giving me
directions—but it makes the product much more valuable to me.

For those businesses, this increase in value leads to an increase in usage and
therefore revenue—it's not simply an abstract "nice to have."

I think the argument about the value of ML oftentimes comes down to a binary
decision of whether or not ML can 100% replace humans in a given domain, and I
think that's a flawed model. ML is capable of improving most products, in my
opinion, and we're already seeing it happen.

~~~
TheColorYellow
I would take it a step further. What you're talking about is incremental
improvements to traditional services or products.

What the poster above you is getting at, whether they realize it or not, is
that they are trying to identify markets where ML could exist solely as the
service or product.

I don't think the latter is ever going to be possible for ML, much as that
same comment alludes to.

Maybe that makes ML a bit less sexy, but it's also probably true of any
innovative technology.

------
tehsauce
I've revised the naivety out of their 3 points at the beginning of the article

\--------

In particular, many AI companies have:

1. Lower gross margins due to heavy cloud infrastructure usage and ongoing
human support ---> Their product is actually an army of expensive humans

2. Scaling challenges due to the thorny problem of edge cases ---> Their
products don't actually work

3. Weaker defensive moats due to the commoditization of AI models and
challenges with data network effects ---> No truly valuable technology or
systems

~~~
ssivark
It’s important to keep in mind that most of the AI “innovation”/hype is coming
from a handful of companies looking to make money from cloud computing or
whose moat is built on data. The good old story of making money in a gold rush
by selling tools...

I would venture that the most hyped stuff (deep learning) doesn’t really work
very well in practice, and the stuff that works reliably well is boring enough
to barely register on the hype train as “AI”. To be fair, that’s probably an
exaggerated caricature, but that is my reading between the lines of SOTA AI
results.

Very few people (even among ML researchers) appreciate the fundamental
limitations of associational reasoning (rather than causal reasoning). It’s
going to be an interesting time when this message actually sinks in...

~~~
2sk21
I really like your point about this being "associational reasoning". There is
a lot more to intelligence than just learning associations.

------
pcmaffey
As they say: ML is written in Python; AI is written in PowerPoint.

~~~
philips4350
I mean, you could write AI in PowerPoint ...
[https://www.youtube.com/watch?v=uNjxe8ShM-8](https://www.youtube.com/watch?v=uNjxe8ShM-8)

/s

------
smoyer
My favorite way of describing how AI is different:

      Traditional software: input + program = output
      AI:                   input + output = program
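
Made literal with scikit-learn (a toy sketch): feed inputs and outputs in, get the "program" (a fitted model) back, then run it on new input like traditional software:

```python
from sklearn.linear_model import LinearRegression

# input + output = program: the rule y = 2x is inferred, never written down.
X = [[1], [2], [3], [4]]
y = [2, 4, 6, 8]

program = LinearRegression().fit(X, y)

# The learned "program" applied to new input, like traditional software.
print(round(float(program.predict([[5]])[0]), 3))  # → 10.0
```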

------
perlgeek
I don't think AI is a business.

There are lots of business workflows that AI can improve or enable in the
first place, and you might be able to make a business out of _that_, but you
should be aware of the difference.

If you want to be in the business of enabling / improving business workflows,
you should be very explicit about that, and AI could be one of many techniques
in your tool belt -- but hopefully not the only one.

Starting an "AI business" sounds like "I've got a shiny new hammer, what nails
can I hit with it?", which is kind of backwards from my understanding of how
businesses should be run. It might work, but it's also easy to fall into the
trap of thinking AI should be your only tool.

------
pgcj_poster
I think the main difference between AI and traditional software is that the
latter is actually useful in the real world.

The place where AI has had the most impact is image recognition, and that's
cool, but it's a very small part of what we use computers for. As someone who
spends almost every waking hour in front of a computer, I don't think I use
any software that employs AI for any useful purpose, except maybe search
engines, and I'm pretty sure those were a lot better before they used AI.
GPT-2 is neat, but there's literally no circumstance under which I would want
to use GPT-2-generated text for anything except experiments or toys. And while
I'm sure Google makes good use of AI for spying on people and recommending
YouTube videos to turn my brother into a Nazi, I'd much prefer they didn't do
that.

~~~
thrav
I worked at an AI startup that pioneered modeling over inbound leads and sales
outcomes (most of this data is in Salesforce + Marketo/Pardot/Hubspot), and
that application was hugely valuable for a few companies with massive inbound
volume. Zendesk and New Relic were the poster-child success stories, since
they didn't have to massively scale their sales development teams to call
every mom-and-pop or student exploring a trial. It very effectively
highlighted the 5-10% of their volume that could actually yield real money.

Unfortunately, there’s not that many companies with that problem, and the
market never materialized to the necessary degree and they sold the existing
contracts and pivoted. That said, it was a home run for what it did.

If you find the right problem, AI can create massive value. The hard part is
never the learning, it’s finding and framing the right problem.

------
sjg007
Interesting take. Someone who comes up with a good NLP AI, though, will
probably be closer to 80% margins because you've got a shared model. Image
processing, too, is usually built on top of pre-trained models and works
surprisingly well. This is known as transfer learning. So I think we will see
more online AI models.
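
The transfer learning the comment mentions can be sketched in a few lines: freeze a feature extractor and train only a small task-specific head on your own data. (A toy stand-in: the "pretrained backbone" here is a fixed random projection, where in practice it would be something like BERT or a ResNet trained on a large corpus.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone; its weights are never
# updated while adapting to the new task.
W_frozen = 0.1 * rng.normal(size=(64, 16))

def features(x):
    # Frozen layers, reused for both fitting the head and prediction.
    return np.tanh(x @ W_frozen)

# Small task-specific dataset: only the cheap "head" is trained on it.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)

head = LogisticRegression().fit(features(X), y)
acc = head.score(features(X), y)
print(f"head accuracy on frozen features: {acc:.2f}")  # well above chance
```

The economics point follows: the expensive part (the backbone) is shared across customers, and only the small head is per-customer.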

~~~
andrewmutz
I agree that cloud operating costs are lower for NLP AI, but for a different
reason than the one you cite: the type of data it operates on is much more
compact than for AI operating on images, video, or audio.

------
shadowtree
"AI" products are consulting heavy as 80% of the effort is spent on data. Real
world data inputs are messy, most need human curation up front.

Simple example: FMCG/Retail would like to use apps to scan shelves for their
own product placement and their competitors. Simple, right? But getting those
packshots, account for lighting and placement conditions, ... you need an army
of people to do QC.

AI really just moves the analytics/BI load from a company to a vendor - hence
it is consulting. Good business to have, but human labor always has shittier
margins than software.

------
natmaka
Isn't 99.99999...% of currently used AI good ol' statistics, or (not XOR)
classical operations research, or (not XOR) well-known and rather simple NLP?

~~~
jl2718
The more specific the use case, the more that human-engineered tools/rules
will dominate. E.g. Regex beats BERT 99% of the time for any specific use case
of value to a real customer, but BERT does a decent job in open domains
without much effort.
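
A concrete instance of the point: for a narrow, well-specified extraction task (the invoice-ID format here is hypothetical), a regex is exact, fast, and needs no training data:

```python
import re

# Specific use case of value to a customer: pull invoice IDs of a known format.
INVOICE_ID = re.compile(r"\bINV-\d{6}\b")

text = "Please pay INV-004213 and INV-009870 by Friday; ignore INV-12."
print(INVOICE_ID.findall(text))  # → ['INV-004213', 'INV-009870']
```

A model like BERT only earns its keep when the input format is open-ended and no such pattern can be written down.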

~~~
natmaka
Where is such 'decent job' part of the wonders touted as "AI", which widely
successful product/service sold as AI-powered benefits from it?

------
KKKKkkkk1
Over the decades that computers kept getting smaller, there were lots of new
businesses, services, and products that were created thanks to this process.

Now that we're in a phase where we're making our DNNs deeper and deeper, what
are some new successful businesses, services, and products that are springing
up as a consequence? I can think of Amazon Echo. Any others?

~~~
tommit
How about the ability to search for objects in your pictures on your phone
(granted, this is not a business in itself, but rather a new feature on
existing products), or using an online translator to translate almost anything
in any language (including idioms) with something like Google Translate or
DeepL, or fun niche products that give us a glimpse into what could be
possible in the not-so-far future, like AI Dungeon... and that's excluding
SaaS businesses that sell solutions to other businesses directly, which as a
consumer you wouldn't really hear about. I've worked at two of those companies
in the past.

Machine learning techniques have been applied to numerous fields. Right now,
they're often more of a supporting feature than a product in themselves. But
they are out there. And while I agree that a lot of companies are just
throwing out buzzwords, it's definitely not all snake oil.

~~~
pingyong
>ability to search for objects in your pictures on your phone

This is pretty cool, can save a lot of time in some cases, and doesn't
disappoint because object recognition (even by humans) is unreliable by
nature.

>use an online translator to translate almost anything in any language
(including idioms) using something like google translate

That's debatable at best. Google Translate, at least, is still horrible at
translating Asian languages to English, to the point where it is pretty much
completely useless. For longer texts, even German to English or Spanish to
English produces a lot of nonsense, although it is usually at least somewhat
intelligible if you're familiar with the sorts of mistakes Google Translate
tends to make.

Anecdotally, I don't use Google Translate any differently than I did 7 years
ago. It's good for single words or expressions, okay for single sentences if
you are aware of its pitfalls, and pretty useless for long articles unless all
you want is a general idea of what the article is about.

>AI Dungeon

I mean, everything we've seen from AI Dungeon was hilarious because of its
nonsensical flaws. If those flaws can somehow be removed in the future, we'll
be stuck with a version where every individual sentence kind of makes sense
but there is no coherent connection or storyline, i.e. a product that nobody
would use.

~~~
tommit
You definitely have good points, but since the post I responded to gave
Alexa as an initial example, I kind of took that as a baseline.

I can't really argue with anecdotes, but I can assure you that a whole lot has
changed in Machine Translation in the past 7 years, starting with Google
introducing Neural Machine Translation[1] to replace their statistical model,
which would often behave in the way you described. That's why I specifically
included idioms, which hadn't really been possible until then. It's not yet
perfect, but it's crazy good at what it does and only getting better.

I know AI Dungeon was really wonky, but I also said it gives us a glimpse of
what may be possible. Because the way it interprets natural language is really
something else (granted, that's the underlying model, of course). It's really
a product still entirely in its infancy. And I don't think AI Dungeon will
take the world by storm much more than it has done thus far, but I could
imagine countless applications for a similar but improved technology.

I don't know, OP was asking for products and services, and I gave some
examples. Are they flawless? No. Is most technology flawless? No. Will there
be a growing market for somewhat imperfect AI-based applications? I surely think
so. In the end, humans aren't flawless either.

[1]
[https://en.wikipedia.org/wiki/Google_Neural_Machine_Translat...](https://en.wikipedia.org/wiki/Google_Neural_Machine_Translation),
[https://arxiv.org/pdf/1609.08144.pdf](https://arxiv.org/pdf/1609.08144.pdf)

------
specialist
I worked on the recommender and personalization stuff for a midsized fashion
retailer. It took me about 2 years to figure out most of the work done by our
ML & big data teammates was pure fiction.

I believe, but have no conceivable way to prove, that all of the easy wins
have been won by the first movers.

It's weird. I have almost irrational optimism about the unbounded potential
for deep learning, ML, big data, and so forth. Amazing, almost magical, stuff
like Kaiser Permanente (KP) figuring out that Vioxx was killing their
patients once they had enough data and asked the right questions.

But I'm also completely skeptical about using these techniques for culture and
entertainment and advertising. Methinks that ceiling has already been hit and
there's nothing on the horizon.

Too bad. I really wanted our internal version of Stitchfix to actually work.
Now I'm not sure it's possible. Or that we (or maybe just me) were asking the
wrong questions.

------
thulecitizen
Are AI startups just a business model built around some proprietary
code/software?

------
willart4food
Great article, so... the opportunities for innovators and investors are:

\- new OS paradigm: skunkworks in a garage somewhere in the world

\- new hardware paradigm: skunkworks in a garage somewhere in the world

\- new AI business: quietly disrupting a niche of a niche

------
ByronFortescue
Just a side note, but please; on-premises, not on-premise.

~~~
natch
Just a side note, but please, use a comma to separate two dependent and
incomplete clauses, not a semicolon or colon.

------
fa7pdn
data is the new oil!

------
chevman
My prediction is that the first really huge AI breakout company will be the
one that figures out how to deploy AI to automate and scale data
normalization/processing/integration pipelines and the downstream features
that drive value.

~~~
adtac
Is this satire? Because I can't tell anymore

~~~
jmnicolas
I expected to read the keyword 'webscale' at some point but was left
disappointed ...

------
streetcat1
Ways to solve some of the issues mentioned:

1) Using containers, the SaaS software can be downloaded to the client's
cluster and managed by Kubernetes operators. The cost of training and
storage is then borne by the clients themselves rather than by the SaaS
company.

2) The use of AutoML should increase the productivity of the startup's
employees (especially for the ongoing retraining of models, deployment,
monitoring, etc.).

The one problem that will always be there is new data and edge cases in the
data. I believe this will be the major obstacle for the next 5 years.

Also, I would expect the number of models to actually explode (assuming they
are trained and deployed by AutoML). Case in point: Uber, with models per
city/time of day (thousands of models in production).
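
To make the AutoML point concrete, here is a minimal sketch of what an
automated retraining step might look like, using scikit-learn's grid search
as a crude stand-in for a full AutoML system. The models, hyperparameter
grids, and synthetic data are all illustrative assumptions, not anything
from an actual deployment:

```python
# Minimal stand-in for an AutoML retraining step: given fresh data,
# search over candidate models/hyperparameters and promote the best one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def retrain(X, y):
    """Search a small model/hyperparameter grid; return the best estimator."""
    candidates = [
        (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
        (RandomForestClassifier(random_state=0), {"n_estimators": [50, 100]}),
    ]
    best_score, best_model = -1.0, None
    for estimator, grid in candidates:
        search = GridSearchCV(estimator, grid, cv=3)
        search.fit(X, y)
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_
    return best_model, best_score

# Synthetic "per-segment" data; in the Uber-style setup above, a loop like
# this would run once per city/time-of-day segment.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model, score = retrain(X, y)
```

A real AutoML pipeline would also automate feature engineering, deployment,
and monitoring, but the core loop (search, score, promote the winner) looks
much like this.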

~~~
ganoushoreilly
This doesn't really work, as most AI modeling is very resource-intensive,
requiring GPUs/TPUs that most businesses aren't going to have.

~~~
andreilys
This can be circumvented through the use of transfer learning and super
convergence.

You can now train a state-of-the-art image classifier with nothing more than
a Google Colab notebook (free to use).

That's for deep learning. For classical ML (e.g. XGBoost), it really doesn't
take much time or compute to create a model that has business value.

