>a piece of software which uses databases of expert knowledge to offer advice or make decisions.
The methods of inference have improved from predicate logic to statistics and "machine learning" now that computers have gotten much faster.
(I'm bootstrapping an expert system myself)
I really think striving to find specific problems where AI can add value is critical to making money in the space, and I agree with this article strongly.
Universal human abilities like speech and face recognition have made some low-skill jobs harder to automate than expert ones. We've only just invented algorithms that solve those problems adequately. If your vertical ML application is a machine that picks fruit when it's ripe, it's a stretch to call it an expert system.
How does the machine recognize fruit? It must have some training data set to classify what is fruit and what is not, and which ones are ripe. Humans may be "experts" at simple stuff like this, but one cannot easily generalize it to a machine that also harvests potatoes without providing "expert" knowledge about potatoes.
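To make the point concrete, here's a toy sketch of what "training data to classify ripeness" might look like. Everything here is hypothetical: the features (hue, firmness), the labels, and the nearest-centroid model are just the simplest possible stand-in for whatever the real machine would use. Note that nothing in it transfers to potatoes without potato data.

```python
import math

# Hypothetical training data: (hue, firmness) feature pairs labeled by a
# human "expert" as ripe or unripe. A real system would use images.
TRAINING = {
    "ripe":   [(0.90, 0.30), (0.85, 0.35), (0.80, 0.40)],
    "unripe": [(0.30, 0.90), (0.35, 0.85), (0.40, 0.80)],
}

def centroid(points):
    """Mean of a list of 2-D feature points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Assign the label whose class centroid is nearest (Euclidean)."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((0.88, 0.32)))  # a red, soft fruit -> "ripe"
```

Swap in potato features and the model is useless until someone supplies labeled potato examples, which is exactly the "expert knowledge" bottleneck.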
Heck, I used PHP to interface with CLIPS to provide a simple suggestion engine for a college project back in 2002. It had a simple wizard flow where it would ask a few questions, shell out to CLIPS for the next series of questions, do that for a bit, and return a suggested product.
1 - https://www.drools.org
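The wizard flow described above can be sketched without CLIPS at all: a rule engine is, at its core, a loop that fires the first rule whose conditions match the facts gathered so far. This is a minimal Python stand-in, not CLIPS syntax, and all the products and questions are made up.

```python
# Toy stand-in for the CLIPS-backed wizard: each rule inspects the facts
# gathered so far and either asks the next question or suggests a product.
# Rules are (condition, action); actions are ("ask", key, prompt) or
# ("suggest", product). Products/questions are hypothetical.
RULES = [
    (lambda f: "budget" not in f,    ("ask", "budget", "What is your budget?")),
    (lambda f: "use" not in f,       ("ask", "use", "Home or office use?")),
    (lambda f: f["budget"] == "low", ("suggest", "BasicBox 100")),
    (lambda f: f["use"] == "office", ("suggest", "OfficePro 900")),
    (lambda f: True,                 ("suggest", "HomeDeluxe 500")),
]

def next_step(facts):
    """Forward-chain: fire the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(facts):
            return action

facts = {}
print(next_step(facts))                          # asks for budget first
facts.update(budget="high", use="office")
print(next_step(facts))                          # ("suggest", "OfficePro 900")
```

A real CLIPS program would express the same thing declaratively with `defrule`, and the host language (PHP in my case) just shuttles facts back and forth.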
The video contains bits that aren't fully reflected in the post itself, so it's worthwhile to watch the whole thing. The most important part IMHO is that you absolutely have to start with a known problem and work out the solution from there, not vice versa. This is a trivial insight for most startups, but a surprisingly common trap for even the smartest AI/ML people.
It's very easy to get blinded by the sheer awesomeness of the data/model you stumbled upon, all while ignoring that you haven't fully grasped the problem space yet.
Like the article and some of the comments here suggest, it took years of domain expertise (most of the team comes out of the Tea Institute at Penn State, a research institute for tea and tea tasting), followed by years of R&D to collect the proprietary data sets and develop the models. And then it took a year or so to build a product around the AI's predictive capabilities - this isn't the shortest or easiest path, but we're still going strong!
I think companies like this are hard to build, hard to fund, and hard to compete with.
Where I disagree with other comments is on the competitive side: we've developed a few of our own algorithms (not generic or merely "played with some options" neural nets / deep learning), trained on specialized and proprietary data sets from years of work and collection. Now that we've dug our moat, I don't think anyone will be competitive with our specialized AI for modeling human sensory perception and predicting preferences for food and beverage products anytime soon!
Industry-specific customized solutions would require SMEs to build. At this point, you are essentially buying people skills, not products. You'd also see a large increase in price points, which companies may not be willing to pay. Everyone wants something quick and cheap. AI is not a panacea or a magic wand, and the hype is more detrimental than helpful to its progress. Once the disillusionment sets in, people will blame the technology rather than the people who decided to use AI in the wrong contexts.
I've always been a bit confused about what the end goal will look like for 'general AI'.
Or is there potential for some general platform that these vertical AI systems can plug into, similar to the app store, but with some type of cross-vertical communication so they can be layered on top of each other.
He is trying to show that AI that solves business problems is more valuable than AI that scores 1% higher on some benchmark.
The tech value proposition is in running these algorithms, often TensorFlow, cheaply at scale in production.
Companies like Google/Facebook/Palantir often have access to very similar, supposedly hard-to-get datasets, plus a lot more engineering expertise in running these systems at scale.
Why can't they start playing whack-a-mole, pumping out vertical products that present a serious threat to these smaller startups? Maybe it's not worth it for them, but there is a fair bit of cash there?
For example, DeepMind with healthcare, and the Google jobs API?
IMO, it would be pretty mad to start a company today around a general-purpose conversational AI or a general-purpose photo recognition service. Applying the same technologies to smaller, more specific groups of users is a much less risky path.
I highly doubt Google/Baidu will get into the business of recognizing manufacturing defects in fidget spinners or analyzing sensor data to predict when a punching press is about to fail.
They will build platforms for:
- IoT - a single robotics kit/SDK that lets you easily install sensors and integrate the data into the rest of the platform
- Cloud ML - image recognition and model optimization that lets you train a model and detect the defects from the previous step in two mouse clicks
and take a lot of the added value from small companies.
Only a small group of tech startups need to care about 'defensibility'. If they did, then patents would actually be useful in the software world.
For nearly all tech startups, the initial software that addresses the problem is easy to reproduce. It's all of the supporting systems, marketing, lessons learned from putting it into production, etc, etc, that make any company difficult to duplicate.
I wish building startups was only about the technology and defending it.
They're not quite as big as you think they are.
Also sometimes there are some serious... hindrances... for big companies to execute on certain ideas. Disagreements on how to execute or on what to execute, for example.
However, incorporating mature ML methodologies (meaning there's a library for it) into subject-matter problems is now adding a tremendous amount of value in the research I do.
- Is fundamentally tied to data that is proprietary and difficult to gather.
- Is intrinsically extremely complex to build and maintain.
- Can only be built by a diverse team which is difficult to gather.
Then... well, yes, I guess you could say those are a good metric for determining whether a product can easily be replicated.
...but that's not the same thing as it being useful or profitable.
It just means that the team has a bit more time to try to figure out those other two important things before someone else comes along and copies what they've done.
> My claim is that Vertical AI startups are inherently defensible.
Putting 'vertical' in front of 'AI' doesn't magically make things better than just 'AI'.
The problem with these products is that you can't take a trivial 'proof of concept' or MVP, pitch it, and then roll it out 'into production'.
This isn't some 'smoke and mirrors' jazzy demo of a website & app combo you can go away and implement properly later... the proof of concept you build may not scale. Like... it may actually not be possible to scale. Maybe it takes too much compute to train; maybe it takes data you don't have; maybe it turns out your data isn't suitable.
You think building a business and getting users and sorting your workers and so on isn't hard enough?
Try adding a product that may or may not actually ever work into the mix. Sound scary yet? It should sound scary. That's the sound of money draining out of a hole in the floor.
...and sure, you argue, these models do work and they are good; but there's the catch right there... :)
...if you use a model that does work and isn't risky and does use available data... then you lose all the points at the top that made it an interesting business to invest in.
I think the author makes excellent points.
It's hard to build a general AI business. E.g. a computer vision API provider. The technology is so democratized that you can't compete on algorithms alone. These APIs/services are more or less commodities nowadays.
Compare that to credit card fraud detection. For many years, there was one company/product (HNC/FICO/Falcon) that dominated the market (and largely still does) because they had a monopoly on the data. They smartly created a consortium and only they have the rights to train models on the data. They still use a relatively simple feedforward neural network with a ton of hand-tuned features. This is an example of vertical AI that created a wildly successful company.
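For intuition, here's roughly what "hand-tuned features feeding a simple feedforward network" can look like. This is an illustrative sketch only: the features, profile fields, and weights are made up (and untrained), not Falcon's actual design, but it shows the shape of the approach, where domain knowledge lives in the feature engineering rather than in an exotic model.

```python
import math

def features(txn, profile):
    """Hand-tuned features of the kind described: deviations from a
    cardholder's historical profile, not raw transaction fields.
    (Field names here are hypothetical.)"""
    return [
        (txn["amount"] - profile["avg_amount"]) / profile["std_amount"],  # amount z-score
        1.0 if txn["country"] != profile["home_country"] else 0.0,        # foreign txn
        txn["txns_last_hour"] / 10.0,                                     # velocity
    ]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative, untrained weights: one hidden layer with two units.
W1 = [[1.2, 0.8, 1.5], [0.5, 2.0, 0.3]]
B1 = [-1.0, -0.5]
W2 = [1.5, 1.1]
B2 = -1.2

def fraud_score(txn, profile):
    """Forward pass of a tiny feedforward net over the engineered features."""
    x = features(txn, profile)
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, B1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + B2)

profile = {"avg_amount": 50.0, "std_amount": 20.0, "home_country": "US"}
routine = {"amount": 45.0, "country": "US", "txns_last_hour": 1}
risky   = {"amount": 900.0, "country": "RO", "txns_last_hour": 8}
print(fraud_score(routine, profile) < fraud_score(risky, profile))  # True
```

The model itself is trivial to copy; the consortium data used to tune the features and weights is not, which is the whole point of the example.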
My point is, so what? That's not a good indicator of anything meaningful.
The values that make them 'defensible' only make them difficult to replicate. That might look good on paper or in a pitch ('no one else can do this...'), but they're not indicators of value, and they pose significant risks to execution.
You don't have to agree, I really don't care; but this article sounds more like AI hype to spin to investors than meaningful advice.
Picking ML models that are difficult to replicate, hard to obtain data for and require a diverse team is not what you should be trying to do.
Focus on a problem that has value in an industry that has money. Collect as much data as possible to prototype and use the tool itself to collect additional data as it is used.
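The "use the tool itself to collect additional data" loop can be sketched in a few lines: log every prediction, and let user corrections flow back as fresh labeled examples. Everything here is a stand-in; `predict` is a placeholder for whatever model the product ships with, and the log format is just a CSV for illustration.

```python
import csv
import datetime

LOG = "labeled_examples.csv"

def predict(features):
    """Placeholder model; the real product's model goes here."""
    return "positive" if features[0] > 0.5 else "negative"

def serve(features, user_correction=None):
    """Answer the user's request, then log the example so it can be
    used as training data later. A user correction beats the model's
    own label, which is what grows the labeled data set over time."""
    label = predict(features)
    final = user_correction or label
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), *features, final])
    return final
```

Each call to `serve` both delivers value and appends a labeled row, so the data moat deepens as a side effect of normal usage.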
I don't see that as hype, I see that as a useful roadmap.