
The Competitive Landscape for Machine Intelligence - sdebrule
https://hbr.org/2016/11/the-competitive-landscape-for-machine-intelligence
======
emcq
The PDF [0] is a bit of a grab bag and not as nicely organized as I would have
expected from the past landscapes they've produced.

It's likely this comes from a less technical perspective, but robo-advisors
like Betterment or Wealthfront are not really examples of machine
intelligence. Their whitepapers describe the techniques they use to craft
their portfolios [1]. At best they use optimization on top of a predictive
model, but it seems highly likely that there is manual input. They create a
set of recommendations and execute them for you. There isn't much learning,
data mining, or automated processing of data happening there.

The "Agent Enabler" section seems like it's trying to get at foundational
reinforcement learning companies, but it isn't self-consistent with the
examples provided.

They left companies like the Allen Institute and DeepMind off the research
section.

It's easy to go on, but I think they need a technical editor next time :)

[0] https://hbr.org/resources/pdfs/hbr-articles/2016/11/the_state_of_machine_intelligence.pdf

[1] https://research.wealthfront.com/whitepapers/portfolio-review/

~~~
huffpopo
Go check out their past investments. It's a joke even by SV standards. It
further highlights that they don't have anyone technical on hand who can
point out obviously bad ideas. E.g. Gigster.

Gigster can't even filter out all of the bad customers that GoDaddy
intentionally sends their way. Every additional bad customer costs them
time, money, and reputation. Yet they have the gall to claim they're AI-
powered. Amateurs.

------
the_decider
Under the "Audio" section, we've got Quirious, TalkIQ, Twilio; melodious names
that end with a twirling, soft rhythm. Under "Internal Data", we've got Cycorp
and Palantir and Primer; hard-edged P-prominent words implying secrecy and
steadfast solidity.

------
jmickey
Regarding this - "Model here means business rules, like rules for approving
loans or adjusting power consumption in data centers. In traditional software,
programmers created these rules by hand. Today machine intelligence can use
data and new algorithms to generate a model too complex for any human
programmer to write."

Isn't it a bit problematic that the business rules generated by the model are
too complex for humans to reason about them? How can you rely on the rules to
be 100% appropriate for the task if it's impossible to reason about them?

~~~
iforiq
By "generate a model too complex for any human programmer to write" I believe
the author means manually creating the rules, one by one. Machine-generated
models, even though very complex, can definitely be understood and heavily
audited.

One example is when you fit sparse high-dimensional models to complex data in
a real-time production system. The resulting models may have hundreds of
millions to billions of features with non-zero weights that constantly change
as the underlying data changes. It's impossible for any reasonably sized team
to "hand-code" such a model from scratch in real time. On the other hand,
these hundreds of millions of rules can (and should) be exhaustively analyzed
and audited by slicing and dicing both the model's feature weights and their
performance on the data. As an example, the "R" programming language typically
produces useful, human-interpretable summaries for the models it generates.
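A minimal sketch of the kind of audit described above (this is not LiftIgniter's actual pipeline; the data, the lasso penalty, and the non-zero threshold here are all made-up toy values): fit a sparse linear model by proximal gradient descent, then slice out the features that actually carry non-zero weight.

```python
import numpy as np

# Toy data: 50 candidate features, but only 3 truly matter.
rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[[3, 17, 42]] = [2.0, -1.5, 0.7]
y = X @ true_w + 0.01 * rng.normal(size=n)

# Lasso via ISTA: gradient step on the squared loss, then soft-threshold.
lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
w = np.zeros(d)
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

# The "audit": which features survived with non-zero weight?
nonzero = np.flatnonzero(np.abs(w) > 1e-3)
print("non-zero features:", nonzero.tolist())
```

The point of the sketch is that even though no one hand-wrote the weights, the fitted model is a concrete object you can slice, summarize, and sanity-check against what you expected to matter.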

For reference, I have been involved at Google in building such massive high-
dimensional models for properties like YouTube, and I'm currently a founder of
one of the companies in the HBR report (LiftIgniter, YC W2014). Hopefully that
doesn't make me too biased to respond.

~~~
jonathankoren
If you're looking at model feature weights, you're doing it wrong.

Most models aren't interpretable, and coefficients are highly unit- and
feature-dependent. Discussions involving feature weights beyond "What if we
reduce the feature space?" or "Did we implement this feature correctly?" often
go bad, and they almost always go bad when you're using them to "audit" the
model. I have been in way too many discussions where someone suggested that
the weights were wrong simply because they thought something should "be more
important".
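To make the unit-dependence point concrete, here's a toy illustration (synthetic data, not from any model in the report): refit the same least-squares model after a change of units on one feature, and its coefficient rescales even though the model's predictions are unchanged.

```python
import numpy as np

# Noiseless linear data so the fitted weights are exact.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

w_orig, *_ = np.linalg.lstsq(X, y, rcond=None)

# Express the first feature in different units, e.g. meters -> millimeters.
X_scaled = X.copy()
X_scaled[:, 0] *= 1000.0
w_scaled, *_ = np.linalg.lstsq(X_scaled, y, rcond=None)

# The first weight shrinks by exactly the scaling factor; predictions match.
print(w_orig[0], w_scaled[0])
```

So a raw coefficient magnitude says nothing about "importance" unless you also account for the feature's scale, which is why weight-staring arguments go wrong so often.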

