
The Artificial Intelligentsia - jamiehall
https://thebaffler.com/salvos/the-artificial-intelligentsia-timms
======
dsacco
Offtopic, but I have a really difficult time reading articles like this. I
don’t know if this reflects a problem with the style or my ability to focus,
but I find it really annoying:

 _> “SANDHOGS,” THEY CALLED THE LABORERS who built the tunnels leading into
New York’s Penn Station at the beginning of the last century. Work distorted
their humanity, sometimes literally. Resurfacing at the end of each day from
their burrows beneath the Hudson and East Rivers, caked in the mud of battle
against glacial rock and riprap, many sandhogs succumbed to the bends.
Passengers arriving at the modern Penn Station—the luminous Beaux-Arts hangar
of old long since razed, its passenger halls squashed underground—might
sympathize. Vincent Scully once compared the experience to scuttling into the
city like a rat. Zoomorphized, we are joined to the earlier generations._

This goes on for about seven paragraphs before I have any idea what the
article is about. I understand “setting the scene” but I can’t tell whether or
not to care about an article if it meanders about with this flowing exposition
before indicating what its central thesis is.

It seems like a popular style in thinkpieces and some areas of journalism. The
author writes a semi-relevant title, a provocative subtitle, and five to ten
paragraphs of “introduction” that throw you right into the thick of a story
whose purpose doesn’t seem clear unless you already know what the article is
about. Rather than capturing my attention with engaging exposition, I find it
takes me out of it. But it must work if it’s so ubiquitous; presumably their
analytics have confirmed this style is engaging.

~~~
sullyj3
The next sentence afterwards is a monstrosity:

"But, I explained to my work colleagues as the Princeton local pulled out from
platform eight and late-arriving passengers swished up through the carriages
in search of empty seats, both the original Penn Station and its unlovely
modern spawn were seen at their creation as great feats of engineering."

I had to highlight between the commas to get through that one.

~~~
maxxxxx
It seems growing up with German is a great preparation for such sentences :)

------
mogget
A thought: Don't let some of the (valid) criticism alone dissuade you from
reading this.

IMO the author makes some very valid points about fuzzy products and endpoints
in the current AI/data/ML/magic craze. These are under-articulated elsewhere,
because, well hey there's a lot of money flowing! Who wants to be a killjoy
and not "get it" (just like in 1999 ;)?

Two more specific points: 1. The descriptions of the CEO are eerily familiar
to me. This guy is almost an archetype. Reminds me of a person I've worked
with in that role who was also associated with a similar-ish company. It
really paints the con-game side of all this.

2. A deeper point (and worth the read for me) was the author's thinking about
how all this didn't fit existing needs and workflows and then has a chilling
thought: "It’s possible that the market for a user-hostile data system that
inaccurately predicts the future and turns its human operators into automatons
exists after all, and is large." You can make an argument that this kind of
thing has already happened in modern customer service and, with greater
negative impact, in healthcare. I.e. where the tail of easy metrics and
saleable endpoints ends up wagging the dog of quality.

~~~
ghostcluster
The problem, besides the condescending tone towards everyone around him, is
that he doesn't present an understanding of the actual state of the field of
AI and deep learning, and what's worse, he cites bad science essays that will
misinform more people about how a brain works.

There's a meme going round about how the best way to refute an argument is to
'steelman' it: present the best arguments of the opposing side before refuting
them. He doesn't do that here, which is one of the reasons I found it
frustrating.

I agree that the way the venture raising market works today _rightfully_
deserves some fair criticism.

------
fckedml
At this rate, it looks like we need a "Fucked AI", in the style of
"fuckedcompany.com". [1]

These people were eating VC hype money to build Hagbard's FUCKUP from the
Illuminatus! Trilogy. [2]

Not sure who I feel more sorry for. The smart employees wasting years of their
prime chasing some unattainable pipe dream, the VCs who got suckered into
pouring their money into some vaporware precog technology, the author trying
to disguise a shit river with meandering prose, or my upcoming pay cut when
the AI winter sets in.

[1]
[https://en.wikipedia.org/wiki/Fucked_Company](https://en.wikipedia.org/wiki/Fucked_Company)

[2] First Universal Cybernetic-Kinetic-Ultramicro-Programmer (FUCKUP). FUCKUP
predicts trends by collecting and processing information about current
developments in politics, economics, the weather, astrology, astronomy, the
I-Ching, and technology.

------
indescions_2018
Excellent Sunday morning long read!

Some of Predata's recent "insights":

"China Trade War Fears Still Running High"

"Mall Blaze Sparks Outrage Across Russia"

In short, nothing that couldn't be revealed from the briefest skim of
headlines from tomorrow morning's WSJ.com. One can stay better informed
leaving a Bloomberg TicToc (which is partially machine generated) tab open all
day.

My takeaway is that the world of the Jim Shinns is rapidly approaching
extinction. Deals done poolside at country club dinner dances. Name game
shmoozing. And serendipitous encounters on private islands. What was
considered the predominant pathway to immortality in Fitzgerald's day.

Viable alternatives exist now. And any business model solely differentiated by
prestige will be subsumed by free or near-free competition.

~~~
hodgesrm
I enjoyed the article as well (see my comments above). But I would debate your
takeaway. The money quote is in the last sentence:

> Three months later, Predata secured a second round of venture capital
> funding.

People like Jim Shinn will always find a way. At least that's the argument the
author seems to be making.

------
d_burfoot
> Machine learning, the logic- and rule-based branch of AI supporting
> Predata....

That's a _really_ embarrassing mistake.

------
untangle
Flawed? You bet. Overwrought? A bit.

But I found this Sunday AM read enjoyable, articulate, and largely on-point
(overlooking a few minor scientific errors).

The core themes here are about the hubris of a rich CEO/founder, the zaniness
of the current AI "market," and their resultant effect on a particular NYC
startup.

This is a season of "Silicon Valley" (HBO) done east-coast, hedge fund, Ivy
League style.

------
atrexler
Outside of the firms owned/operated by the real clever boys, I wouldn't be
surprised if this describes the vast majority of "AI" efforts unfolding at
dozens/hundreds/thousands of companies. Everybody is getting on the bandwagon,
and most either have no clue or find out at the end of the day that their
customers don't even want what they're selling.

I'd be shocked if anyone in the industry hasn't worked for or with a Jim.
Spot-on.

------
hawktheslayer
Reading about what _Predata_ was trying to do reminds me of the field of
_Psychohistory_ in Asimov's Foundation series.

[https://en.m.wikipedia.org/wiki/Psychohistory_(fictional)](https://en.m.wikipedia.org/wiki/Psychohistory_\(fictional\))

------
not_a_moth
This startup's existence and failure is yet another symptom of how we
grossly overestimate what AI can do. If the task isn't simple, repetitive, or
clearly defined (and real-world tasks rarely are), it's probably not going to
succeed. Are there any AI startups that are an anti-pattern here?

------
rdiddly
The point being made is: Technology without vision is dehumanizing. This is
widely known and is, for example, the reason good schools make undergrad
engineering students take at least a few humanities classes before they leave.

Technology without vision is dehumanizing - it happened with Penn Station,
where narrow quantitative and engineering goals displaced the broader human
ones and led to the widely hated station that's there now, which was excavated
by people who were called hogs, and which makes passengers feel like rats. The
loss is especially acute there, since everybody knows what the old station was
like (
[https://duckduckgo.com/?q=old+penn+station&kp=-2&iax=images&...](https://duckduckgo.com/?q=old+penn+station&kp=-2&iax=images&ia=images)
). It was an edifice comparable to the great _gares_ and _bahnhöfe_ of Europe
(or to Grand Central which for some reason we decided to keep), a monument to
national power, industrial wealth, and the technologies of the time, but also
a space that evoked something a little more noble in the human spirit somehow.

The writer is also drawing a parallel with the dehumanizing effect of the
particular startup he worked for. The analysts are the hogs, he's the rat, his
own perceived loss of creativity (probably a bit exaggerated... aahhh youth)
is the dehumanization part, and the absentee CEO is the lack of vision. (If a
CEO has one function, it's to provide vision. And in second place, not far
behind, is to establish company culture.)

Arguably, placing technical/quantitative goals above more humanistic ones is
what an organization like Nazi Germany was all about. But obviously it's way
more complicated than that, and I don't intend to address it further.

I would point you toward Dmitri Orlov's concept of a _Technosphere_. Analogous
to the "biosphere" it models human technology as a quasi-intelligent entity
that is global in scope.

Book: [https://www.amazon.com/Shrinking-Technosphere-Technologies-A...](https://www.amazon.com/Shrinking-Technosphere-Technologies-Autonomy-Self-Sufficiency/dp/0865718385)

Excerpt (not much exposition but you'll get the point):
[https://cluborlov.blogspot.com/2016/02/the-technospheratu-hy...](https://cluborlov.blogspot.com/2016/02/the-technospheratu-hypothesis.html)

The people here are the ones who most need to hear this message. Some will
doubtless resist the criticism of ML/datasci with the fervor of someone whose
long-held religious belief is challenged for the first time. But you needed
that. Feel free to prove the critiques wrong, by the way... that's kind of the
whole point. Prove them wrong with broad projects that actually benefit
humanity instead of being a mess of unintended consequences and unimpressive
bullshit.

------
ghostcluster
I found the author to be slightly irritating on several occasions, dropping
veiled references to Valleywag-style anti Silicon Valley memes, and then I got
to the part where he regurgitates that idiotic article about the brain not
processing information, and there being something magical about human brains
that cannot be simulated [0].

He is right about his claim of having no right to be called a “director of
research”, as it seems to me his skills center on cribbing thoughts pulled
from other people's thinkpieces. It's clear that he doesn't have a deep
background in either neuroscience or engineering and that he was brought to
the company from a background in business journalism.

In his condemnation of the state of AI research, there is no mention of
AlphaGo, or a description of the teachable pattern recognition techniques that
have swept the deep learning scene over the last 6 years.

I'm sorry to be so harsh, but there is a certain tone to this piece, "let's
hate all those startup a*holes", "Mark Zuckerberg can't write like F Scott
Fitzgerald because his knowledge of liberal arts is too limited, unlike mine"
that seems like a snooty class signaler among a certain hipster set.

There is a compelling story in here, but to me the general attitude is just
condescending to everyone around him.

[0] [https://aeon.co/essays/your-brain-does-not-process-informati...](https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer)

~~~
goatlover
> idiotic article about the brain not processing information

How about the brain creates information from constant interaction with the
world based on the kinds of bodies we have and our needs/wants? This
information doesn't exist as information until the brain creates it.
Information is the product of minds. It doesn't exist in the world on its own
to be processed. As such, the brain is something other than a computing
device. Computers exist because we figured out how to arrange physical systems
to process information that's meaningful to us. But to nature, it's just a
physical system (and not even that, since physics is a model of nature we
create).

That's Jaron Lanier's paraphrased argument against thinking of the brain as a
computer. To say that information exists in the world to be processed is to
make a metaphysical commitment that information exists ready made for us.

> and there being something magical about human brains that cannot be
> simulated

It doesn't have to be magical. There are different philosophical views on the
world and the mind which lead to different conclusions. If one takes the hard
problem of consciousness seriously, then consciousness cannot be computed. Not
because of magic or the supernatural, but just because consciousness is not
computable, since computation is itself an abstraction (Turing machines don't
exist on their own any more than do any other mathematical systems). Unless
your metaphysics falls along the lines of Tegmark, Plato or Wheeler (it from
bit).

Instead you can think of the brain as an information creator. We give meaning
to the world. We build models. The world itself just is, it's not information,
math, physics or symbols.

~~~
adrianN
Computers interact with the world too. I'm looking at a screen that produces
patterns of light based on the internal state of my computer. How is this
different from a brain interacting with the world? The brain is a finitely
sized hunk of matter and matter seems to follow laws. We currently have no
reason to assume that those laws can't be simulated by a sufficiently sized
computer, so anything observable the brain does, a computer can do too.

~~~
goatlover
Yes, computers are physical systems. But what does their interaction mean
without humans around to interpret their output?

------
gaius
_Faced with the impossibility of determining whether a technology is
intelligent or not—since we don’t know what intelligence is—Silicon Valley’s
funders are left instead to judge the merit of a new idea in AI according to
the perceived intelligence of its developers. What did they study? Where did
they go to school? These are the questions that matter_

This is a perfect summary of the VC situation today. Too much money chasing
no-one knows what exactly, but they're sure they'll know it when they see it.

~~~
graycat
From all I've been able to see, that statement

"... judge the merit of a new idea in AI according to the perceived
intelligence of its developers."

about information technology VCs and AI is just totally wrong: I don't believe
VCs do that. Why? Generally, from 50,000 feet up, it's too far from the norms
of the accounting, banking, and investing communities respected by the limited
partners of the VCs. Uh, the limited partners (LPs) are where the VCs get
nearly all the money they invest, and the limited partners are conservative
people, managers of pension funds, university endowments, etc. Not only do the
VCs not do that, the LPs won't let the VCs do that!

Instead, about the shortest believable view I can see is, VCs look for
_traction_ that is significant and growing rapidly in a market large enough to
permit a company worth $1+ billion in a few years.

The VCs view of _traction_ is a weakening of the usual measures the
accounting, banking, and investing communities use and respect of audited
revenue and earnings.

So, sure, the best form of _traction_ will be earnings, then next best,
revenue, next best lots of interested customers, e.g., advertisers willing to
pay for eyeballs, then last best, just lots of eyeballs. In these norms,
intelligence, brilliance, AI, technology, etc. are mostly publicity points,
window dressing, the wrapping paper on a birthday gift, and with a dime won't
cover a 10 cent cup of coffee.

In a sense, the VCs have a good point, more from insight into humans and the
real world than anything in a _pitch deck_: (1) With technology, it's too
easy to push totally meaningless, useless BS. (2) Carefully studying core,
deep, difficult technology is just too darned difficult to be practical for
the VCs.

Or the investors believe in a Markov assumption: The future of the business
and the technology from the past are conditionally independent given the
current traction, its rate of growth, and the size of the market. To be clear,
this Markov assumption does not say that the technology and the future of the
company are independent.
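
As a minimal sketch (my notation, not the commenter's): writing $F$ for the future of the business, $T$ for the past technology, and $R$ for current traction (with its growth rate and market size), the claimed conditional independence is

```latex
% Markov assumption: future F and past technology T are
% conditionally independent given current traction R.
P(F \mid R, T) = P(F \mid R)
```

Note this does not say $F$ and $T$ are independent outright: in general $P(F \mid T) \neq P(F)$, since the traction a company has is itself shaped by its technology, which is exactly the clarification the comment makes.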

The stories in the OP about the company Predata, to abbreviate "predictions
from data", are good: The company was floundering around with guesses about
what would work, e.g., for predicting terrorist attacks, that were like
something from smoking funny stuff.

But here is one big place the VCs and technology are going wrong: We do have
some terrific examples of how to do well. The examples are from the past 70+
years of the unique world class, all-time, unchallenged grand champion of
using advanced, even original, technology for important practical results --
the US DoD.

A grand example is GPS. GPS was by the USAF, but it was a refinement of an
earlier system by the US Navy, for navigation for the missile firing
submarines and started at the Johns Hopkins University Applied Physics
Laboratory JHU/APL. At one time I worked in the group that did the original
work and heard the stories. A key point: The original proposal was by some
physicists and almost just on the back of an envelope. Soon the project was
approved and pushed forward with a lot of effort. Then, presto, bingo, it all
worked just as predicted on the back of the envelope. E.g., a test receiver on
the roof navigated its position within one foot, plenty accurate enough for
the US Navy.

So, net, for project selection and funding, here is the shocking, surprising,
point that the VCs miss: Really, given the back of the envelope work, the rest
was relatively routine and low risk.

And the past 70+ years of the US DoD is awash in comparable examples.

In blunt terms, the US DoD has a fantastically high batting average on far out
projects evaluated just on paper. Given good evaluations of the work just on
paper, the rest is relatively routine and low risk.

Well, that project funding technique does not fully solve the problem of the
VCs: The VCs also need to know that the resulting product will have big
success in the market. But for that there is an okay approach: The dream
product would be one pill taken once, cheap, safe, effective, to cure any
cancer. In that case, the technology is so good for such an important
practical problem in such a large market that there's no question about making
the $1+ billion. So, from this hypothetical lesson, net, need the technology
to be the first good or a much better solution, a "must have", for a really
pressing problem in a big market. So, right, this filter would reject
Facebook, Snap, and more. So, right, need to start with a really big problem
where with new technology, say, as in the US DoD examples, can get a "must
have" solution for a really big problem, and Facebook and SNAP are not such
problems. Just what are such problems? That's part of the challenge. But with
current VCs, come up with such a problem and a solution on paper, with
brilliant founders, with AI, etc., and you will still need more than a dime to
cover a 10 cent cup of coffee. Again, to get VCs up on their hind legs, bring
them good data on traction, significant and growing rapidly in a large market;
if the _secret sauce_ technology helps, fine; brilliant founders, fine; even
if there is no technology, fine; in all cases, what really matters is the
traction.

~~~
untangle
I think that you are over-generalizing. VCs use a number of disparate
investment theses, including gut feel and betting-the-team in a "hot"
(trendy?) space. Another dynamic is funding a team that previously produced a
big win for the VC firm (as appears to be the case here).

And do you have a reference for the "fantastically high batting average" of US
DoD research? Are you familiar with the SBIR program, for example?

I would judge that neither DoD/DARPA nor VCs have a great batting average. But
both have some spectacular wins.

~~~
graycat
> VCs use a number of disparate investment theses

To be more clear, I believe that such other issues, often mentioned, some on
the Web sites of VCs, are nearly all just smoke to hide what I listed as the
main issues. In particular, of course, I was pushing back against the
statement I quoted from the OP -- their statement was much worse than mine!

But here on HN, I warn entrepreneurs who have already sent 100+ e-mail pitch
decks to VCs: I gave my best guess on really how VCs select deals.

Batting average reference? I'm not considering the SBIR program at all. E.g.,
GPS, coding theory, e.g., as part of radar, lots more in high end radar, e.g.,
phased arrays, Keyhole (a Hubble, before Hubble, but aimed at the earth), the
SR-71, the F-117 stealth, the SOSUS nets and adaptive beam forming sonar, some
of ABMs, a huge range of parts of the SSBNs, high bypass turbo fan engines,
the nuclear power reactors on the submarines and aircraft carriers of the US
Navy, and much more were not SBIR projects. I am drawing from early in my
career in applied math and computing for problems of US national security
within 100 miles of the Washington Monument and comparing with what I've seen
in VC work.

The Navy's work on rail guns looks darned promising.

For DARPA, yes, they flop a lot, on their batting average, much more than the
rest of DoD, but DARPA also has some spectacular wins. E.g., of course,
TCP/IP. And they fooled me on their autonomous vehicle "challenge": While I
believe that autonomous vehicles are a long way from being ready for real
roads with real traffic, I can believe that so far already the DoD has gotten
some good progress for some cases of logistics. E.g., one of the issues in
Gulf War I was truck drivers. There an issue was that a lot of the drivers for
the US were women, and the Saudis didn't like women driving vehicles. So,
there was a trick, a deal: The US and the Saudis agreed that when the women
were in uniform and driving US military vehicles, they were "soldiers" and not
women. Otherwise they were still women and could not drive!!!

Uh, the robots of Boston Dynamics are impressive, maybe still less good on
legs than a cockroach, but already or well on the way to being useful for the
US Army.

------
visarga
It's an ad for their company posed as an opinion piece.

~~~
gaius
_It's an ad for their company_

I guess I'm not in their target market then because it reads like a hit piece
- so much so that I was sure that all the names were changed!

~~~
jstewartmobile
I Googled because it sounded too harsh to be real, but they are all real
people!

The author is out of his damn mind for not changing names, but NMP.

~~~
aerodog
He said he didn't have to sign an NDA. I suppose he felt free to say what he
wished!

