
Vicarious Systems Says Its Artificial Intelligence Is The Real Deal - churp
http://blogs.wsj.com/venturecapital/2011/02/10/vicarious-systems-says-its-artificial-intelligence-is-the-real-deal/
======
cbcase
I cannot understand why so many obviously very intelligent people decide
that we need another computer-vision startup, because the unfortunate truth
is that computer vision (right now) doesn't work.

Let me qualify that. From the academic / research point of view, there have
been a collection of real successes in computer vision in, say, the last ten
years. But my sense is that what counts as a research success is a long way
from what counts as a practical business success.

For example, the best generic object detector at the moment is probably
Felzenszwalb's, using deformable parts-based models[1]. And it's just not that
good: on the latest PASCAL object detection challenge, you'll see that its
mean average precision is only ~30%.
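For readers unfamiliar with how PASCAL scores detectors, here is a minimal sketch of per-class average precision (the ~30% figure is the mean of this over all classes). The detection list is hypothetical, and the real benchmark additionally matches boxes to ground truth by overlap and applies interpolation; this just shows the core computation:

```python
# Toy sketch of per-class average precision (not the official PASCAL code).
# `hits` marks each detection, sorted by descending confidence, as a true
# positive (True) or false positive (False).

def average_precision(hits, num_ground_truth):
    """Uninterpolated area under the precision-recall curve."""
    ap = 0.0
    true_positives = 0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            true_positives += 1
            precision_at_rank = true_positives / rank
            # Each true positive adds a recall step of 1/num_ground_truth,
            # weighted by the precision at that point in the ranking.
            ap += precision_at_rank / num_ground_truth
    return ap
```

So a detector that ranks a false positive between two correct detections of two ground-truth objects scores `average_precision([True, False, True], 2)`, i.e. well below 1.0 despite finding everything.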

Scott Brown, the interviewee, sets Vicarious apart by highlighting the fact
that their system will be neurobiologically inspired. But the idea of learning
hierarchical systems that mimic the brain's visual processing system is hardly
new, and the jury is still out on whether these systems can do better than the
"hand-coded" systems like Felzenszwalb's. As a random example, see [2].

Like.com showed you can build a business that uses computer vision in some
way. But as Brown snarks, they "use a big bag of different heuristics to
figure out the image." For the time being, that seems to be the only way to
get computer vision to work in practice.

That all said, I wish them luck.

[1] <http://people.cs.uchicago.edu/~pff/latent/>

[2] <http://www.cs.stanford.edu/people/ang//papers/nips07-sparsedeepbeliefnetworkv2.pdf>

~~~
JshWright
Your argument against computer vision startups is that there isn't a viable
computer vision solution at this point?

~~~
neutronicus
Well, his argument is that well-funded, very intelligent people are trying
like hell at computer vision, and not succeeding. That's not a good sign -
you'd prefer that your space has been hitherto overlooked by smart people with
lots of money.

------
aothman
As an AI grad student, I find this kind of sensationalism somewhere between a
minor irritation and a serious threat. AI has always had a severe problem with
over-promising and under-delivering, and I'm of the humble opinion that until
you're actually shipping the most awesome thing in the world you should keep
your mouth shut. If the first thing people associate "AI research" with is
"disappointment", that hurts everybody (particularly NSF funding).

"Brain-based" AI should stay in the dark ages. Optimization-based AI is the
present and the future.

(That said, if you want to talk about your sweet computer vision system that's
"coming soon", go right ahead. Just don't call it AI.)

~~~
euroclydon
Does Numenta fall under "Brain-based?"

~~~
snikolov
Numenta does fall under brain-based. I'm not sure what Vicarious is working
on, but recently Numenta transitioned to radically more biological algorithms.
It would be interesting to compare the two if Vicarious comes out with more
detailed information about their algorithms.

------
asknemo
Speaking as a researcher (both academic and industry) in vision for almost 10
years, I am afraid the founders have quite underestimated the difficulty
of the problem. Even a dog's visual system is very advanced, if you consider
it from the big picture of the evolution of visual sensory systems in animals.
So in the interview, "if you can make a vision system that’s just as good as a
dog" is in some sense analogous to saying "if you can simulate what's produced
from 90% of visual evolution over these millions of years", which is clearly a
bit over-optimistic as a starting goal.

That being said, I wish them luck. It's a worthy try after all.

------
endtime
More accurately: Vicarious says it is trying to make its vision system the
real deal. Smaller problem, and (at least according to the article) they
haven't solved it yet.

Of course, smaller and small are different things - this is still a very hard
thing to do. Hope they succeed.

------
fleitz
Does this mean the AI winter is over?

~~~
rst
Depends what you mean. "Good old fashioned AI" (explicit symbolic
representations and rule-based inference) hasn't made a comeback yet. Even
computational linguistics has largely shifted towards statistical methods
based on crunching large amounts of data. So, AI is a lot livelier now than it
was in the bad periods, but it's a different kind of AI than went cold in the
late 1980s.

~~~
merijnv
I'm not an AI student myself (I do CS, but with an interest in AI and
probabilistic algorithms), and since I'm from a younger generation, most of
the "new" AI stuff like neural networks, evolutionary computing, etc. wasn't
_that_ new any more by the time it was taught. But "classical AI" (as in
symbolic AI and rule-based systems) always struck me as rather dumb and
inefficient after being shown the power of probabilistic systems like EC and
neural networks. So in that sense I don't think classical AI will ever make a
comeback, and I'm convinced that's probably a good thing.

I mean, the only example we have of solving extremely complex problems is
nature, and nature just doesn't work with symbolic systems and rule-based
inference. It uses probabilistic systems which interact with each other in
feedback loops.

I, for one, welcome our new non-deterministic overlords.

------
abhikshah
Interestingly, one of the cofounders is Dileep George, previously the CTO and
cofounder of Numenta.

------
giardini
From the article

"if you can make a vision system that’s just as good as a dog..."

Not quite my idea of "The Real Deal". And that's within a 5-year plan.

~~~
snikolov
Dogs' visual systems are pretty sophisticated. Trying to mimic one of those
first allows one to simplify things somewhat while getting a lot of insight
into the human visual system, which operates on basically the same principles.

------
yters
NFLT implies AI is logically impossible. Good way to short the stock market.

~~~
ebiester
Considering NFLT isn't on the first page of google, would you mind expanding
that acronym?

~~~
haliax
If the parent means the No Free Lunch Theorem, then the point is mistaken.
That theorem says that you can't improve the performance of a classifier on
some objective function without making it worse on another -- in other words,
all algorithms have identical mean performance when averaged over all
possible objective functions.

The reason this doesn't mean that human-level AI is impossible is that we too
are designed (well, evolved by natural selection) to perform well on a
particular objective function: one in which, say, the standard laws of
physics/optics apply. Optical illusions illustrate that our performance on
this objective function is not perfect.

Moreover, you can see a human being's performance on a different objective
function by, for example, trying to recognize objects in pictures which have
been scrambled according to some predefined method (e.g. shuffle the pixels
but use the same random seed each time). Each scene will still convey the same
amount of information about the objects in it, but it'll be pretty tricky to
recognize the objects.
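The fixed-seed scrambling described above can be sketched as follows (a hypothetical toy version operating on a flat list of pixel values; the seed fixes one permutation that is applied identically to every image, so no information is destroyed, even though the result is unrecognizable to us):

```python
import random

def scramble(pixels, seed=42):
    """Apply the same fixed pseudo-random permutation to every image."""
    indices = list(range(len(pixels)))
    random.Random(seed).shuffle(indices)  # deterministic given the seed
    return [pixels[i] for i in indices]

def unscramble(scrambled, seed=42):
    """Invert the permutation -- proof that the information was never lost."""
    indices = list(range(len(scrambled)))
    random.Random(seed).shuffle(indices)  # regenerate the same permutation
    original = [None] * len(scrambled)
    for new_pos, old_pos in enumerate(indices):
        original[old_pos] = scrambled[new_pos]
    return original
```

Because `unscramble` recovers the image exactly, a learner trained on scrambled data faces the same information content as one trained on normal images; it is only our visual system, tuned to spatially coherent scenes, that fails on them.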

~~~
yters
"The reason this doesn't mean that human-level AI is impossible is that we too
are designed (well, evolved by natural selection) to perform well for a
particular objective function: one in which say, the standard laws of
physics/optics apply."

That's an assumption no one has ever given the slightest shred of evidence
for. I remain highly skeptical.

