
Report: Smart contact lens and other ambitious projects at Verily are floundering - ravichandran_s
https://www.statnews.com/2016/06/06/google-star-trek-fiction/
======
djrogers
I think the biggest problem here is that we have too much media coverage and
scrutiny on stuff like this today. 24x7 financial news, 24x7 tech news, and
24x7 financial/tech news!

Here's the deal - these are HUGE bets, with a HUGE risk of failure, and a HUGE
reward. As a society we should be focusing on the willingness to take these
bets, and allowing them to fail without pointing and laughing like a bunch of
school kids at a science fair.

Scientific history is full of failure, and examples of failure leading to
discovery and advancement. Let's embrace it and repeat after me - "Failure is
Always an Option".

~~~
bravo22
True. However, Google PR tells the media about these bets in the first place.
It used to be the case that R&D was kept under wraps. I'm sure a number of
projects fail at Apple too, but we never hear about them because Apple doesn't
leak that info to the press.

~~~
beambot
Apple is a bad example; their secrecy means that humanity learns virtually
nothing from most experiments. Most experiments fail -- either abject
technical failures or merely commercially unviable at the time of their
development. Hiding these away is a shame. People could learn from failures
(not spending resources pursuing bad paths) or unviable explorations (revisit
them later when conditions change).

I think a more useful example is Bell Labs and their Technical Journal. The
journal involved a more significant level of rigor without the PR and media
hype.

Unfortunately, when the media follows every little happening (e.g. patent
publications)... big megacorps are left with two bad choices: let people draw
erroneous, hyperbolic conclusions ("OMG! Ads in my eyes!") -or- address the PR
and messaging head-on.

Publication may still be the solution, but the timing issues are acute. From a
business perspective, you need to be filing IP as soon as you have reasonable
evidence of a proof of concept. But by the time you have sufficient material
for top-tier journal peer review, those patents are already being published.
The solution _might_ be (again) something like the Bell Labs Technical
Journal... but that provides a lot of fodder for competitors and patent
trolls. So it's a tough balance!

Either way... I applaud large organizations taking big technical risks and
bets.

~~~
bravo22
My post was in response to: "I think the biggest problem here is that we have
too much media coverage and scrutiny on stuff like this today". The coverage
happens because Google talks about it, irrespective of the merits of taking
risks.

~~~
beambot
Yes... I know. And I'm saying that Google talks about it because patent
publication forces their hand. They must either own it and talk about it
themselves, or let people wildly speculate when patents are published.

------
T_Fri
I think that instead of negatively reinforcing being "quixotic" (overly
idealistic) and stumbling when taking on a complex project, let's positively
reinforce the fact that they had the fucking balls to try.

In future, people will be able to use this project (including its
shortcomings) as a learning tool.

Kudos to the team for biting off more than they could chew; now we all have a
better idea of what size bite to go for next time.

~~~
CPLX
I've never really understood this logic, primarily because as a theory it's
not really falsifiable. If the concept can be applied to anything, then it has
lost all normative power.

Should we positively reinforce people who jump off cliffs convinced they have
a novel scientific theory of gravity that will allow them to fly? Presumably
not. Then presumably we should take a similar view of those who attempt things
that are similarly scientifically impossible, even if said impossibility is
less obvious to a lay observer.

~~~
embwbam
Maybe the first person to do so, sure. One of the reasons you know that never
works (besides physics :) is that people have tried it.

------
PLenz
I'm beginning to think that Alphabet (read: Google) just isn't very good at
things that aren't related to advertising. There have just been so many
failures (Verily, Nest, Glass, etc.) as they have tried to get away from
their core business - it almost looks like IBM back in the '90s as they tried
(and mostly floundered at) becoming a services company rather than a
mainframe company.

You can argue that they are trying 1000 things and 999 can fail as long as
that 1 succeeds - but I don't think this is generating a good public view of
them.

~~~
unprepare
has their car been a failure? arguably one of the best economic upsides is
being the first to have a fully road-ready autonomous driving system.

chromebooks seem to be gaining in popularity (though that may be driven by
shortsighted education purchases like the early iPad)

their broadband services seem to be incredibly popular in the areas they are
servicing (this is just anecdotal, i haven't seen any hard numbers).

But i agree, they have had some high-profile failures in recent history, and
it's possible that's a sign of the organization growing too
large/complex/whatever to be successful at exploring new product categories.

~~~
ocdtrekkie
Their car will be a failure/footnote. Car manufacturers are already doing far
more interesting things than Google is in the space. While Google is obsessed
with replacing human drivers, and telling people to just trust the car,
companies like Volkswagen have worked on self-driving UI that works with the
user, telling them before the car makes a maneuver and such. While Google
hasn't tested in snow, other car makers (like Ford) have.

Google's only real power in the self-driving car space is that Google is
rolling out Apple-level marketing on the topic. People associate "self-driving
car" with "Google" by default, because that's where all the media hype is.

But when it comes down to market time, car manufacturers know how to make
cars. They know how to make vehicle interfaces people are comfortable and
familiar with. They know how to build reliable, relatively bug-free products
that don't need a software update every week to work correctly.

------
nl
Well, I'm thankful that Google is prepared to take big bets in health science,
even when they don't work. Much better to spray money at this problem than at -
say - Nest.

Hopefully these are some of Edison's 9999 lightbulbs that didn't work.

~~~
dpcx
"even when they don't work".

While they may not "work" (read: they may not have the outcome that was
expected from the beginning), they still "work" in that they help Google (and
others) learn how to improve other things later.

It may not be the most fiscally responsible thing to do, but learning isn't
cheap when you're trying to solve the problems they're trying to solve.

------
mattfishburn
If you want to learn more about the history of non-invasive glucose monitoring
mentioned in the article, I recommend "The Pursuit of Noninvasive Glucose:
Hunting the Deceitful Turkey" by Smith:

[pdf]
[http://www.mendosa.com/The%20Pursuit%20of%20Noninvsive%20Glu...](http://www.mendosa.com/The%20Pursuit%20of%20Noninvsive%20Glucose,%20Fourth%20Edition.pdf)

------
_ihaque
Stat has covered Verily in detail in the past; this article from March argues
that the problem stems from the leadership:

[https://www.statnews.com/2016/03/28/google-life-sciences-exo...](https://www.statnews.com/2016/03/28/google-life-sciences-exodus/)

    
    
      Former employees, however, characterized Conrad in less complimentary tones.
      They said he exaggerates what Verily can deliver, launches big projects on a
      whim, and rashly diverts resources from prior commitments to the next hot 
      idea that might bring in revenue. This has led to what they describe as
      difficult meetings with business partners, and resignations by demoralized 
      engineers and scientists in the face of seemingly impossible demands.

------
LionessLover
As an IT person with only a degree in CS, and historically little interest in
anything outside IT and maybe some engineering topics, the best time I've
invested in decades was starting - completely by accident - down the medical
track.

Hundreds of lecture hours and a plethora of courses in chemistry, org.
chemistry, drug development (chemistry), statistics (with focus on medical
topics), anatomy, physiology, (some) bio-chemistry, and neuroscience later I
have learned sooooo much.

All simplistic/optimistic (or, in the case of antibiotics, pessimistic) mass-
media headlines are almost complete bullshit. In that field you MUST look at
the details, which of course requires some knowledge to understand - but
actually not all _that_ much, just some basics. I've also met quite a few
doctors by now, some of them in research, and see a common path from youthful
optimism to disillusionment. Not as in "giving up", just getting rid of
baseless optimism about what is achievable and with what effort, and resetting
goals (to much lower levels).

For an easy example, how many IT people think the brain works like a computer?
That it's "binary based"?

For illustration, here are two serious (but fun) papers written by a biologist
and by a neuroscientist, respectively:

"Can a biologist fix a radio?—Or, what I learned while studying apoptosis" \-
[http://www.cell.com/cancer-
cell/fulltext/S1535-6108(02)00133...](http://www.cell.com/cancer-
cell/fulltext/S1535-6108\(02\)00133-2)

"Could a neuroscientist understand a microprocessor?" \-
[http://biorxiv.org/content/early/2016/05/26/055624](http://biorxiv.org/content/early/2016/05/26/055624)

I think engineers benefit greatly from studying some (freshman university
level) biology, biochemistry, and physiology. In engineering everything is so
much simpler, so one tends to apply one's experience from one's own field when
looking at those fields. In reality it's a great mess. If you had to debug
source code written by evolution over more than a billion years, you'd have no
chance - no chance at all - with any of the methods you're used to. Even the
most disgusting project, messed up over decades by rotating hordes of
unqualified programmers who all couldn't care less and managers who magnified
the mess, could not remotely be as messed up.

Imagine the Microsoft Office Suite source code run through several obfuscation
tools. On top of that, the test suites only cover an incredibly tiny fraction
of potential inputs _(the number of "inputs" where a biological system fails
exceeds the ones where it works by several orders of magnitude)_ - and they go
"green" as long as the software starts and runs at all for some very small
amount of time, even if it runs badly. And it's _still_ easier to debug than
even a tiny biological system.

~~~
paulmd
What exactly is sensationalized about antibiotic headlines? My understanding
is that they are indeed massively under-funded in research and development
while also being massively abused by the livestock industry (particularly
overseas, where drugs-of-last-resort are being used on livestock) and general
practitioners (although a tiny factor in comparison). I will grant you that
the US is trying to take steps to crack down on livestock use, at least.

I don't actually know any IT people who think the brain works like a computer,
or that it's "binary based". I know people who think that neural nets attempt
to approximate the neural activity of the brain, and also that it's a pale
imitation at best, with many modes of action not properly modeled. I know
people who think that free will is a myth and that everything we think is just
chemical and electrical reactions in the brain, which is likely correct.

~~~
LionessLover
Honestly, your reply doesn't appear to be meant constructively. All I had to
do was casually scan major news sites for headlines. Here is the first one I
found when asking Google for two search words (not the first search result,
but the first one relevant to the topic - I only used two words, as I said):

BBC, 2015:

> Analysis: Antibiotic apocalypse

> A _terrible_ future could be on the horizon, a future which rips one of
> the greatest tools of medicine out of the hands of doctors.

Note the "apocalypse" and the "_terrible_" future. Of course, not
sensationalized at all...

As for part 2 of your comment all you have to do is read comment sections.

Yes, those who _ask_ are told that is not so, but how many ask?

Even "scientists": "The scientist planning to upload his brain to a COMPUTER:
Research could allow us to inhabit virtual worlds and 'live forever'" \-
[http://www.dailymail.co.uk/sciencetech/article-2879803/The-s...](http://www.dailymail.co.uk/sciencetech/article-2879803/The-
scientists-planning-upload-brain-COMPUTER-Research-allow-inhabit-virtual-
worlds-live-forever.html)

"The Singularity Is Near: Mind Uploading by 2045?" \-
[http://www.livescience.com/37499-immortality-
by-2045-confere...](http://www.livescience.com/37499-immortality-
by-2045-conference.html)

I'll stop here. And that's from people who actually thought about that
subject, in comment sections you find replies from normal people who don't
make headlines.

Remember the excited comments when the Human Brain Project (HBP) was
announced? I just looked at some comments on this very website when the
announcement was being discussed... Or take a random brain-related Slashdot
headline: "Why We Should Build a Supercomputer Replica of the Human Brain"
([https://slashdot.org/story/13/05/15/1926252/why-we-should-bu...](https://slashdot.org/story/13/05/15/1926252/why-we-should-build-a-supercomputer-replica-of-the-human-brain))
- actually from another person who _looks_ at the topic a lot and still comes
up with such ideas (never mind the comments below).

------
realworldview
I, for one, am not surprised. Are there tax advantages associated with this
type of failure?

~~~
realworldview
Down vote and no reply to a valid question. What a surprise...

~~~
dang
Please don't comment about getting downvoted. There's more than one HN
guideline about this:
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html).

In this case I imagine the downvotes were because "I for one am not surprised"
sounds glib and dismissive (and kind of braggy) without adding information.
The question that follows it is fine, although to make it seem like less of an
insinuation you could have added some (neutral) context for why you were
asking.

