
ARCore: Augmented reality at Android scale - transcranial
https://www.blog.google/products/google-vr/arcore-augmented-reality-android-scale/
======
objclxt
> ARCore will run on millions of devices, starting today with the Pixel and
> Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million
> devices at the end of the preview.

That's...not great? For comparison, ARKit on iOS is going to support 400
million devices at launch (very rough numbers: ARKit runs on any new iPhone
Apple's released over the past two years - iPhone 6S/SE/7 - and they sell over
200 million a year). Hardware fragmentation is a tough problem to solve.

~~~
JosephLark
The exact same thing stuck out to me as well. Mostly because they had to start
the article with this sentence:

> With more than two billion active devices, Android is the largest mobile
> platform in the world.

So they'll get it onto 5% of active devices, or one in every twenty. Not
great, perhaps not even good.

I've not been a fan of Android fragmentation for a while now, and have been
surprised that Google hasn't been able to do more to attack the issue. Even
when Treble launched, I asked myself... is that all? Rock and a hard place, I
guess.

I actively avoid Android devices because of the issue. My last Android device
was a tablet in 2011.

~~~
rsp1984
The purpose of ARCore is for high-end Android devices to be competitive with
the iPhone technologically, not to get on 2 billion devices, at least not now.

Therefore it's fine that they get only 5% to start. I assume it's going to
turn into 10%, then 20% quickly. For some perspective, the iPhone only has
about 15% market share globally.

~~~
s73ver_
It'll only do that if ARCore itself takes off. With such small penetration,
it's entirely likely that developers will just focus on Apple's ARKit.

~~~
rsp1984
_likely that developers will just focus on Apple's ARKit_

Most likely someone will just build and offer an abstraction layer on top of
ARKit / ARCore and solve the problem that way.
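
A minimal sketch of what such an abstraction layer could look like, in Kotlin. All the names here (`ArBackend`, `Pose`, `FakeBackend`, `placeObject`) are made up for illustration, not a real library; an actual version would wrap the real ARKit and ARCore session APIs behind per-platform implementations of the same interface:

```kotlin
// Hypothetical cross-platform AR abstraction; all names are illustrative.
data class Pose(val x: Float, val y: Float, val z: Float)

interface ArBackend {
    val name: String
    fun hitTest(screenX: Float, screenY: Float): Pose?
    fun addAnchor(pose: Pose): Int
}

// App code targets the interface; an ARKit- or ARCore-backed
// implementation would be selected per platform at build time.
class FakeBackend(override val name: String) : ArBackend {
    private val anchors = mutableListOf<Pose>()
    override fun hitTest(screenX: Float, screenY: Float) =
        Pose(screenX, 0f, screenY) // pretend the ray hit a floor plane
    override fun addAnchor(pose: Pose): Int {
        anchors.add(pose)
        return anchors.size - 1 // anchor id
    }
}

// Platform-independent app logic: hit-test, then anchor at the hit.
fun placeObject(backend: ArBackend, x: Float, y: Float): Int? =
    backend.hitTest(x, y)?.let { backend.addAnchor(it) }

fun main() {
    val backend: ArBackend = FakeBackend("arcore") // or "arkit" on iOS
    println(placeObject(backend, 0.5f, 0.5f)) // prints 0 (first anchor id)
}
```

This is essentially the pattern Unity's cross-vendor VR plugin layer (mentioned elsewhere in this thread) applies at much larger scale.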

~~~
Benjamin_Dobell
Google have already provided this themselves, plus hooked it up to a 3D
rendering engine:

[https://github.com/google-ar/three.ar.js](https://github.com/google-ar/three.ar.js)

That's assuming you're okay with JS, but if you're going cross-platform it
makes sense, especially if you need interactive scenes.

~~~
pjmlp
The problem with WebVR is that, just like WebGL, it is capped compared to what
is possible in actual OpenGL ES.

~~~
Benjamin_Dobell
I've been developing WebGL software for a client for the last few years. My
client has had quite good success selling content based on the software, and
consumers are using the content. The vast majority of browsers fully support
WebGL (1).

The biggest performance bottleneck is the JS itself; specifically, stock
three.js is quite inefficient. However, that's just a matter of optimising
three.js, or using WebGL directly if you need more fine-grained control. You
can get a lot done in a browser these days.

~~~
pjmlp
I have seen impressive WebGL demos, but for some reason WebGL always makes the
GPU sweat more than plain native code, driving my fan at full speed, even
though it offers less.

Maybe it is a consequence of using JS; I have never researched it.

------
sidlls
Occasionally I'll write an app for my kids or wife. Every time I'm thoroughly
impressed by the Apple development ecosystem and thoroughly disgusted by
Google's for Android.

This is no different. The Android development process is painful (the most
verbose, cruft- and boilerplate-filled Java), cumbersome to organize and build
(Gradle is terrible, and buggy), and to debug (the integration with Studio is
just clunky). About the only thing Google does better is testing releases
through the developer console.

It's nice to see them finally providing something similar to ARKit. I just
wish they'd work on all the other things that make Android development a
horrible experience.

~~~
prophesi
Eh, they've both got their pros and cons.

iOS:

\+ Swift

\+ Fewer screen sizes to worry about

\+ Fewer iOS versions to worry about

\+ Xcode is much lighter on resources

\- Mac only

\- Xcode might crash every now and then

\- Probably need an iOS device, as the simulator is very slow

\- $100/yr developer fee

Android:

\+ Kotlin

\+ Studio runs everywhere

\+ No recurring developer fee

\+ More stable IDE

\+ Decent emulation

\- Countless screen sizes to worry about

\- Lots of Android versions to worry about

\- $25 one-time developer account fee

\- Android Studio is resource heavy

\- NDK requires lots of JNI boilerplate

Though, with all of this AR stuff, I'd just go the Unity/Unreal route, as it
will probably be very game-y and such.

~~~
pjmlp
Android:

\- Developer fee of $25 if you plan to actually publish to the store

\- Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable
experience with Android Studio, or configure it to run in laptop mode

\- NDK requires lots of JNI boilerplate to call about 80% of Android APIs

~~~
com2kid
> \- Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable
> experience with Android Studio, or configure it to run in laptop mode

I'd disagree with the Xeon bit; I have a 6-year-old Sandy Bridge quad core,
and Android Studio runs butter smooth.

I'll confess to the 16GB of RAM and an SSD, though. Although honestly, an SSD
nowadays is required for anything to be usable.

Android Studio is amazingly performant, though, and the emulator is great,
ignoring bugs and glitches and the occasional times it just stops working
until I flip enough settings back and forth that it starts working again.

Of course a huge benefit is that I don't need Apple hardware to develop for
Android.

~~~
pjmlp
I'd also rather develop for Android, but Android Studio's resource
requirements made me appreciate Eclipse again.

Apparently AS 3.0 will be better in that regard.

~~~
com2kid
> I'd also rather develop for Android, but Android Studio's resource
> requirements made me appreciate Eclipse again.

There is a reason my dev machine is a desktop: better keyboard, better
monitor, better performance. It's a 6-year-old machine that cost about $1500
and performs better than the ultraportables a lot of people try to press into
service for writing code. Even with a faster CPU, thermal throttling is a
concern once the form factor gets below a certain size.

~~~
pjmlp
We don't get to choose what we get.

Usually the customer's IT assigns hardware to external consultants.

~~~
com2kid
Ah, interesting. When my team used external consultants, we did the inverse:
we gave the consulting company a beefy requirements list and told them anyone
sent to work for us must be at least that well equipped.

Paying by the hour, we were heavily motivated to minimize compile times. :)

------
nobbis
I commented here on Tango 3.5 yrs ago: "remains to be seen if Google can
persuade cellphone manufacturers to include 2 special cameras + 2 extra
processors in their future devices." Looks like that was the case.

It appears the ARCore API is well designed and 1:1 feature-equivalent to
ARKit, i.e. VIO + plane estimation + ambient light estimation. The APIs even
share a lot of names, e.g. Anchor, HitTest, PointCloud, LightEstimate.

Now that stable positional tracking is an OS-level feature on mobile, whole
sets of AR techniques are unlocked. At Abound Labs
([http://aboundlabs.com](http://aboundlabs.com)), we've been solving dense 3D
reconstruction. Other open problems that can be tackled now include: large-
scale SLAM, collaborative mapping, semantic scene understanding, dynamic
reconstruction.

With Qualcomm's new active depth sensing module, and Apple's PrimeSense
waiting in the wings (7 yrs old, and still the best depth camera), the mobile
AR field should become very exciting, very fast.

------
eco
It seems very odd that this comes out and seemingly replaces Tango. Google
spent a lot of time going over new Tango features in the latest Google I/O
keynote, and Google Lens, which was featured quite a bit, seems to rely on
Tango and its depth-sensing hardware for the "VPS" stuff.

Also, when Clay Bavor was talking about Tango-supported devices, he remarked
that the devices were getting smaller and smaller, then implied it was coming
to smaller, more traditional devices. I took this to mean they were close to
getting the sensors ready for wide deployment, but I suppose it could have
just meant they were ditching the sensors because they felt the software was
good enough.

I'm kind of disappointed. I'd hoped that he was saying Tango sensors would
show up on the Pixel 2 (which was a long shot, given the leaked photos don't
really show the many sensors you see on current Tango devices). Instead we
have what feels like a rushed-out me-too to match ARKit.

~~~
randyrand
This does not replace Tango, at least not technically. Tango has a depth
camera, which is not replaced here. The depth camera is useful for a variety
of applications.

I hope Tango will continue to be developed. It is more robust for position
tracking, can do 3D scanning, etc.

~~~
DiThi
> and can do 3D scanning

Very badly. I'm very disappointed by Tango in this regard.

------
cocktailpeanuts
Looks like a lot of people on this thread think Google's goal is to beat Apple
on features, but in my opinion that's not the case.

Google really has nothing to lose by following iOS's lead. It's good that they
"gave up" on Tango and decided to follow ARKit, because that means Google is
not trying to beat iOS with Android, but trying to commoditize iOS.

You really can't beat Apple at its own game, it's best to let go of that
foolish goal and focus on trying to nullify whatever leverage Apple has with
their few years lead.

Sure, ARCore won't be installed on a lot of devices now, but in a couple of
years it probably will be. (This is not the same as the Android ecosystem
currently being fragmented, because AR provides an entirely new type of UX and
will be significant enough for people to get a new phone.) As long as Android
gets there, Google will have achieved its goal: commoditize AR.

In the end, Apple will have made tons of money with their iDevices and Google
will NOT have, but Google will have gained enough of an AR user base to use as
leverage. Everybody wins.

------
iainmerrick
It's funny how much marketing speak these big companies feel obliged to cram
in. "At Android scale" -> "to catch up with Apple's ARKit".

It's actually impressive that Google is able to change direction and get this
software-only AR out the door so quickly to compete with Apple, but they still
don't want to admit that's what they're doing.

~~~
cromwellian
Maybe having been developing an advanced AR platform since 2014 has something
to do with the speed at which they can carve out and subset a "light" AR
experience :)

------
rsp1984
Also it looks like Google is retiring the "Tango" brand [1].

[https://techcrunch.com/2017/08/29/google-retires-the-tango-b...](https://techcrunch.com/2017/08/29/google-retires-the-tango-brand-as-its-smartphone-ar-ambitions-move-wider/)

~~~
bhouston
Tango has technically been a failure in terms of the specific AR hardware: no
adoption and no real software support.

Apple's purchase of MetaIO and its focus on just SLAM is really the right way
to go. Maybe improve it a bit via specialized hardware when available
(progressive enhancement in a way), but at least start with SLAM.

Google didn't have to be behind on AR at this point in time if they had
ditched the focus on Tango hardware and instead focused on SLAM.

But that is water under the bridge, Google is now on the right track after
being forced to do so by Apple.

~~~
cromwellian
I think ARKit-style SLAM will turn out to be a fad; there'll be a lot of
interesting toy apps, but I don't think AR without environmental understanding
or persistence offers much beyond that. The basic ARKit demos we've seen are
the same stuff we've been seeing for a while now, demoed with third-party
libraries.

Including depth-sensing HW _was_ the right solution, but Google doesn't have
its own popular smartphone as a forcing function. I predict eventually Apple
will include a depth camera, or they'll use dual cameras to try and synthesize
it, and once that happens, all Android manufacturers will follow suit.

If AR is to be useful, it's got to be a lot better at tracking and drift, at
making sense of the world, and at supporting occlusion and mapping.

~~~
MBCook
ARKit seems like the near-term future. It's useful and no longer requires as
much expertise, since it's available in the OS and is well documented. For a
software library it's surprisingly accurate.

The Tango model of using extra hardware is probably much better, but seems
further off. The software model works today on existing devices and lets
people see how this is useful. Once you have that, convincing people it's
worth the money to have the hardware added to the phone becomes easier.

Given how many Android phones are lower cost than the flagships, was Tango
ever going to be very popular? Apple could have forced the issue (like
Lightning or the headphone jack), but people could always switch Android OEMs
to get something cheaper if they don't see the value.

~~~
dragonwriter
> but people could always switch Android OEMs to get something cheaper if they
> don't see the value.

The kind of people that buy flagship Android phones would probably either see
the value or be price insensitive enough not to switch over it.

OTOH, “works ok now” is often more important than “works better later”, so
getting something out that will work with today's flagships has value even if
Tango would be practical down the road.

~~~
cromwellian
ARKit is a perfect example of "worse is better". It's quite obviously inferior
to the full Tango demos with occlusion and room mapping, and to HoloLens, but
it's simple enough to excite the imagination, and to enable "fake AR"
experiences like Pokemon Go.

~~~
MBCook
That's what I was trying to get at. Once people get a taste, I think the
demand for more capable solutions like Tango will be much higher than it would
have been otherwise.

When Pokémon Go came out I was very disappointed to find its much-hyped 'AR'
was really just rendering on top of a live camera image with no tracking at
all.

The demo of the ARKit version from WWDC is what I had been expecting.

------
AndrewKemendo
This is something I have been personally pushing the Google AR team on for at
least a year, well before ARKit came out. I'm glad to see that ARKit made them
actually move on this.

Google had been dead set on pushing Tango hardware to OEMs in the hope of
lowering the BOM cost of the hardware. Everyone who has been in AR long enough
knew that wasn't going to happen, and that monocular SLAM in software was the
way forward on mobile.

The key thing now for AR devs is that they will have fairly comparable
monoSLAM capabilities available on both Android and iOS for their apps.

HOWEVER, that just means the tracking portion of the equation is solved for
developers. A few years ago it was possible to make a cross-platform monoSLAM
app if you used a handful of tools like Kudan or Metaio. Obviously ARKit and
ARCore are going to be more robust with better longevity; however, the failure
of uptake of AR apps was not because of poor tracking, it was because there is
an inherent lack of stickiness to AR use cases on mobile. That is, they are
good for short, infrequent interactions, but rarely will you need to use the
SLAM capabilities of an AR app every day, or even multiple times a week.

This is why I am so invested in WebAR, because you can deploy an AR capability
outside of a native app and the infrequent use means it can have longevity and
a wider variety of users.

Yes, for those apps that people use all the time it will be very valuable, but
if you look at the daily-driver apps like FB, IG, Snap, etc... they are
already building their AR ecosystems on their own SLAM. All this does is lower
overhead for them. For the average developer it doesn't solve the biggest
problems in AR.

Kudos to Google, but developers need to really understand the AR use cases,
implementations and UX if they want to use these to good effect.

------
wyldfire
Wow, this Dance Tonite [1] [2] thing looks pretty interesting.

[1] [https://tonite.dance/](https://tonite.dance/)

[2] [https://www.blog.google/products/google-vr/dance-tonite-ever...](https://www.blog.google/products/google-vr/dance-tonite-ever-changing-vr-collaboration-lcd-soundsystem-and-fans/)

------
leoharsha2
Even with ARCore and the new ML system in Oreo, Google can’t match iOS,
because the install base of Oreo is nothing now and won’t be over 20% for
another 2 years. Apple's ARKit is going to bring a whole new swath of
exclusive apps to iOS. These APIs currently can’t be recreated on Android,
which means most apps won’t be able to be ported with all features, if at all.
It’s becoming harder and harder for devs to be cross-platform, and Google is
falling behind Apple.

~~~
cromwellian
I see it as precisely the opposite. It seems like the tendency to engage in
platform wars obscures the larger issue that this is all going to settle down
and converge over time and nothing Google or Apple is doing right now will be
the final form of AR.

Remember early 3D in the 90s? We had the S3 Virge VX, Voodoo 3dfx, PowerVR,
Rendition Verite, Matrox, TNT, etc. They had a huge disparity in capabilities,
fillrates, and APIs, and most didn't support OpenGL; even 3dfx -- the card
closest to what games settled on as a minimum set of functionality -- only
supported Carmack's miniGL. Early DirectDraw and Direct3D were horrendous, and
to get performance, games had to be ported to each card's proprietary APIs.
Effectively, Quake and Unreal became the Unity of their day, offering a
higher-level abstraction for building cross-platform titles until the cards
all converged on OpenGL.

And converge they did. Eventually most cards offered similar fillrate,
multitexturing, and fixed pipeline options, the market settled on a common
hardware featureset, and then competed on price and performance.

Later, programmable shaders disrupted the market again, and we went through
iterations of pixel/vertex shaders from 1.0/1.1/1.2/1.3/1.4 to 2.0 to 3.0,
then GLSL, and finally something like CUDA.

I think we're going to see the same thing happen in mobile, and whatever
fanboys propose as some kind of insurmountable advantage will turn out to get
commoditized if it becomes successful. For example, if AR takes off, or if
Apple adds a depth sensor and Tango-like functionality takes off and a huge
startup market and VC funding coalesce around it, then roughly 1-2 years later
every Asian OEM will have Android devices with depth cameras and similar
functionality.

The only reason for the discrepancy today is the hardware fragmentation. But
the market follows the money and abhors a vacuum. Hardware convergence in
capabilities always follows, and eventually developers end up with middleware
to address it.

This does lead to "iOS first" for startups, but if you look at the App Store
and Play Store today, practically every major game and app you want is
available on both platforms. It'll take years for this to shake out, but if AR
becomes huge, smartphones in 5 years will all have roughly the same set of
features.

P.S. My own opinion is that a phone's viewport is too small for a great AR
experience. It's a nice initial experience and visually impressive, but will
quickly become tiring. The long-term form of this has to be some kind of
glasses, because waving a phone around in all directions and holding it in
midair while touching the UI is kind of awkward.

~~~
leoharsha2
> My own opinion is that a phone's viewport is too small for a great AR
> experience. It's a nice initial experience and visually impressive, but will
> quickly become tiring. The long-term form of this has to be some kind of
> glasses, because waving a phone around in all directions and holding it in
> midair while touching the UI is kind of awkward.

I'm 100% certain that's what Apple is preparing for. AR in a phone is a neat
toy, a gimmick. The most perfect AR toolkit ever made still won't change the
fact that you're holding a phone in your hand, interacting with it through a
screen, etc.

~~~
MBCook
I think you're dead on.

A number of people have speculated that this is an attempt to get some
real-world trials, and to have apps already built, so that if they announce
some sort of HoloLens thing in the future the software/devs are already 80%
there.

------
pier25
Apple and now Google are making it easier to produce AR apps, but the tech has
been there for years. I made my first AR demo some 8 years ago (on a laptop)
for a big event I was working on.

IMO, AR on smartphones and tablets is a fad that in 2 years nobody will care
about. Remember all those gyroscope/accelerometer-based games? Yeah, me
neither.

Maybe AR will be awesome when someone (Apple? Microsoft?) releases a pair of
lightweight glasses that can produce stereoscopic images superimposed
seamlessly over reality, but we are still very, very far away from that.

~~~
joezydeco
But AR glasses/implants will be the evolution of smartphones. There won't be a
quantum leap from phone SoC to glasses SoC. Google Glass was an early
demonstration of that.

So if one needs to evolve AR hardware from phones to glasses, then putting it
on the phones is a prudent next step, isn't it?

~~~
pier25
> So if one needs to evolve AR hardware from phones to glasses, then putting
> it on the phones is a prudent next step, isn't it?

Time will tell, but if we are, let's say, 10 years away from good AR glasses,
what difference does it make if today's smartphones can display AR content?

Obviously Apple (and now Google) are fighting in the marketing space, not the
technical one.

In truth, the problem is really hardware, not software.

~~~
dharma1
I don't think it will take that long to have lightweight additive-projection
glasses with some sort of camera/depth sensor and eye tracking.

The processing power and battery for that form factor will not be there for
another 10 years, but offloading processing and power supply to the phone in
your pocket via well-designed tethering is conceivable.

I think we'll have this in a couple of years, and you'll control it via voice
recognition and hand gestures.

------
bhouston
Sweet! This is amazing and I was hoping this was going to happen sooner rather
than later.

How long until they update the ChromiumAR project with support for ARCore, and
when will that be available to preview? I know that tons of people are waiting
on that:

[https://github.com/googlevr/chromium-webar](https://github.com/googlevr/chromium-webar)

~~~
kakali
Is this what you're looking for?
[https://developers.google.com/ar/develop/web/getting-started](https://developers.google.com/ar/develop/web/getting-started)

~~~
bhouston
Thank you! I love Google for making the web a top priority on par with
Android, whereas with Apple it is an unloved step-child.

------
abhisuri97
If someone could make a React Native binding for both ARCore and ARKit, that'd
be super amazing and would make the barrier to entry for AR apps much lower.

~~~
divbit
I will attempt to make a shitty, hacked-together version :)

~~~
monkmartinez
That's the spirit! hahaha

------
aylmao
Now, someone just needs to build an AR library that abstracts the
functionality of these two through a common API.

~~~
moron4hire
I would be very surprised if Unity didn't have it up and running in one or two
months. They already support ARKit, Windows Holographic, and Vuforia more or
less natively. Also, given the ground-level work they've done to enable
support for VR without directly dealing with vendor-specific plugins, adding
just one more is probably not that big of a deal.

~~~
EddieRingle
This looks to ship with Unity support, as well as Unreal.

~~~
Ologn
Yes.

There is a lot of discussion here about how Apple has a head start on
developer commitment with ARKit.

What will actually happen for the majority of games targeting AR is people
will write it in Unity (or perhaps Unreal), and then set it to compile for iOS
and Android.

The S8 was the top selling Android phone this year, so this can be rolled out
immediately to phones. I just tested out the sample app on my Pixel. As time
goes on, the percentage of Android phones with this capability will increase.

ARKit does not work on the iPhone 6 or earlier, or the iPad Air 2 or earlier.
It can roll out to a greater percentage of Apple devices right now, but
Android has a larger overall market share anyhow. Two years from now, I expect
AR on iOS and Android to be fairly on par (of course, we have to see how the
two stacks measure up against one another).

~~~
thenomad
Where did you find the sample app? I've been looking for it with no luck.

~~~
Ologn
I compiled it. I followed the instructions here -

[https://developers.google.com/ar/develop/java/getting-starte...](https://developers.google.com/ar/develop/java/getting-started)

Actually, I followed them somewhat - I never opened Android Studio, I did it
all on the command line.

Note you have to install two APKs: the ARCore service APK, which you have to
download from them, plus the one you're compiling with Gradle or AS.

------
dep_b
Very interesting. Just paging through the docs, the library doesn't seem very
hard to use at all. The devil might be in the details, and it's hard to say
how rushed it was after ARKit, but they already had the parts and bits
required for it in some form or another.

------
forkLding
Is there a difference in the core technology underneath ARCore and ARKit? Just
generally curious.

~~~
bangonkeyboard
ARCore is based on Tango which was derived from Flyby which was acquired by
Apple and turned into ARKit.

~~~
bhouston
I thought ARKit was derived from metaio's technology?
[https://techcrunch.com/2015/05/28/apple-metaio/](https://techcrunch.com/2015/05/28/apple-metaio/)

~~~
bangonkeyboard
Informed conjecture from [https://medium.com/super-ventures-blog/why-is-arkit-better-t...](https://medium.com/super-ventures-blog/why-is-arkit-better-than-the-alternatives-af8871889d6a):

> Ogmento was founded by my Super Ventures partner Ori Inbar. Ogmento became
> FlyBy and the team there successfully built a VIO system on IOS leveraging
> an add-on fish eye camera. This code-base was licenced to Google which
> became the VIO system for Tango. Apple later bought FlyBy and the same
> codebase is the core of ARKit VIO.

> ...

> I don’t have any hard insider confirmation on this, but I think the Metaio
> codebase would have helped with the plane detection and probably helped with
> the mapping/relocalization pieces of the visual tracker. FlyBy had by far
> the best Inertial tracker on the market, and it’s this piece that makes
> ARKit magic (instant stereo convergence & metric scale in particular).

------
TeeWEE
I ran the sample app on my Galaxy S8 and it's a bit slow sometimes, but it
tracks tables well. Floors, not so much.

Anybody know where I can find more APK sample apps to test?

~~~
drcode
Hi, where is this sample app you tried? I don't see it listed in the OP
announcement anywhere...

~~~
tacomonstrous
It can be compiled from the linked GitHub repository.

------
keredson
Also check out: [https://venturebeat.com/2017/08/28/8th-wall-raises-2-4-milli...](https://venturebeat.com/2017/08/28/8th-wall-raises-2-4-million-for-augmented-reality-tools/)

Supports Unity, and works on both iOS and Android out of the box. (I'm not
affiliated, just a supporter.)

~~~
skue
Very curious how they pitched this to investors... “We’re building a platform
geared to low end devices that will become obsolete within a couple of years.
Invest now, and be part of our team’s amazing journey towards acquihire!”

~~~
leohart
LOL. My guess would be "All ARKit and ARCore bases are belong to us". 8th Wall
XR works on all the ARKit and ARCore devices. Why would you want to create an
AR app twice when you can create it once? For free.

~~~
s73ver_
But why wouldn't I use something proven, like Unreal or Unity in that case?

------
hammerandtongs
I'm glad to see this, and I'll enjoy experimenting (probably via the A-Frame
AR API), BUT:

What are the useful applications for AR outside of verticals?

I've not seen anything compelling in the phone-only incarnation.

The headsets have a lot of engineering issues, i.e. many years to overcome.

Even with headsets, it's unclear what value there is in adding the visual
clutter and noise that most ambient/immersive computing demonstrations seem to
assume.

Whatever value you can add generally requires constant headset wear for it to
be ready to hand. This puts even harder engineering problems on the industry,
as it forces super-light and easy headsets (Google Glass was not AR, nor a
technical path to it).

Not seeing it yet.

~~~
dmitriid
Headsets have nothing (or little) to do with AR. You're probably confusing AR
and VR.

There are tons of use cases for AR:

\- [https://storify.com/lukew/what-would-augment-reality](https://storify.com/lukew/what-would-augment-reality)

\- [http://www.madewitharkit.com/ideas](http://www.madewitharkit.com/ideas)
and their twitter
[https://twitter.com/madewitharkit](https://twitter.com/madewitharkit)

~~~
hammerandtongs
Having seen most of those proposed in one form or another in the past, my
response is about the same.

Can't see any of them being worth putting a headset on.

Can't see any of them being worth launching an app on a phone to stare through
a camera at.

Some of them need some pretty next-level AI.

~~~
criddell
The heads-up display on a car windshield seems pretty useful.

The rest are pretty underwhelming. Lots of things that make great demos, but
not many that feel like I would come back to them.

------
hellofunk
Do I understand correctly that one big difference between AR on Android vs iOS
is that the next iPhone will have advanced 3d sensing abilities that are
currently years away on Android phones?

------
euyyn
It can be seen from the video (from the way they avoid it) that the
"augmentation" is always superimposed on the "reality". I.e. someone can't
walk in front of the virtual objects you put on the table.

Is that a limitation of ARKit too?

What would it take to make it "real 3D"?
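
Roughly speaking, "real 3D" occlusion needs per-pixel depth of the real scene, so the renderer can keep the camera pixel wherever the real surface is closer than the virtual one. A toy Kotlin sketch of that compositing rule (the function and values are illustrative; real depth values would come from a depth sensor, which is exactly what phone-only ARKit/ARCore lack):

```kotlin
// Depth-based occlusion: composite the virtual pixel only where the
// virtual surface is closer to the camera than the real one.
// Depths are in meters; colors are packed RGB ints.
fun compositePixel(realColor: Int, virtualColor: Int?,
                   realDepth: Float, virtualDepth: Float?): Int =
    if (virtualColor != null && virtualDepth != null && virtualDepth < realDepth)
        virtualColor
    else
        realColor

fun main() {
    val person = 1.2f     // someone standing 1.2 m from the camera
    val virtualCup = 2.0f // virtual object placed on a table 2 m away
    // Without depth, the virtual cup is always drawn on top; with depth,
    // the nearer person correctly occludes it and the real pixel wins.
    println(compositePixel(0xAAAAAA, 0xFF0000, person, virtualCup))
}
```

Without that depth input, superimposing is the only option, which is presumably why the video avoids the situation.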

~~~
dragonwriter
> It can be seen from the video (from the way they avoid it) that the
> "augmentation" is always superimposed on the "reality".

I think you mean “inferred” rather than “seen”, if it is an assumption based
on avoidance, and there are other explanations. While HoloLens is better
equipped than phone-holding software AR to avoid this, the one time I did get
to use one there were some glitches when the “augmentation” should have been
obscured by the “reality”. If ARCore handles that in principle, but is
currently annoyingly glitchy in practice in its preview-quality state, you
might reasonably avoid it in demos.

~~~
euyyn
In live demos yes, but you would totally want to show it in a pre-recorded
demo like this one.

~~~
dragonwriter
Not if it was glitchy enough that you couldn't reliably get a good take.

Either “doesn’t have that feature” or “feature is currently in poor state to
demonstrate” is a reason to avoid demoing it.

~~~
euyyn
You don't need to reliably get a good take to produce a video; you need one
good take.

If it's bad enough that it won't look right even after taking the best of a
large number of attempts, that's as good as the feature not existing for the
purpose of my question.

------
rsp1984
I'm a bit skeptical about the performance, to be honest: great tracking for AR
requires careful selection and tuning of cameras and IMUs (inertial
measurement units -- essentially MEMS gyro + accelerometer).

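To make the tuning point concrete: sensor-fusion weights depend on the noise characteristics of the specific gyro and accelerometer. A complementary filter for a single tilt axis is about the simplest example; the 0.98 constant below is an illustrative value, exactly the kind of number that has to be recalibrated per camera/IMU combination:

```kotlin
// Minimal complementary filter for one tilt axis: trust the integrated
// gyro short-term (smooth, but drifts) and the accelerometer-derived
// angle long-term (noisy, but drift-free). ALPHA is the per-device
// tuning constant; 0.98 is an illustrative value, not a recommendation.
const val ALPHA = 0.98

fun fuseAngle(prevAngle: Double, gyroRate: Double, dt: Double,
              accelAngle: Double): Double =
    ALPHA * (prevAngle + gyroRate * dt) + (1 - ALPHA) * accelAngle

fun main() {
    var angle = 0.0
    // Device actually held still at 10 degrees: the gyro reports a small
    // bias (drift) and the accelerometer reports a noisy 10 degrees.
    repeat(500) {
        angle = fuseAngle(angle, gyroRate = 0.01, dt = 0.01, accelAngle = 10.0)
    }
    println(angle) // converges toward ~10 degrees despite gyro drift
}
```

Production VIO uses far more elaborate filters than this, but the calibration burden is the same in kind: get the constants wrong for a given sensor batch and the tracking drifts or jitters.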
Apple has very tight control over their components, so they can do this, but
managing this across a million OEMs and device models (as in the Android
ecosystem) is close to impossible.
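To illustrate why that tuning matters, here is a toy complementary filter, a standard way to fuse a gyro with an accelerometer (this is a generic sketch, not ARCore's actual tracker). The blend weight `alpha` and sample period are arbitrary example values; in practice they must be tuned to each IMU's noise and drift, which is exactly what varies across OEM hardware:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyro rate (smooth but drifts over time) with an
    accelerometer-derived angle (noisy but drift-free). `alpha` trades
    gyro trust against accel trust and must be tuned per IMU."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Integrate a gyro with a constant 0.05 deg/s bias; the accelerometer
# term keeps pulling the estimate back toward the true angle of 10 deg.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.05,
                                 accel_angle=10.0, dt=0.01)
```

With the wrong `alpha` for a given sensor, the estimate either drifts with the gyro bias or jitters with accelerometer noise, which shows up directly as virtual objects sliding around the scene.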

Tango tried to solve the problem by specifying a software and hardware stack
for OEMs to use, but now it looks like Google is just too jealous to let Apple
have a good time with ARKit, hence the "me too".

~~~
cromwellian
Hence it is available only for the Pixel (which Google controls) and the
Samsung Galaxy S8 (a large partner), because those can be precisely
calibrated.

Why is it Google "me too"? Tango was released in 2014. The basic plane
detection functionality that's in Tango is derived from the same mechanism
that Apple uses. Facebook released an ARKit-like library at their conference
before ARKit was even announced.

When Apple is late to the party, it seems people say "it doesn't matter if
you're first, Apple waits till it's 'ready'", but when Apple is perceived to
have done something first, suddenly everyone accuses Apple's competitors of
being thieves and copying.

~~~
eridius
> _When Apple is late to the party, it seems people say "it doesn't matter if
> you're first, Apple waits till it's 'ready'"_

Generally people say that in response to everyone accusing Apple of copying.

~~~
cromwellian
That's mostly a reaction to Apple's history of accusing others of copying and
its litigious look-and-feel lawsuits.

Remember "Redmond Start Your Copiers!" That was an official WWDC banner hung
from the rafters by Apple, not some fanboys. Steve Jobs frequently gave
interviews accusing rivals of copying, and then copying with the excuse "Good
artists copy; great artists steal". All of those years of going on the
offensive against everyone, has created a tendency of the other side to look
for hypocrisy now.

There's something pretentious, deeply hubristic, and lacking in humility in
Apple's marketing that I think has fanned the flames of these fanboy wars. In
a way, their marketing reminds me of the way Trump talks, only with a larger
vocabulary. Replace "Great!", "Bigly!", "SAD!" with "Beautiful", "Amazing",
"Breakthrough". It's just continuous repetition of superlatives, even for
minor features.

~~~
eridius
I think your quotation is very important here.

> _" Good artists copy; great artists steal"_

The point of this quotation is that "good artists" merely copy other people,
which is what Apple is talking about when they hang banners like "Redmond
Start Your Copiers", but "great artists" 'steal', meaning they take the good
idea and transform it into something better. And this is very much how Apple
works, which comes back to your original pseudo-quotation, _"it doesn't
matter if you're first, Apple waits till it's 'ready'"_. Apple doesn't just
blindly copy what other people are doing. They 'steal' the ideas and turn
them into something great before releasing them.

~~~
cromwellian
Apple has done both, as has Microsoft. Microsoft didn't just blindly copy
everything Apple did. As much as I hate the Microsoft of the 90s, they did
innovate too.

Apple doesn't blindly copy? What do you call Apple Music then? Ping? What did
Apple do to innovate in that space above and beyond Spotify? There's a long
list of UI features Apple cribbed from Android, WebOS, and Windows Phone that
ended up (IMHO) as inferior copies, where the copy didn't actually improve on
(make great) the original.

In what way does Apple Photos improve on the quality of Google Photos'
deep-learning-based categorization in a way that is user-visible and
noticeable?

~~~
eridius
> _Microsoft didn't just blindly copy everything Apple did._

Not everything. "Redmond Start Your Copiers" was in response to some very
specific copying over the previous year, though I don't remember the details
anymore.

> _What do you call Apple Music then?_

I call it a subscription model for the iTunes Store. Subscription models have
been around for a long time, and even though Spotify is the poster child for
applying them to music, I don't think it makes sense to say Apple is copying
Spotify (or anyone else) by having a subscription music service; it's the
logical evolution of paid music services. You could certainly argue that
Spotify proved the customer demand was there (as well as the ability to
convince the labels to go along with it), but it's not like Spotify invented
the concept.

> _Ping?_

An unmitigated disaster. But I'm not sure who you think that was copying. I
can't think of any pre-existing service like Ping.

> _What did Apple do to innovate in that space above and beyond Spotify?_

Provide a seamless "it just works" experience across all Apple devices,
including integration with their customers' existing iTunes libraries, with
Siri, and with the forthcoming HomePod. And I think it's fair to give Apple
Music credit for iCloud Music Library as well, which is great.

> _There's a long list of UI features Apple cribbed from Android, WebOS, and
> Windows Phone that ended up (IMHO) as inferior copies, where the copy
> didn't actually improve on (make great) the original._

Can you elaborate?

> _In what way does Apple Photos improve on the quality of Google Photos'
> deep-learning-based categorization that is user-visible and noticeable?_

Apple Photos does it all on-device.

Also, I really don't think you can claim that using machine learning to
classify photos is something that Google owns. It's been an obvious idea for
literally decades, it's just taken until now before it was feasible to do.

~~~
euyyn
So when Apple merely copies instead of "stealing", it was an obvious idea that
had been around for a long time. It's the logical evolution to what they
already had. It just wasn't feasible to do until the moment Apple copied it.
Got it.

~~~
eridius
Your sarcasm is not appreciated nor warranted. If you disagree with any of the
specific cases I talked about, feel free to tell me why I'm wrong, but merely
being sarcastic about it isn't helpful.

~~~
euyyn
Ok I'll bite:

\- If something wasn't feasible to do until the moment Apple copied it from
others, why didn't Apple do it first?

\- Have there been innovations by companies that compete with Apple?

\- Have they ever taken an idea from Apple and made it better?

~~~
eridius
> _If something wasn't feasible to do until the moment Apple copied it from
> others, why didn't Apple do it first?_

I really don't know what you mean by this.

> _Have there been innovations by companies that compete with Apple?_

Where is this line of questioning going? I feel like you're trying to accuse
me of saying that nobody but Apple is capable of innovation, which is
nonsense.

> _Have they ever taken an idea from Apple and made it better?_

I'm sure someone has. I don't really keep track of that sort of thing though,
so I don't have any examples off the top of my head.

~~~
euyyn
> > If something wasn't feasible to do until the moment Apple copied it from
> > others, why didn't Apple do it first?

> I really don't know what you mean by this.

You've answered Apple copying others with "it was an obvious step forward,
just infeasible before", which fails to explain why it was infeasible for
Apple until after it became feasible for others.

> I feel like you're trying to accuse me of saying that nobody but Apple is
> capable of innovation, which is nonsense.

It is a strong predictor of how you responded to the examples that were
raised. The obvious question after seeing that is "is there a counterexample?"

~~~
eridius
I said automatically classifying users' photos to search was only feasible to
do recently. It didn't become feasible for others first, it became feasible
for everyone at about the same time (well, I suppose it was feasible for
Google slightly earlier because Google's doing it in the cloud where they have
more computing power available, versus Apple being limited by the computing
power of the iPhone, but this appears to be such a relatively small difference
that it doesn't really matter).

~~~
euyyn
It was only feasible to do recently, in the sense that only recently did
Google develop (and publish) the ML techniques that made it feasible. It
wasn't some inevitability brought to us by Moore's law.

~~~
eridius
This isn't my area of expertise, but I thought the recent ML boom was kicked
off by ImageNet, which came from the CS department at Princeton, and the
ImageNet Large Scale Visual Recognition Challenge? Google's been a big player
in ML recently, but they're not the only ones researching this.

~~~
euyyn
ImageNet is a manually-annotated database to train models. Google had the
Image Search corpus internally too.

The advances in image and speech recognition of recent years are due to
innovations in deep learning by Google, Microsoft, NVIDIA, Stanford, the
University of Toronto, CMU, and many others. I didn't mean to imply it was
all Google's doing, rather that Apple wasn't there, which is why "just
waiting for it to become feasible" sounds like apologetics.

Then there's also innovation in making a product out of the new capabilities,
or integrating them into an existing product. I think dismissing that as
"everybody was just waiting for the technology" is not realizing that, in
hindsight, all products look obvious.

------
yohann305
Google took the names of the two best features of iOS 11 and combined them:
ARKit + Core ML = ARCore

Anyone else here get it?!

------
designcode
Exciting to see competition here

------
mempko
My real table is messy enough. Now I can make it messy digitally too!

------
dharma1
Will this work on both Qualcomm and Exynos variants of the S8?

------
rubatuga
What’s with the majority of the shots being cropped, or not showing the
object moving completely in or out of the frame? Seems to me like it’s
potentially hiding some visual defects.

------
solotronics
I can't wait to dance with a hotdog.

------
bozoUser
How does this compare to Apple's ARKit?

~~~
MBCook
Hopefully someone will make a good write up.

From seeing ARKit examples that people have posted to Twitter the thing that
has impressed me the most is the ability of ARKit to track your position even
if you turn around and walk around, even all around in the office. I hope
Google's version can do that as well because it seems like it would enable
some really fun activities.

------
nkkollaw
I can't tell if there's an app one can download to try this?

Doesn't look like it, huh?

~~~
mikeevans
There's a sample you can download and run on compatible hardware:
[https://github.com/google-ar/arcore-android-
sdk/tree/master/...](https://github.com/google-ar/arcore-android-
sdk/tree/master/samples/java_arcore_hello_ar)

~~~
nkkollaw
Thanks.

------
appimonster
Google's ARKit

------
0xbear
I'd rather they rewrote the Camera2 API, which is the most horrible API I've
seen in my 20+ years in this profession. It's so bad one might think it's an
elaborate prank, but no, Google really does expect you to use it to interact
with cameras. That's why all photo apps on Android are so ridiculously bad
compared to iOS.

~~~
RivieraKid
Yep, it's horrible. The Activity + Fragment API is pretty bad too. And the
worst was the first version of the in-app billing or GCM API (I don't
remember which one). You had to copy hundreds of lines of code for a hello
world.

------
komali2
Haha, check out the commits on their github for three.ar.js:
[https://github.com/google-
ar/three.ar.js/commits/master](https://github.com/google-
ar/three.ar.js/commits/master)

> Build and increment to 0.1.1

> jsantell committed 26 minutes ago (failed)

....

> Fix linting

> jsantell committed 24 minutes ago (success)

edit: aww come on folks it's all in good fun

~~~
jsantell
release days are fun

~~~
LeoNatan25
"Nazi" linting rules are not.

