Hacker News
How ARKit 2 works, and why Apple is so focused on AR (arstechnica.com)
93 points by awat 32 days ago | 86 comments



AR I think needs killer apps and utilities. The AR measuring app is a good example if it works accurately.
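The measuring idea boils down to raycasting two screen taps onto detected planes and taking the distance between the resulting world-space points. A minimal Python sketch of that last step (the points here are made-up stand-ins for what a phone's plane hit-test would return, not real ARKit output):

```python
import math

def measure(p1, p2):
    """Euclidean distance between two 3D world-space points (metres).

    In a phone AR session these points would come from raycasting
    screen taps onto detected planes; here they are plain tuples.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Two hypothetical hit-test results, one metre apart on the floor plane.
print(measure((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # → 1.0
```

The hard part in practice is not this arithmetic but the accuracy of the plane detection feeding it, which is why "if it works accurately" matters.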

Other AR apps/ideas I think would be killer are...

- AR historic view: wave your phone or point glasses at a location and see how it looked years ago via pics & a 3D model, or maybe a mix. Make me feel I stepped back in time, with a Disney-like feel/experience to it.

- AR night time/day time view: At nighttime view in glasses how it looks during the day

- AR Linkedin/Facebook: Know the stranger's name you're talking to and all their public iNet info via glasses view


This is a few years old but covers pretty much every good AR use case. It's my go-to reference whenever someone asks what the use for AR is.

https://uploadvr.com/augmented-reality-use-cases-list/


Amazing list, thank you for sharing!


Interestingly, it was against the TOS to build the AR Linkedin/Facebook app using the Google Glass API.


It was against the TOS to do pretty much anything with the Google Glass API. They forbid both charging for apps and showing ads. (And then wondered why no one wrote them a killer app.)


I hope Google learned from the Google Glass fiasco, and makes sure their ARCore Terms of Service Agreement strictly forbids posting photos of yourself in the shower.


Just toss a Monero miner in the background :P


Whose terms of service? Google?

Follow up question: Why?


Too creepy.


I'm excited to see your first point articulated by someone else! I prototyped an app just like that for a group assignment when ARKit first came out. Being a Swift dunce I never got it anywhere near market-ready but the idea seems obviously appealing to me. Hopefully someone else puts it together.


I keep thinking: infinite monitors, of arbitrary size, anywhere. But we're gonna need full-on wireless glasses for that.

Phone-based AR seems pretty useless.


Forget wireless glasses, any AR system capable of rendering "infinite monitors of arbitrary size" will require so much processing power and bandwidth that anything wireless will be out of the question for the foreseeable future.


Well, if it can tell exactly where you're looking (and can update fast enough), it only needs to provide enough detail at the exact spot you're looking at.
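That idea is foveated rendering: spend detail only where the gaze lands. A toy Python sketch of the budget decision, with invented tier names and thresholds (real systems blend resolution smoothly rather than using hard tiers):

```python
def detail_level(pixel, gaze, fovea_radius=100.0):
    """Pick a render-resolution tier from the distance to the gaze point.

    Crude stand-in for foveated rendering: full detail only near where
    the eye tracker says the user is looking. Thresholds are invented.
    """
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < fovea_radius:
        return "full"       # native resolution in the fovea
    if dist < 3 * fovea_radius:
        return "half"       # parafoveal region: a quarter of the pixels
    return "quarter"        # periphery: cheapest shading
```

The catch the parent hints at: the eye tracker and the renderer have to complete this loop faster than a saccade, or the user sees blur exactly where they just looked.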


That would be amazing! But you're right in that we'd need vastly improved technology before we can get there.

I tried oculus desktop with the rift, and all I ended up with was a blurry, unreadable display and a headache. We have a long way to go.


My brother got lenses in his Oculus and watches TV inside the virtual theatre a lot. It was really sharp and nice for me too. Wouldn't use it to fully replace my screen, but it's really not that bad.


> AR Linkedin/Facebook: Know the stranger's name you're talking to and all their public iNet info via glasses view

A killer app indeed; it'd kill AR. At least I like to think we aren't so far gone that as a society we can't see how wrong that would be.


I am apparently gone because I've always wanted that - in fact it's the only intelligent-assistant application concept I've heard which is appealing. What is so wrong about it?


When you can instantly see the salary data and criminal history of everyone you meet, we can very quickly spiral into dystopia.

It's also likely inevitable. Every time you tag someone in a Facebook photo you are helping this happen.


> we can't see how wrong that would be

I don't think most people will object. Here are some excuses that you can pick from to feel better about our dys... world.

"It only works when someone is near me, and I'd have more things to worry about if someone wished harm on me and was sitting next to me."

"Well privacy is dead anyway, and I get to play candy crush with friends for free! Personal data is worthless anyway."

"Even if they can change who I am given enough personal data, who cares? I'm already shaped by the people around me, what's wrong with adding companies to the mix?"

"China is doing it, and China is ahead of us in everything. We must copy them."

(These are all more or less from actual discussions I've had about privacy.) Take your pick.


Personally I think these ideas are great and it's hard for me to see your post as anything more than hand wringing. I'm guessing you are hinting about something like FindFace?

If so, that ship has largely sailed. The same advice as before holds: curate your social media presence and have the guts to turn down contacts that you don't want.


You think people want Facebook and LinkedIn, of all things, over a new tool? Those are the two social networks with the lowest brand value to end users (as opposed to advertisers and recruiters). This seems more likely to lead to the demise of the public social networks feeding into the AR than to kill AR itself. Just gtfo social media with your real name: problem solved.

Obviously this would be a corporate wet dream if both movements continue.


It would be OK if it were voluntary - you upload your face and choose what information about you is shown - but not if someone scrapes the entire internet to collect as much information as possible and makes an app out of that.


Not in regular life, but at a conference it would be great.


But it would ruin all the fun of unknowingly lobbing all your incoherent beginner's ideas at some leading expert of the field that you failed to recognize. Isn't that what conferences are for?


This would be invaluable for people like me with prosopagnosia. Also people with early-stage dementia, I suspect.


Back in the old days, we had to wear buttons that said "Hi! I don't remember your name, either!"


The Killer App for AR (and Google Glass) is called iNARcissist, which makes other people talk about you.


Seems like AR is a lot more approachable than VR in that you don't have to strap on so much gear to immerse yourself into a Virtual Based / Enhanced reality. Even Snap seems to be invested in AR if you think about it, how many snaps don't feature some augmented visuals? If people go crazy over that to the point that Facebook, and even iOS steals that type of AR functionality, there must be more to AR than meets the eye.

To a computer geek it may not seem impressive now, but for the average person it casually sneaks into their everyday life.


It seems as if VR is one of those things where getting to 90% isn't very useful for most people and even if you get to essentially 100% (wireless, usable with mainstream processing power, comfortable lightweight goggles), it may not be all that interesting outside of some niches like certain types of gaming or virtual worlds. A lot of the time people don't want to be immersed.

AR, on the other hand, is very amenable to incremental development. You don't need a Google Glass 2.0 with all the technical and social challenges associated with that. A phone that you can point at something and get information about it is already useful.


I think VR is forever destined to be a niche sideline. Seems like I've been hearing that VR is about to take off every few years since the 90s. It resolutely fails to do so and aside from some "wow" at tech demos and games there's no killer VR app that makes it a must have.

AR is much more instantly obvious to all with things like Night Sky and augmented maps.


In some ways, you could almost argue that VR (and immersive experiences generally) has garnered less interest as it's become closer to reality. The Kinect is gone. The Wii fad is over. The VR devices that do exist have pretty much highlighted the difficulty of creating fully immersive experiences, especially in a typical home environment. And, as you say, there's really been nothing that makes the typical or even not so typical person go "I gotta get some of that VR."

I suppose we may get there someday but it may also be one of those things that it seemed logical to assume people would want (and many thought they did), but they mostly don't.


Seems to me, good AR takes as much or more compute work than good VR. You say "oh it runs on any old phone" but with the exception of a way to strap it to your face, if your phone can do really low latency image tracking then it most likely can do a simple VR scene too.

The proof is in the pudding, isn't it? We have way more VR games than AR apps, don't we?


Definitely. I think Apple really envisions something like this... https://www.youtube.com/watch?v=YJg02ivYzSs ...being the future, and they want to get ahead of it.


I think that video is more of a warning about the dangers of cognitive overload, cybersecurity, and estrangement.

It looks impressive whilst also being oppressive.


>It looks impressive whilst also being oppressive.

That's an apt description of the technological future in general...


They offer fundamentally different experiences, so I'm not sure we should be grouping them together as much as we do. It's like saying "It seems like cars are a lot more practical than boats" - but they're intended for different things.

There might be some crossover in some of the technology and how you consume it, and the end game may be AR eyewear/contacts that can do proper virtual reality, but for now they're quite far apart from each other.


You say that, but I'm not sure whether that's empirically true or just your own personal anecdote. Mainstream sub-$200 VR headsets exist today and people use them daily. AR is fun, but I can't point to AR actually selling anything.

Even Animoji is a special case for Apple. They had the face-unlock tracking, so this kind of face-tracking feature makes sense. It's not general-purpose AR, though. Does Apple have any official app features using AR that aren't face tracking?

Apple is hyping AR more because they don't have any VR to speak of but in terms of what's actually released and what people are using I could see an argument that VR comes out ahead in that comparison.


> I can't point to AR actually selling anything

Maybe you personally can't, but that doesn't mean it doesn't. Our company increased a client's purchase rate, consistently by 1.89% in a sales cycle, A/B tested over hundreds of thousands of AR user instances. So yes, it works.

Where implemented well, there is a measurable shift in user behavior.
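A lift that small is only distinguishable from noise at large sample sizes, which a standard two-proportion z-test makes concrete. A Python sketch with made-up visitor counts, reading the claimed 1.89% as an absolute bump from an assumed 10% baseline (only the 1.89% figure comes from the parent; everything else is invented):

```python
import math

def lift_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is a conversion-rate lift distinguishable
    from noise at these sample sizes? Purely illustrative."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return p_b - p_a, z, p_value

# Made-up counts: 10% baseline vs. an absolute +1.89pp with the AR feature.
lift, z, p = lift_z_test(10_000, 100_000, 11_890, 100_000)
print(f"lift={lift:.2%}  z={z:.1f}  p={p:.1e}")
```

At 100k users per arm the effect is unambiguous; at a few hundred users it would vanish into the noise, which is why the "hundreds of thousands of user instances" part of the claim matters.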

> Apple is hyping AR more because they don't have any VR to speak of

No. It's because they want to own the AR mapping and content landscape which are both required for glasses to work.


You're right, I personally can't name it, but I'd be interested to know what app has hundreds of thousands of AR users browsing products. What app would this be?

>No. It's because they want to own the AR mapping and content landscape which are both required for glasses to work.

This also makes no sense. They don't own the text or image content-creation landscape, and they still make computers and browsers. AR glasses already exist (although they're quite expensive and cumbersome), so a new proprietary asset format might be nice, but it's not a requirement. What are you saying here? Just that they're pushing AR to force adoption of USDZ?


To the first question, I'm under NDA for the company we worked for but it's a F50 retailer.

Other than them, Ikea, Amazon, Wayfair and a handful of other e-retailers with MM+ users have AR integrated into their apps with comparable bumps.

> This also makes no sense.

It makes perfect sense if you have been an AR developer for 10 years. The simple version is that you need a 3D "google maps" of sorts, to do something called re-localization, before you release glasses. Bootstrapping such a system wouldn't get you very far, literally, because people will expect persistent content wherever they go without you - the user - having to insert it. So you have to have a pretty robust mapping corpus to start with. There are a half dozen ways to do this, but the best way is to source and create these maps with mobile/handheld AR systems first.

To the content piece, if you can "save" content to the real world - in context - which is what ARKit2 is now doing with their multi-user re-localization/multiplayer, then you can build out the "real estate" system you need for persistent objects - again, something necessary for AR glasses to "just work."
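The re-localization step being described can be caricatured as matching the features the camera sees now against features stored in a prior map. A toy Python sketch with short invented descriptors (real systems like ARKit's world maps match thousands of high-dimensional visual features and then solve for the camera pose; none of this code is ARKit's actual method):

```python
def relocalize(query_descs, map_descs, max_dist=0.5):
    """Match the current frame's feature descriptors against a stored map.

    Brute-force nearest-neighbour matching on short vectors; returns
    (query_index, map_index) pairs for features close enough to count
    as re-observations of the stored map.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for qi, q in enumerate(query_descs):
        best = min(range(len(map_descs)), key=lambda mi: dist(q, map_descs[mi]))
        if dist(q, map_descs[best]) <= max_dist:
            matches.append((qi, best))
    return matches
```

If enough matches land, the device knows where it stands relative to the stored map, and content anchored to those map features can be re-displayed in place - the "persistent content" the parent describes.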

For more you can watch my talk on this from a few years ago I did remote for UMASS EE graduate students:

https://www.youtube.com/watch?v=m_0aoE9nrqM

This is the current land rush in the AR dev world - Apple and Google already own it unfortunately.


The first killer app for AR was probably Word Lens (translating images of words in real time) and that debuted in 2010. Then Pokemon Go 6 years later. I can't think of anything else that has caught on. Seems like a really exciting paradigm, but no one has figured out anything useful to do with it.


Has Word Lens ever caught on, though? Honest question; I don't have any data, but I have never used it nor seen anyone else using it.


It's now built into Google Translate (at least on my Android; no idea about iOS). It works offline, and I've used it occasionally while staying in Hong Kong. Was pretty useful.


I use it a lot when travelling (in Google Translate). It's great.


Google Translate is a great example of how incremental iterations of AR can still be useful. It's on a phone and not some fancy glasses and the translations aren't really very good but it's still really handy especially if you can't even read the script of the language in question.


It's not exactly a blockbuster but it works. I've never seen anyone use it. Then again, I've never seen anyone use Snapchat.


The problem I've always had is that situations where I'd want to use it are also situations where I have limited cellphone connectivity. Maybe it works offline now, but it didn't last time I checked.


It does work offline, if you’ve got the right language pack downloaded.

Very useful indeed. (And on iOS too, someone above asked.)


Even without translation, the ability to read signage aloud is very useful for visually impaired users.


Google bought them a couple years ago and it's now a part of Google Translate.


I used it a lot on holiday. Super useful.


I think the technology is too early to build useful stuff with. Even Word Lens is using the wrong form factor and that surely affects adoption.

As for killer apps?

Ad blocking. Reading other people's faces and body language. Maybe even an interpersonal assistant/teacher. A better way to externalize memory and achieve a perfect photographic memory. Maybe useful tools for people doing hobbies and DIY, including cooking.

Also, some platforms are useful without a killer app (like browser extensions), by solving many small/niche problems and thus helping people achieve a sense of control.


There are companies out there doing targeted industrial/corporate AR work that are not as visible.

Also, they suffer less from the bulky/inconvenient problems. If AR can save a company a lot of money, they can afford a more powerful and more expensive lightweight setup than a consumer. People are working, not doing recreation, so a slightly bulky and inconvenient headset would be put up with more etc.


Snapchat, for one.

But what you’re describing is an unrealized or early market. That’s an upside - not a negative.


Or a product without a market... which is indeed a negative.


If you want to argue Snapchat doesn’t have a market... go right ahead.


We’re talking about AR’s market, not Snapchat’s market. Snapchat is popular, sure, but is it because of AR or because of something else?


Given the most popular use of Snapchat is AR... and it’s fundamental to their user experience. Yes, yes it’s because of AR.


Not after that update in February, no. I’ve never seen a product lose such a large percent of its user base so fast, and it doesn’t seem to have recovered at all.


I get most excited about AR when I think about the physical ways we currently augment reality with ugly physical objects.

If AR becomes truly ubiquitous, I hope that signs and signals can mostly fade away. Imagine walking through a city without street signs or advertisements or stoplights, or walking through an art museum without any labels on the wall, or an airport without screens and arrows pointing you around.

I'm also interested (but also concerned) from a consumer perspective - Amazon etc. will offer Shazam for literally any object. Like her dress? Buy instantly. That house is tinted green because you qualify for a loan that could purchase it, but the newer one next door is tinted red. Follow the glowing footprints around that corner if you want the slice of cake that person is eating in the park.

Most of all, I want to take a walk in the woods and be able to identify any tree, fungus, flower or bird instantly and beautifully. AR will hopefully let us take intelligent guided tours of almost anywhere. Learn about architecture as you walk to work, trace important sites out the window of an airplane onto the ground below, see a bustling Mayan market animated over what is now a set of rectangular fields.


> I hope that signs and signals can most fade away. Imagine walking through a city without street signs or advertisements or stoplights

Will that get us a Black Mirror episode where no door or amenity will function until you put the lenses in? LargeCorp was concerned at the potential loss of trade so lobbied for a city bylaw requiring them. Or perhaps the Netflix Anon movie where all buildings are covered with 100' high virtual ads?

Don't get me wrong I'd love to get back to a built environment usually only seen in older photos where every surface isn't required to be a marketing opportunity. I just suspect somewhere along the way to this beautiful future there'll be a bait and switch. Cue uBlock for Lens(tm).


Given the current assault on advertising as a business model, I admit to being optimistic that our society is inching closer to asking corporations to stop manipulating it.


Why do you think that AR will help reduce the amount of advertisements in cities? Unless someone makes an AR adblocker, advertisers would have no reason to stop bombarding us with ads. If anything, there will be extra AR ads (targeted using data such as your location history or the stores you look at most often) on top of the existing ones.


The article is about Apple, for example, who is currently attacking ad tech on several fronts.


In that future world it’s going to be pretty tough getting around if you’re homeless and/or can’t afford AR...


We don’t have much in the way of pedestrian navigation right now, apart from crosswalks and such. But you’re right that perhaps crosswalks that cars must yield to (California style) should be the last to go.


They qualify for the free Ad Enhanced™ version.


The only thing I want to be able to do in AR is walk around historical sites like the Colosseum and hold up my phone to see what something looked like millennia ago as I walk around.


What would really take AR to the next level is something like Google glass, where you’re essentially always in AR.

The only downside is that you’d have to wear extra glasses, which might not be ideal if you already wear prescription, and that the technology is way too expensive for most people.

But overall, I don’t think an AR boom is that far away.


Well, if sci-fi is the model of where the tech is going, we'll probably be using contact lenses or implants when the tech "crosses the chasm", so to speak.


It's really interesting that Apple has put the overkill Face ID sensor in the iPhone. I'm sure future iPads will have the same too, possibly with even better hardware.

I also believe accurate tracking of the user's eyes will have a tremendous impact on the applications that can exploit AR.

AR is a tremendously promising field, and Apple has gone the easy way to capture the market and developer mindshare. Oculus Rift and HoloLens have failed to adequately capture developer mindshare to date.

Just look at the Measure app. It's a tremendously useful app. Though I know it won't be very accurate, it will satisfy most use cases for normal users.

I am excited about the future of AR apps. I can see a lot of potential applications in Engineering, Medical, Civil surveying, etc.


Not many AR apps have succeeded, only because massively-available AR hardware is not here yet. The phone with a camera is not it. You can only do so much within that paradigm.

The moment (hopefully it's coming) we have (relatively) normal-looking glasses capable of performing like Magic Leap's headset (haven't tried it; only judging by the publicly available info and investments), that's when handheld devices become obsolete and when we will see AR pick up and win.


You can do a lot with a phone and there's no social stigma associated with walking around with a smartphone. It's an open question for me whether Glassholes 2.0 become a thing even once the tech is there. (Certainly there's a role in niches like doing repairs where information can be overlaid but I'm not convinced it will become a thing when just walking around.)


Viewing the world through your phone's screen/camera lens is just not a great experience.

Having stuff just "appear" is way better. Remains to be seen how this will play out in the short run, but the fantasy is something like smart contact lenses.


Sure, it looks very cool in demos and on screen, but it was just hype and will remain hype until there is a major breakthrough, for the reasons below:

- You can't use AR all the time: results are inconsistent and don't work in every environment, unlike touch, which works every time.

- Battery and heating problems with AR on phones.

- Interaction is a major problem. Interacting with glasses and all those things is very awkward and really not comfortable.

I think we have a long way to go to enable AR.


What I find disappointing is that Apple is leveling the field against independent developers. Was there really a need for an official measurement app, destroying the market of the ones already on the App Store? What's the deal with USDZ? It looks like it will be fundamental for developing AR content for apps on iOS, and they give the market away to Adobe and their paid solution, which isn't even finished. Sure, it is an open specification so "anybody can compete", but if Adobe, with all its resources, gets such a head start, how can indies compete? These two things, to me, are signals that Apple intends only the big players to participate, and, as an independent developer, they discourage me from investing in AR development on iOS.


Does anyone know of a decent AR headset for someone to hack around with?


Project North Star from Leap Motion seems like an affordable DIY solution if you don't mind getting your hands dirty.

http://blog.leapmotion.com/north-star-open-source/


Could the arkit functions be useful for robotics?


I’m really excited by this stuff but I can’t think of a useful or interesting project to do with it. Any ideas?


Translate foreign signs in place, augment plane tickets with realtime flight details, see products in your house before you buy them... just off the top of my head.


I think the concept of “live” paper via AR will be much more popular than most expect. Tickets, like you mentioned, are a great example. Other things could be the ability to flip through different revisions of a printed document or the ability for multiple people to mark up and annotate the same paper document.


I think they missed one of the more valuable potential ones that I'd like to see, and it would be yet another extension of something Apple has made a pretty core selling point: privacy and security, specifically enhancing Face ID with a very natural coercion code type system. If a future Face ID 2.0/3.0 system is capable of fast and wide field enough facial and expression/sentiment recognition, then it could be possible to not merely have a binary lock/unlock based on face and attention, but add in expression/sentiment as something else as well, and give it other levels of transparent unlock. So it'd be possible for users to make a full unlock require a specific expression, or alternately (or additionally) have it recognize another expression and trigger other actions. This could at a minimum include disabling Face ID (similarly to pressing a button 5 times) or even triggering an erase, but even better could also leverage Apple's improvements to their granular data encryption system to "unlock like normal" except that apps and content the user had marked as sensitive would not be decrypted and simply not show up on the system in any way at all.

Of course this could be done with PIN codes too, and I hope that would be an addition, but it'd be particularly natural using faces and having the system actively look for signs of distress. Apple is in a real position to do this in an extremely user-friendly way, and it'd be a huge boon for personal privacy and security.

In fact leveraged even farther and using more data points (even just GPS and time of day) I think Apple could make a framework that could go beyond just privacy/security and into helping everyone better handle their usage of mobile devices (which is something else they are clearly at least aware of). Imagine being able to create "views", where underneath granular key release is utilized to allow a user to make any arbitrary set of apps/data be visible or unavailable, and then be able to assign arbitrary trigger conditions for what view the phone shows at any given time. It'd be possible to have a device with personal entertainment, social media, communications and such and also work related material and hobby material and create hard barriers between all them. At work entire or certain times of day entire bits of distraction would simply vanish, totally transparently and without effort, while at home vice versa could happen. While traveling everything but a few key travel apps could go. Users could set it so that financial apps could only be accessed easily at secure geographic locations. All under the control of the owner, and it'd help with self control and information overload, allowing owners to pick the right set of data to take their attention for the right setting. And of course it'd ensure that even if threatened sensitive material could have been made unavailable ahead of time.
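The "views" idea is essentially a context-to-policy rule engine. A hypothetical Python sketch, with invented view names and trigger rules (a real implementation would sit on top of per-key data decryption as described above; this only shows the policy lookup):

```python
def active_view(hour, location, views):
    """Return the first view whose trigger matches the current context.

    Each view names the apps that stay visible, plus optional trigger
    conditions (an hour range, a location); everything not listed would
    stay locked/hidden. All names and rules here are made up.
    """
    for view in views:
        hours = view.get("hours")
        place = view.get("location")
        if hours and not (hours[0] <= hour < hours[1]):
            continue
        if place and place != location:
            continue
        return view["name"], view["apps"]
    return "default", ["everything"]

views = [
    {"name": "work", "hours": (9, 17), "location": "office",
     "apps": ["mail", "docs"]},
    {"name": "travel", "location": "airport", "apps": ["tickets", "maps"]},
]
print(active_view(10, "office", views))   # → ('work', ['mail', 'docs'])
```

The interesting design question is who evaluates the rules: done on-device against locally held context (GPS, time of day), none of this needs to leave the phone.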

I'd love that, I think it'd leverage a lot of Apple's strengths and existing frameworks anyway, and that it'd be a real step forward for getting a handle on the ever increasing amount of stuff being thrown at us.


“How did you accidentally erase your phone?” “I sneezed at the wrong time.”

Apple makes secure devices but doesn’t go for the type of customizable security you’re interested in, I imagine because the number of people compelled to unlock their phone under duress is so small as to be negligible. I hope.


>“How did you accidentally erase your phone?” “I sneezed at the wrong time.”

This seems like an exceptionally and unfairly reductionist, closed-minded and bad-faith reading. I only gave that as one example option, and if you actually consider it, it's easy to imagine restrictions like "You must have backups on to enable this option". Or it could just not go that far; it wouldn't subtract from the overall feature at all.

>I imagine because the number of people compelled to unlock their phone under duress is so small as to be negligible.

I don't think that's a safe assumption at all, particularly outside the first world (or even in the first world, considering how borders are hardening). Apple, like anyone (more than many corps, in fact, given their startup-type org structure), must prioritize features and build up, and considers new things as the world changes and as their foundations continue to improve. They didn't always have FDE at all. They didn't always have the network lockdown to discourage theft. They didn't have biometric auth, until they did. They spent a lot of time and effort building up their HSM implementations; it wasn't there on day 1. They are only now, in iOS 12, apparently planning to push out a lockdown of the wired port. Etc etc.

Finally, you completely ignored how these same measures would fit into another major current public issue and just announced Apple effort: allowing people to get "digital addiction"/information overload under control. There are multiple compelling lines here of which privacy/security are just one angle.

[EDIT] - Edited to add an example of the above from just this past WWDC: HN discussion on "iOS 12 introduces new features to reduce interruptions and manage Screen Time": https://news.ycombinator.com/item?id=17230469

"Views" or however Apple chose to convey/brand it would be a great extension of that. "While I'm doing X, I only want to see Y" with a nice GUI is something that makes intuitive sense, it's how our lives tend to be ordered anyway IRL. GUIs got started by leveraging real world systems and symbols, and while that of course can be overdone and be too restrictive I don't think it's played out yet either at all. Lots of people don't bring work home with them or home to work but rather try to maintain a separation. Why shouldn't our digital devices be capable of automatically supporting that partitioning?


How would you ensure that device owners have 100% control of the policy?

e.g. Apple supports iOS per-app VPNs today, but refuses to make these available to individual users, only corporate users with enterprise MDM. Why can’t individual device owners route all Facebook or banking or gaming traffic through a specific VPN?



