
Spatial – Collaborate from anywhere in AR - yurisagalov
https://spatial.is
======
zawerf
> Join a Spatial meeting from HoloLens, MagicLeap, VR, PC or Phone

I think cross-platform support is a really cool viral feature.

At first it's just going to be one guy leading the meeting with real full-body
tracking (think meetings with architects or mechanical engineers where you
might want to discuss and move around 3D models).

Anyone else working remotely can video-chat in with their phone (with
accelerometer & gyro or ARKit/ARCore) or Skype in on desktop.

But every time they want to discuss something that's outside a non-VR/AR
user's fixed field of view, the dude leading the meeting will have to rotate
the model for them. They will feel left out and eventually want to buy their
own AR/VR setup.

(or more realistically they will preshare their cad files first ... but the
above still sounds like a plausible future)

------
comex
Looks like a serious attempt at an ambitious concept. I love how the video
showcases what seems to be an actual working prototype, with its capabilities
and imperfections – as opposed to the usual trend of AR 'demo' videos that are
actually just mockups, and completely unrealistic ones at that (as seen with
Google Glass, Pokemon Go, and Magic Leap, just from memory).

I'm curious about the hardware. Is there a base station or two hiding in a
corner somewhere? If there is, why isn't head tracking accurate enough to
prevent "sway" of virtual objects with respect to real-world ones, which seems
to be visible in the video? But if not, how does it capture the position and
pose of the user's hands? In any case, what kinds of sensors are being used?

Unfortunately, when watching the video, what really stands out is that the 3D
"ghosts" of the other participants have juddery, unsmooth motion. Surely it
couldn't hurt to add a bit of interpolation? It would increase latency, but
not by much, given that a user's view of a different user's body pose is not
especially latency sensitive.
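The interpolation being suggested is cheap to sketch. Assuming remote poses
arrive over the network as a position plus a unit quaternion (my assumption,
not anything Spatial has documented; all names here are hypothetical), one
smoothing step toward the latest update could look like this:

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two 3D positions."""
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:            # take the shorter arc around the hypersphere
        q1 = [-c for c in q1]
        dot = -dot
    if dot > 0.9995:         # nearly identical: fall back to lerp + renormalize
        q = lerp(q0, q1, t)
        n = math.sqrt(sum(c * c for c in q))
        return [c / n for c in q]
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

def smoothed_pose(prev, latest, alpha=0.2):
    """One per-frame smoothing step toward the latest network update.
    prev/latest are (position, quaternion) pairs; alpha trades smoothness
    against the added latency mentioned above."""
    pos = lerp(prev[0], latest[0], alpha)
    rot = slerp(prev[1], latest[1], alpha)
    return pos, rot
```

Running this every frame instead of snapping to each received pose is exactly
the "bit of interpolation" trade: motion gets smooth at the cost of the avatar
lagging the sender by a few updates.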

Edit: On second look, it seems like the HMD is just a Magic Leap One, though
that doesn't answer the question of whether there's other hardware in the
room.

~~~
momoro
There's no other hardware required. It's just HoloLens and Magic Leap (plus
other devices like iPhone, laptop).

HoloLens and Magic Leap both have hand tracking, so the device knows if you
are putting your hand up (and has some sense of its position and orientation),
plus head pose.

"Sway" on device is actually more minimal than what you're seeing in the
video.

------
shafyy
I love that people are investing time and money to make this happen. From my
point of view, remote collaboration might make more sense in VR than in AR,
though. AR tech is much harder than VR, and with today's AR headsets this kind
of thing is not really usable. However, with today's VR tech, it's fun!

What is the rationale to make this happen in AR here? Is the physical location
important for remote collaboration?

~~~
rjvs
We are implementing a similar concept using VR. To us at
[http://stirlinglabs.com](http://stirlinglabs.com), the choice between AR & VR
is simply whether the object of the discussion fits into the room or not. For
our customers creating massive structures such as ships, hospitals, or
airports, a tiny scale model sitting on a desk in AR isn't particularly useful
or interesting. They are more interested in swapping their existing room for a
space in their new structure.

~~~
shafyy
Checked out your ship demos, that's an amazing use for VR :-)

------
gregmac
It'll be interesting to see how this feels for collaboration compared to
current video chat. It clearly has the advantage of allowing people to
'interact' with things in the space, but you lose the feedback aspect of
people's facial expressions.

With video, especially in a group, you can see if people are following what
you say: nodding or shaking heads, looking confused, or disengaged (or falling
asleep). In voice-only discussions, by contrast, people sometimes won't ask a
question when they're confused, presumably thinking they're the only one and
not wanting to waste others' time and/or embarrass themselves.

Maybe facial expressions can be emulated -- but there's a very big uncanny
valley to get over to make that usable.

~~~
neolander
I think the initial use case of this will be with everyone's videos in
rectangles floating in space. People are already comfortable with video. It's
much more natural to glance at a shared workspace and then up at them than it
is in a current call, where their video gets covered by another window on your
screen.

~~~
gregmac
That doesn't work well if everyone is constantly turning away from the camera,
not to mention wearing an AR headset that covers most of their face.

------
dang
There's a writeup here: [https://techcrunch.com/2018/10/24/spatial-raises-8-million-to-bring-augmented-reality-to-your-office-life/](https://techcrunch.com/2018/10/24/spatial-raises-8-million-to-bring-augmented-reality-to-your-office-life/) (via
[https://news.ycombinator.com/item?id=18292217](https://news.ycombinator.com/item?id=18292217)).

------
pj_mukh
We've been experimenting with using VR for remote meetings and it really does
work in making physical presence more relevant. I think Spatial is really onto
something but they can simplify the product even more. My main "immediate" use
cases are actually:

a) meetings while taking notes or dealing with Trello, and

b) pair programming.

So, all I want is to stream my desktop and some aspects of my physical self
(face and hand movements are enough). There is no real need to bring aspects
of individual desktop apps into the immersive space (as they show with notes
or 3D design tools); just stream my desktop and let everyone see. For pair
programming, this would need to work seamlessly for hours at a time.

P.S.: Incidentally, AR vs. VR is less of an issue here. However, if it were
real MR (i.e. I could see my real laptop screen exactly where it is) and
didn't have to stream my own laptop, that would be great.

------
Animats
Has anyone actually seen this? The only video available is clearly a fake
demo. Remember Magic Leap.

~~~
momoro
The video is real. It’s a recording of the actual software being used live.

Things that are different irl vs the video:

1: Experience irl is way cooler. We use it internally for meetings. It’s
amazing to take a photo on your phone and see it show up in the air in front
of you. It's awesome to see someone rez in and start walking around. It's fun
to put your hands up, start talking, and watch 3d models and images show up.
The playfulness and interactivity is much more exciting in person than in the
video.

2: The field of view on existing AR hardware is limited. Everyone working in
AR hears this to the point of exhaustion, but anyone trying AR for the first
time can’t help but notice it. So, the experience is different than in the
video because you can only see AR through a smallish view. This will, of
course, improve soon.

3: The video UI isn't rendered additively and has some added zing that is less
performant in real-life use (e.g. higher antialiasing).

------
stcredzero
A powerful use of low cost AR/VR, which I don't see a lot of people pursuing,
would be to 3D render building plans while displaying feeds from smartphone
cameras as "viewports" inside the rendered building. This could be used to
very quickly get information from people onsite to experts and decision
makers, with more easily displayed and digested contextual information. This
could also be used in the building of large machines.

A lot of the challenge in this sort of technical communication is conveying
the Point of View of the person onsite. We have the technology now, so why not
just render it?
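A rough sketch of the idea above, purely illustrative and not any shipping
product's API: placing a phone's live feed as a "viewport" inside a rendered
building reduces to positioning a textured quad at the phone's estimated pose,
sized to match the camera's field of view. All names and parameters here are
hypothetical.

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def viewport_quad(position, forward, up, fov_deg=60.0, aspect=16 / 9, dist=1.0):
    """Corners of a quad `dist` meters in front of the phone, in building
    coordinates, sized to match the phone camera's horizontal FOV. The quad
    would be textured with the live camera feed."""
    half_w = dist * math.tan(math.radians(fov_deg) / 2)
    half_h = half_w / aspect
    right = cross(forward, up)
    center = [position[i] + forward[i] * dist for i in range(3)]
    corners = []
    for su, sv in [(-1, 1), (1, 1), (1, -1), (-1, -1)]:  # TL, TR, BR, BL
        corners.append([center[i] + su * half_w * right[i] + sv * half_h * up[i]
                        for i in range(3)])
    return corners
```

The hard part in practice isn't this geometry; it's getting a reliable
`position`/`forward` estimate for the phone inside the structure, which is
exactly the indoor-localization jankiness discussed downthread.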

~~~
roymurdock
There are companies that have been doing this for a while. I think you are
slightly overestimating the capabilities of the technology, but largely
overestimating the demand for this stuff over the way things are traditionally
designed/built/inspected. It's not a killer use case that is forcing
architecture/design/manufacturing firms to adopt the tech or die; it's
currently still in the gimmick stage, but it is getting better slowly but
surely.

~~~
stcredzero
_There are companies that have been doing this for a while._

Can you provide links? I'd like to see what they're up to.

 _I think you are slightly overestimating the capabilities of the technology,_

No. I know how janky smartphone GPS is from personal experience, especially
inside a structure. Big companies with deep pockets are working on Vernor
Vinge's "localizers," however.

 _I think you are slightly overestimating the capabilities of the technology,
but largely overestimating the demand for this stuff over the way things are
traditionally designed/built/inspected. It's not a killer use case that is
forcing architecture/design/manufacturing firms to adopt the tech or
die...it's currently still in gimmick stage, but is getting better slowly but
surely_

As you are implying, the demand is closely related to the jankiness event
horizon. It's just like smartphones and tablets: they existed many years
before the iPhone and iPad, but until then only propeller-heads wanted them.
There's a point at which the technology matures enough that it doesn't get in
the way and is actually nice to use. At that point, it will explode.

~~~
roymurdock
Check this out if you're in the Boston area; I've been to a few AR-focused
meetups and they're pretty good at surfacing what's out there and the state of
the art:
[https://www.meetup.com/BostonAR/events/255921064/](https://www.meetup.com/BostonAR/events/255921064/)

(and/or email the organizer to get slides/company names if you're not in
Boston)

Here are some AR companies working on architecture:
[https://www.archdaily.com/878408/the-top-5-virtual-reality-and-augmented-reality-apps-for-architects](https://www.archdaily.com/878408/the-top-5-virtual-reality-and-augmented-reality-apps-for-architects)

One of the apps mentioned there, Pair, is run by Andrew Kemendo, an active
participant on HN who started his first AR company in the space in 2011 and is
very knowledgeable; check his stuff out:
[https://news.ycombinator.com/user?id=AndrewKemendo](https://news.ycombinator.com/user?id=AndrewKemendo)

In the engineering space, PTC/Microsoft (Vuforia/HoloLens) and UpSkill
(founded 2010) jump out in my mind as leaders. PTC has been doing CAD modeling
and full PLM/ALM tooling etc. forever and started to look at AR applications
in the 2013/2014 timeframe to build on their modeling, maintenance, and
workflow tracking expertise.

A common thread is that many of these companies were founded in 2010/2011 and
rebranded around the 2015 timeframe to focus on different business problems
and market their solutions as "AR".

And yeah, I agree; I think the jankiness of the hardware is still a huge
sticking point. Nobody has nailed the comfortable glasses form factor with
enough FOV/battery/processing power to really make AR take off.

------
meritt
I'm having a bit of trouble grokking how this experience works for each
participant.

I get the AR piece: people who are physically in a location see their remote
peers magically floating in the room with them while they interact with
virtual objects. That's cool.

But what do the remote people see? They don't have the benefit of AR, so I'd
imagine they don't experience the real-world environment. Are they sitting at
a desk with a headset on, or simply viewing a 3D world like they're playing
The Sims?

~~~
comex
From what I can tell, both sides are using AR goggles. They just don't show
the goggles on avatars of remote peers, which aren't real video but just posed
3D models customized to (somewhat) match the corresponding user's real
appearance.

~~~
meritt
Right, so I guess my question is: the remote users don't have the benefit of
being in the same room as the rest of the group. The AR people/objects likely
wouldn't "fit" into their physical world. Like in the initial demo video (2
guys sitting, 1 woman standing near a Kanban board) -- what does she see from
her perspective?

It feels to me like the "host" participants can use AR but anyone remote would
need to be using VR as the physical environment mapping wouldn't make any
sense for them.

~~~
excalibur
It's not explained clearly, but I believe the same AR elements are mapped into
different environments. One user's wall may not be in the same place as
another's, but they would both have the same content.

~~~
momoro
Correct. Walls are mapped to each other, so if you are looking at your wall in
location A and putting a shared screen there, it is also on my wall at
location B.

There are some gnarly issues around this, of course, but Spatial has a pretty
advanced approach that works in most cases.
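A minimal sketch of what "walls mapped to each other" could mean, as a guess
on my part rather than Spatial's actual implementation: content pinned to a
wall is stored in wall-local coordinates (right, up, out-of-the-wall), so each
room can reconstruct it against its own wall plane. All names here are
hypothetical.

```python
def sub(a, b):
    """Component-wise difference of two 3D vectors."""
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def to_wall_local(wall, point):
    """Express a room-space point in a wall's (right, up, normal) frame.
    `wall` is a dict with 'origin', 'right', 'up', 'normal' (orthonormal)."""
    d = sub(point, wall['origin'])
    return [dot(d, wall['right']), dot(d, wall['up']), dot(d, wall['normal'])]

def from_wall_local(wall, local):
    """Place wall-local coordinates back into another room's space."""
    u, v, w = local
    return [wall['origin'][i]
            + u * wall['right'][i] + v * wall['up'][i] + w * wall['normal'][i]
            for i in range(3)]

# Wall A in location A and wall B in location B: different positions and
# orientations, but treated as "the same" shared surface.
wall_a = {'origin': [0, 0, 0], 'right': [1, 0, 0],
          'up': [0, 1, 0], 'normal': [0, 0, 1]}
wall_b = {'origin': [5, 0, 2], 'right': [0, 0, 1],
          'up': [0, 1, 0], 'normal': [-1, 0, 0]}

screen_in_room_a = [2.0, 1.5, 0.0]               # a shared screen on wall A
local = to_wall_local(wall_a, screen_in_room_a)  # same spot, wall-relative
screen_in_room_b = from_wall_local(wall_b, local)
```

The gnarly issues mentioned above show up immediately: walls of different
sizes, content that overflows one room's wall, or rooms with no good matching
surface at all.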

------
joshumax
It's quite an interesting and ambitious concept, but I really hope they fix
the avatar designs. I talked to some people around the room, and the general
agreement is that it's stuck in a sort of "uncanny valley" where users look
like eerie ghosts with only half a torso and no legs.

------
smcameron
Anywhere in AR? Will this work in my cabin in Arkansas with no electricity?

~~~
excalibur
If you have an Oculus Go and a Wi-Fi hotspot, and can come up with a way to
charge them both, and you get 4G signal there... Sure, why not?

~~~
jrgoj
He was making a joke. AR == Arkansas.

------
poorman
SecondLife 2 coming soon to a basement near you.

------
will_crusher
Spatial developer here. You too can play with these demos.

[https://spatialsys.github.io/res/shots/webapp/](https://spatialsys.github.io/res/shots/webapp/)

[https://spatialsys.github.io/res/shots/webapp/room_mars_terr...](https://spatialsys.github.io/res/shots/webapp/room_mars_terrain)

~~~
8unny1337
Hey Will! Were you able to download them? They seem to be offline... LMK.
Best, B

