
Dream launches online VR collaboration and productivity tool - idanb
https://venturebeat.com/2018/10/04/dream-launches-online-vr-collaboration-and-productivity-tool/
======
idanb
We're super excited to get this out after nearly 3 years of work by our team -
we've put together some further thoughts here:
[https://medium.com/@idanbeck/announcing-dream-754c0f374da0](https://medium.com/@idanbeck/announcing-dream-754c0f374da0)

Dream is now available in early access on the Oculus store, so we'd really
appreciate any feedback and thoughts people have. We truly believe that
immersive technologies like virtual reality can make remote work and
collaboration better than existing 2D form factors - especially as the new
standalone VR headsets like Quest come to fruition in the coming year.

Dream has been built entirely from scratch, so we got to rethink a lot of the
stack. We prioritized certain things, like networking and UI, and we're really
proud of the outcome. Doing so also meant it took us a lot longer to bring a
product to release, since there was a lot more to do - but it allowed us to
integrate WebRTC at the native level as well as chromium (by way of CEF) so we
can do things like bring up our keyboard when a user hits a text field.

Hope people like it, and want to say thanks to everyone that made it possible!

~~~
mattnewport
I'm a big believer in the potential of VR for remote collaboration and as a
fully remote VR development team we already make some use of it. Congrats on
shipping, but from what I've seen so far it's not clear to me what this offers
over Bigscreen or Oculus Dash desktop sharing. There's a bunch of
functionality I'd like to see in this space that nobody has really nailed yet,
and while this looks like an interesting project, the articles I've seen so
far don't do a great job of explaining what new functionality it brings to the
space.

~~~
idanb
Would be super interested in what kind of functionality you're specifically
looking for, or is it more that existing functionality doesn't hit the mark?
Our goal with this release is to gather great feedback so we can make Dream
better and more useful - please let us know what you'd like to see!

To respond to some of your other questions - one thing that we've noticed
isn't super clear is that Dream is not doing any form of desktop sharing. We
integrated chromium at the native layer (by way of a forked version of CEF)
and the content views are all web browsers. This allows for a level of
integration with the rest of our stack in a way that is difficult or
impossible to achieve if you're doing desktop sharing (we actually built
desktop sharing, but disabled it in the build for now until we can solve some
inherent usability problems).

We're big fans of Bigscreen, but I think they're heavily shifting their focus
toward entertainment and watching movies together in VR. Also, we had been
working on Dream for 1.5+ years when Dash was announced and were excited to
see some similar ideas there! We're trying to find ways to make VR a viable
solution for remote working and collaboration, and this has led to many hard
decisions - especially since we decided to build the entire stack. That
obviously meant it took us a lot longer to get something out there, but as a
result Dream is a lot more intuitive and seamless than you might expect.

For example, our keyboard was heavily inspired by an early Google VR
experiment we saw, but after building out a version of it we quickly
understood why it wasn't getting people to a viable text-entry solution. We
built our own collision system and "interaction engine" to allow views and
apps in Dream to respond at the event level of "touch start, touch end" -
similar to what you'd expect when building an iOS app - with the interaction
engine updating the collision / position of everything in real time
underneath. As a result, we've seen people hit 30-40 WPM on our keyboard
thanks to the tactile cues we've included (audio/haptics) as well as a kind of
activation region, which lets you really time and feel out the key press.
Definitely hard to describe this or show this in videos since it's all
happening at 90 FPS - but hey, it's a free download so give it a shot!
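
The event model is easier to sketch than to describe. Here's a minimal
illustration of the idea (all names and structure are hypothetical, not
Dream's actual engine): each key watches an activation region in front of its
surface, firing touch-start when a fingertip enters it and touch-end when it
leaves, with haptic/audio cues and the keystroke hooked onto those events.

```cpp
#include <functional>

// Hypothetical sketch of an event-level interaction layer: a fingertip
// crossing the "activation region" in front of a key fires a touch-start,
// and leaving it fires a touch-end - similar to iOS touch events.
struct Key {
    float surfaceZ;        // z position of the key surface
    float activationDepth; // depth of the region in front of the surface
    bool pressed = false;
    std::function<void()> onTouchStart; // e.g. play haptic/audio cue
    std::function<void()> onTouchEnd;   // e.g. commit the keystroke

    // Called each frame with the fingertip's current z position.
    void update(float fingertipZ) {
        bool inside = fingertipZ <= surfaceZ + activationDepth;
        if (inside && !pressed) {
            pressed = true;
            if (onTouchStart) onTouchStart();
        } else if (!inside && pressed) {
            pressed = false;
            if (onTouchEnd) onTouchEnd();
        }
    }
};
```

In the real thing an interaction engine would be sweeping collision volumes
for every view at 90 FPS; this only shows the event-level contract a view
sees.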

Dream never asks you to revert to your monitor or take off your headset -
this was a strict rule. It means that everything from logging in to inviting
someone new to your team had to be possible in VR. To accomplish this, we
created a kind of chromium integration with Dream so that we could run web
views that manipulate our engine directly. To us, asking the end user to
remove their HMD for any reason is equivalent to asking them to restart their
computer - it's really not acceptable.

Our goal is to demonstrate how immersive technologies like virtual reality can
enable remote collaboration and communication use cases. Especially in terms
of how VR, by comparison to existing 2D formats of video/voice, provides an
improved layer of presence through nonverbal communication cues.

~~~
mattnewport
Ah, ok, yeah that wasn't clear to me from the articles I saw, it sounded like
desktop sharing. I went to the BigScreen talk at Oculus Connect and yeah it's
pretty clear that their focus at the moment is on people watching video
content together in VR but we have used it with some success for code reviews
(desktop sharing is key there, I can jump between Visual Studio and the Unity
Editor).

Your keyboard sounds interesting and I'll have to check it out but to be
honest it's not a big selling point for me as a touch typist: I don't have any
trouble using my keyboard in VR. We do have some non touch typists on the team
though and it's not always convenient to put your Touch controllers down to
type so I can see it being useful.

My ideal VR collaboration app would support at least solid desktop sharing
support, well implemented VR whiteboards (including annotation on the shared
desktop) and 3D sketching like Quill / Tilt Brush. We use whiteboards and 3D
sketching in Rec Room but they're quite primitive. The sketching doesn't have
to match a dedicated sketching app but should be better than Rec Room.

It would also be useful to be able to easily import 3D assets for review, Dash
support for GLTF is looking like a good implementation of that. Custom
environments would also be useful for us so we could do collaborative design
of environments for our own VR app.

~~~
idanb
I was actually at that session as well! It was a bit surprising to hear them
take such a strong stance on shared movie watching, but it's always felt like
Bigscreen is tuned for gamer / entertainment type use cases.

Totally hear you on being comfortable touch typing while in VR, but I think
this is a pretty big barrier for a lot of users who are less comfortable in
VR. In our early experiences demoing Dream, we noticed just how overwhelming
going into VR is for people who have had little or no exposure to it. We used
to have computer keyboard pass-thru, and that's something we could add back as
we continue to iterate and make the experience better.

In terms of desktop sharing - we used to have this capability, and it's still
in the build but disabled. We pulled it back due to some inherent usability
issues that we're working on as well as performance limits on low-end
machines.

Annotation (whiteboarding on shared content or a white screen) is next up on
our road map, we just didn't have sufficient time to get it into the initial
release - so excited to hear that's something you would be looking for.
Similarly, 3D model import / review is something that we're about to tackle as
well. One of the big things we're excited about exploring is actually using
chromium to do this vs. forcing every client to download what could be a big
file, or push performance limitations on a varied set of machines. Instead,
we'd find a way to utilize WebRTC to stream the content in a way that provides
a 3D review experience for all clients with no performance limit.

On environments, we agree as well - right now we have one environment, and
have 2 others in the pipeline. In the future, it'd be great to allow for 360
stereo videos to be used as the environment or allow teams to customize their
environments if they've got the in house capabilities to do so!

Thanks for the feedback - hope you get a chance to try Dream out and share
your hands-on impressions if you get a minute as well!

~~~
blazespin
I've always thought that conferences / education / seminars were the big sell
here. Productivity didn't seem like a starter, simply because work is such a
cultural thing with all of its own stresses, and getting people to adopt
these new techniques is like pulling teeth. Students and conference-goers who
want to save the big bucks that travel, classrooms, and conference halls can
cost will see this as a huge win.

Also, the isolated / focused environment of VR could be a big plus for
learning as it blocks out so many distractions. I'd love to see a study done
around that.

~~~
idanb
I'm also extremely excited about education applications for VR, especially
those that benefit from real-time communication. For example, learning a
foreign language from a tutor who lives in that language's country of origin -
and being able to interact with them naturally, including the various
nonverbal cues that are crucial when learning a new language alongside
someone who has mastered it.

At a slightly higher level, I think VR can unlock a lot of "centralization of
expertise" type use cases - situations where a resource is naturally
distributed but is forced to centralize because of the way expertise is
consumed. For example, call centers or tutoring: if those people could
operate from wherever they happen to be while providing their expertise to
customers wherever they may be, it could be super useful for both the
providers of that expertise and its consumers.

Definitely excited to see what kind of things applications like Dream can
enable!

------
KineticLensman
Watching the video I was struck by my own experience of creating a virtual art
gallery (of real world photography and 3D renders) in Second Life.

It felt really amazing when I was alerted by a sensor in the gallery that
someone was visiting it and I could teleport back to the gallery to meet them.
Their location in the virtual space (which pictures they were stood in front
of) said something about the pictures that they liked. I could read something
about their personas from the avatar they were using (especially in
conjunction with other scanning tools). My own little gallery was just one of
many and other organisations and groups created much more impressive
interactive environments (admittedly a lot of them seemed to be for various
forms of role-play, some very unsavoury).

The promo video shows the participants in effect in a completely conventional
conference room - one screen and chairs around a table. The wider space
doesn't seem to contribute functionally at all - it's a pretty backdrop but
doesn't display more info that contributes to the meeting. So I'm curious -
could this sort of capability be used to create more dynamic interactions - or
are we limited by the tech (tethered interaction by seated people) to more
constrained situations? (please don't get me wrong - I'm supportive of the
concept - but I'm encountering pushback from colleagues and customers who
don't see the potential)

~~~
wormhoudt
I will say that we at Dream hold a pretty contrarian point of view on this.
So let me start by saying it is an opinion, but one we hold pretty strongly.
I can talk a little bit about locomotion, but this sort of philosophy applies
to many of the design decisions we made, for better or worse.

Dream doesn't allow for locomotion by design. Dream is meant to be a place
where people meet to be productive. The environments are intentionally pretty
but not distracting. The focus is on interaction with the other participants
and the shared content. We feel that removing locomotion and reducing
dimensionality is how we will make the interactions with Dream simpler,
especially for new users. Mechanisms like teleportation are super fun and
certainly add to immersion and are the right choice for all sorts of VR
experiences. However, Dream has been built from the perspective that users are
here to collaborate and then go back to real life. In that context, something
like teleportation is fun and novel the first time you use it, but the 10th
time, we feel like most users would just prefer a menu. The overall idea
being, reduce dimensionality to increase precision and simplicity.

I'm happy to hear a critique of this philosophy. Ultimately we have to create
software people love to use, and we certainly understand that we might be
proven wrong about this.

~~~
KineticLensman
I think that's a great response, nicely articulating that you are prioritising
utility over gimmicks.

------
sprash
I work in finance and talked to somebody who implemented a Bloomberg terminal
wall in VR. This would be a great application for VR since you never can have
enough Bloomberg terminals open and usually never have enough screen space.

However, he told me that VR cannot be used for anything productive right now
because of many problems. For one, only very few people can wear a VR headset
for 8 to 10 hours straight without getting serious headaches and dizziness.
The resolution has to be at least one order of magnitude higher for fonts at
meaningful sizes to be properly readable. The headsets have to be one order
of magnitude lighter. And if you want to do more than a 2D wall displayed
like a canvas in 3D, you have to solve the problem that your eyes
automatically try to focus differently at different depths, which is also a
major source of headaches.

All in all I was convinced by him that the VR technology is at least 2
generations behind what you would need for serious work. Until then all kinds
of software, SDKs and Hardware will change dramatically. Hence investing in VR
productivity software development right now is a complete waste of time and
money.

~~~
shafyy
> Hence investing in VR productivity software development right now is a
> complete waste of time and money.

Do you think advances in technology just happen by themselves? We need people
to invest time and money RIGHT NOW to get to better software, hardware, and
mainstream adoption.

------
ensiferum
I'm sorry, what's the value added here? In the demo video (on Vimeo) the team
is going over some work tasks in Trello. Considering the bad HMD resolution
and the general clumsiness of doing things such as typing, I'm really unsure
what the value added is of doing a meeting in a Dream VR session as opposed
to just a regular desktop/WebEx type of thing, possibly with desktop sharing
and voice/video conferencing. I work extensively with VR and I have a hard
time seeing the catch here. Thanks!

~~~
idanb
Thanks for the question!

It sounds like you have easy access to VR HMDs, so if you get a chance to
plug into an Oculus Rift any time soon I recommend you try it out - and would
definitely love your hands-on feedback.

We've done a fair bit of jiggering to make sure a majority of web based
content is both legible and usable - and our UI has been built to try to be as
intuitive as possible, and eliminate a lot of the bumbles that we too
associate with many VR experiences.

It's a free download, and you don't have to create an account if you don't
want to - once you download it, you'll be presented with an account sign-up /
login form where the keyboard can be used and messed with a bit. We also use
chromium for our entire login / account creation flow in VR - so you can get
a taste of what that feels like as well. If you want to try something like
Trello out, just create a throwaway account and never verify the email - then
you can pull up a website like Trello or NYT and assess the usability and
legibility.

I think that if you're coming from a place of comparing this to existing 2D
based collab tools like Skype / Zoom etc you'll have a hard time seeing the
benefit, but if instead you try to look at how those tools are insufficient
compared to a real-life meeting you might see where we fit. Our goal is not to
replace 2D based methods, but to allow for a level of presence previously only
possible with in-person interactions. This shines in particular in situations
where you're meeting with three or more people along with content at the core
of the meeting.

Hope you get a chance to try it, and would love to hear what you think and how
we can make it better!

~~~
ensiferum
Thanks for the reply. Incidentally I've also worked on a bunch of problems you
guys must have had related to the UX in VR. For example I've also implemented
a bunch of virtual keyboards ;-)

Are you using straight CEF, or have you improved the compositor to composite
directly into a texture? IIRC CEF only provides the composited web page in a
bitmap, and then you're going to have to do repeated texture uploads, which
is going to be a drag.

Does this support VIVE too or only Oculus?

~~~
idanb
Awesome to hear that! Would love to check out your work if you have a link /
video or anything like that!

We actually forked CEF and had to make a few changes to allow for integration
in the way we needed. We do use OSR mode and update a texture that way -
although we need this buffer anyway, since we're sending video frames across
the peer-to-peer mesh, so even if we did go straight to the GPU we would
still have to read the buffer back from the texture.

It's a drag, but there are a number of techniques to improve the performance.
Reducing resolution is one great approach - the resolution of the HMD makes a
high browser resolution kind of useless anyway, and dropping it also reduces
pressure on the GPU. We can also limit frame rate based on the kind of
content being shown, and leverage dirty rects to avoid re-uploading content
that isn't changing. Since we're running multiple browser tabs, the latter
technique isn't as useful for any particular page, but it makes things more
performant when a user is doing multiple things, like watching a video on the
shared screen while scrolling through Wikipedia or a news site like NYT.
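
To illustrate the dirty-rect idea, here's a minimal sketch (names are
hypothetical; CEF's actual OSR callback is `CefRenderHandler::OnPaint`, which
supplies the changed rectangles along with the full bitmap): only the changed
sub-rectangles are copied into the staging buffer each frame.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch of dirty-rect uploads with CEF's off-screen rendering.
// The OSR paint callback hands you the full BGRA bitmap plus a list of
// rectangles that changed; copying only those keeps the per-frame upload
// cost proportional to what the page actually redrew (e.g. a blinking
// caret), not the whole view.
struct Rect { int x, y, width, height; };

void copyDirtyRects(const uint8_t* src, uint8_t* dst, int viewWidth,
                    const std::vector<Rect>& dirtyRects) {
    const int bpp = 4; // BGRA, 4 bytes per pixel
    for (const Rect& r : dirtyRects) {
        for (int row = 0; row < r.height; ++row) {
            size_t offset = (size_t)((r.y + row) * viewWidth + r.x) * bpp;
            std::memcpy(dst + offset, src + offset, (size_t)r.width * bpp);
        }
    }
}
```

The same rectangle list can then drive partial texture updates (e.g. a
sub-region upload) instead of a full-surface copy every frame.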

Up until we consolidated the build for Oculus release, we supported OpenVR and
still do in our code - just not the Oculus build. We've gotten a lot of
interest in the Vive build in this initial release, so might look to
reintroduce that. Before pushing to Oculus, Dream would just launch off the
desktop and detect which HMD you had plugged in and then launch the
appropriate platform. Shouldn't be a ton of work to bring it to Steam!

------
Bobbleoxs
How possible is it for users to have haptic gloves for typing instead of using
controllers? I remain positive on VR productivity tools in future but think we
have to get flexible and creative on hardware. Personally, I would love to
have collaborative meetings with my colleagues worldwide, even just to demo
my modelling ideas on a whiteboard, which I think would be tremendously
helpful (I believe the FB keynote last year voiced the same sentiment). I
fully agree about the hardware limitations at the moment but certainly don't
think investment and work now is a waste of time. To push this area forward,
we also need to find compelling experiences that are unique to VR, like
remote presentation rehearsal, or collaborative whiteboard brainstorming
sessions for 3D design. Wearing a VR headset to work 8 to 10 hours straight
is not the answer I look forward to, at least not for now. What VR is strong
at, to me, to date: minimising the limitations that result from physical
distance and fading memories of past experiences, and its ability to create
limitless imaginary worlds that boost multi-dimensional communication.

~~~
idanb
Early on we actually built out support for Leap Motion in Dream, and this was
super cool because of the networking stack we built - we were able to send
all 20 points per hand in real time at 90 FPS at low latencies. It was really
an amazing experience, but there were a lot of issues we simply couldn't
overcome - like wrist occlusion, where your hands would suddenly fly off into
the distance, or your hands not doing what you intended due to incorrect data
from the sensors. As a product-minded company, we had to make the hard
decision to hold off on this - at the end of the day, users don't care whose
fault a bad experience is, they just uninstall your app and never come back.
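
For the curious, the raw tracking traffic involved is modest. A
back-of-envelope sketch (20 joints per hand, 2 hands, 90 FPS from the
description above; the xyz-float layout is my assumption, not Dream's actual
wire format):

```cpp
// Back-of-envelope for hand-tracking bandwidth: 20 joints per hand,
// 2 hands, 90 FPS; each joint assumed to be an xyz triple of 32-bit floats.
constexpr int joints = 20, hands = 2, floatsPerJoint = 3, bytesPerFloat = 4;
constexpr int hz = 90;
constexpr int bytesPerFrame  = joints * hands * floatsPerJoint * bytesPerFloat; // 480 bytes
constexpr int bytesPerSecond = bytesPerFrame * hz; // 43,200 B/s, ~42 KB/s uncompressed
```

In other words, even uncompressed, full hand skeletons fit comfortably
alongside a voice call - the hard part really was sensor data quality, not
throughput.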

Really excited for new HW and capabilities to become available commercially.
We built out our keyboard to be effective without anything but what's
currently available (6DOF HMD with 6DOF controllers), and we'll continue to
expand support for commercially available capabilities. Maybe it's an
unorthodox perspective, but we really only want to ship and represent
capabilities that any user can attain easily - and not tease things that are
soon to (but may not ever) come.

------
ipsum2
Name collision is unfortunate, thought this was related to Google Daydream at
first, since it uses Google Chrome to access VR.

How is this different than BigScreen? It allows people to view computer
screens (video games, browsers, movies) in VR with other people.

Edit: saw that you answered the bigscreen question earlier.

~~~
idanb
Agreed, the name collision is unfortunate - we actually incorporated before
Daydream was announced. If there are real issues / conflicts with the name,
we're the smaller guys here (our team is just 4 people), so we'll obviously
make the corrections that we need to make over time!

Let me know if my response to the Bigscreen question above is sufficient, or
if you have other specific questions about anything. Would be happy to dig
deeper into anything. Super excited to share the hard work of our team - we've
basically been quietly coding for years now, so this is the first time we
really get to talk about what we've been up to.

------
Roritharr
Isn't resolution the big killer for all of these applications? Even in the
video Trello looked really blurry, and I can only imagine what it would look
like with a Rift instead of a Vive Pro, for example.
~~~
idanb
Resolution is a big deal for sure, but because we're using chromium instead
of desktop sharing we're able to set the resolution to ensure fair visibility
for most content sources. Generally, we see people trying out Dream and
bringing up CNN or NYT and having little issue reading articles. Sure, if the
resolution of the HMD were better we could do a lot more - but we set the
parameters to optimize for content viewing, and also added FXAA to help
without hurting performance on low-spec machines.

Dream is currently only available for Oculus Rift - and the video was
actually shot with an in-engine camera that we developed and captured through
a mirror pipeline at 4K - so I think the blurriness in the video may be an
artifact of streaming. Here's a link to the Vimeo, which might let you watch
it at the 1080p resolution we scaled it down to:
[https://vimeo.com/291432708/4c32095226](https://vimeo.com/291432708/4c32095226)

We're excited to get Dream onto other HMDs, especially the mobile standalone
ones coming soon - really great that the Quest is going up to 1600x1440 as it
will make use cases like ours work even better!

