
Correcting 360 Degree Stereo Video Capture - opticalflow
https://opticalflow.wordpress.com/2016/05/11/correcting-360-degree-stereo-video-capture-part-1-of-3/
======
Torkel
Interesting and fun to read about experiments like these! It seems very well
funded. As I read it, the conclusion is that by combining multiple cameras (and
possibly dedicated depth cameras), a depth map / 3D model can be created, and
that this is a good fit for 360 + 3D / VR.

After working for a while on live VR video, I have come to the same
conclusion. It's actually the basis for my company's (and my) current work in
the area. (Perhaps there is interest in hearing more about our approach; if so,
here is a saved 360 livestream from an event last night where we talked about
it:
[https://www.youtube.com/watch?v=BVQt0SrLzHY](https://www.youtube.com/watch?v=BVQt0SrLzHY)
starting about 15 minutes in.)

~~~
moflome
Looked like a nice meetup & thanks for sharing the video link. It was a bit
hard to view the slides from within the video, do you have a copy of your
presentation you can share as well? Thanks!

~~~
Torkel
Yeah, I know it's hard to see the slides, and I haven't put them up online,
sorry... The plan was to split the HDMI signal and put the slides etc. on a
virtual screen in the 360 video... but the HDMI splitter didn't work. It turns
out HDMI splitters have a hard job (HDMI involves two-way communication).
We'll probably implement a loopback instead, so the HDMI feed into the
computer is looped back out untouched and there's no need to split it. The
next presentation we stream should be more watchable!

------
i336_
This sounds like a company that's trying to get noticed. If their tech holds
up as well as they say it does, kudos to them, I hope Facebook sees them.

~~~
opticalflow
Author of TFA here -- it was less about getting noticed than about not wanting
to let the technology and its possibilities languish in our back pocket. It
was less relevant a couple of years ago because the 3D market kind of crashed,
but it's more relevant today because of VR.

~~~
i336_
Wow, thanks for tracking this article down and making an account to reply!

(I'm not OP.)

I see.

As someone who has no idea about VR in general, here are a bunch of ideas.
You've probably already thought of these, but here goes...

--

Figure out how you want to react to the market - do you want to be absorbed
into one company, or provide options for several? IMO, Facebook's [hyper]focus
on VR at the moment is a bit scary -
[http://i.imgur.com/fMcp2UZ.jpg](http://i.imgur.com/fMcp2UZ.jpg) - so while
you would most certainly be promised the world by FB if your tech is what
you're saying it is... well, it wouldn't take too long to see the edges of the
walled garden once you were trapped inside it, I think, because once you'd
been acquired that would be that.

I guess the main question is, how long do you think you can keep your approach
well-veiled enough that other engineers won't figure it out and clone it?
Given that timeframe, do you think merging with FB or Samsung or whatnot will
give you more money AND opportunities than if you formed your own business
entity that interacted with multiple companies? A buyout is obviously going to
provide more money; it's the opportunity quotient that needs to be considered
here.

If what you've built delivers the capabilities you're claiming, you'll need
backbone above all else to stick to your guns.

--

Since you have experience with shooting video with this setup, I say start
with that, and upload random things to YouTube. I imagine you'll get noticed
pretty quickly whatever you choose to shoot; I'm thinking a mix of "very
recognizable events for which your footage will be a useful contribution,
while not stepping on the toes of the official broadcaster(s)" along with
random candid/informal stuff to keep things from getting too serious (which I
get the impression happens real quick in startup- or startup-alike
situations).

I've constantly added the "if the tech holds up" conditional to my comments
precisely because I've not seen how this compares in terms of actual footage.
A blog writeup is great, but _you need video_. This is about video, people
need to see it!

My total exposure to VR amounts to 5 minutes with an Oculus Rift (which left me
with really sore eyes xD), and I've only seen a tiny handful of 3D movies (one
of which I watched by doubling the height of a side-by-side (SbS) video and
crossing my eyes, which amazingly _didn't_ leave me with a giant headache), so
I'm absolutely unqualified to compare the quality of 3D footage... but with
that said, I get the impression that even I would probably notice a difference
looking at the result of this system vs. current systems. But I can't quantify
even that without a reference to look at.
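(For anyone curious, the height-doubling trick is easy to sketch in a few lines of numpy. The function name and the eye swap for cross-eyed viewing are my own illustrative assumptions, not anything from the article: in half-width SbS footage each eye's view is squeezed into half the frame width, so repeating every row restores the per-eye aspect ratio.)

```python
import numpy as np

def stretch_sbs_for_crosseye(frame: np.ndarray) -> np.ndarray:
    """Prepare a half-width side-by-side (SbS) 3D frame for cross-eyed viewing.

    In half-width SbS, each eye's view occupies half the frame width, so
    every pixel is effectively twice as tall as it is wide.  Repeating each
    row doubles the height and restores a roughly correct per-eye aspect
    ratio; swapping the halves puts the right-eye view on the left, which
    is what cross-eyed free-viewing expects (assuming left-eye-first input).
    """
    h, w = frame.shape[:2]
    left, right = frame[:, : w // 2], frame[:, w // 2 :]
    swapped = np.concatenate([right, left], axis=1)  # cross-eye order
    return np.repeat(swapped, 2, axis=0)             # nearest-neighbour 2x height
```

(Whether the swap is needed depends on the eye order of the source file; a proper player would use bilinear scaling rather than row repetition.)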

I recommend going all-out and processing some 8K video, 360° video, etc.
(Upload the 8K to YT in 4K, with the original in e.g. Backblaze B2, which I
understand is cheaper than AWS.)

--

_Possibly_ let people send you footage to apply your algorithm to,
for testing / comparison purposes. This would be awesome for getting your foot
in the door with multiple clients.

Maybe footage less than N minutes or seconds is free, with a pricing structure
determined to sit just above the threshold that would shoo away people who
would be a waste of time for whatever reason. (I have no idea how this works.)

In fact, you could probably net yourself a nice cottage-industry-sized income
from post-processing people's videos for them, or make this model your entire
focus, but that's not as heady as a buyout would be.

~~~
opticalflow
All very good considerations. The "live" VR 360 capture market is really just
getting started. I like the idea of "post-processing" as a service,
especially. It might be down the road a few months though as the market for it
develops.

~~~
i336_
:)

To me, it sounds like you have the perfect head start to this whole scene. I
can see why you want people to know this exists...

Your quoting "live" there made me hesitate on the word for just long enough to
start wondering about realtime processing, which I get the impression your
system is capable of (if you were able to do SD card tests and instantly
preview the results).

Now I'm wondering what would happen if you ran your software on a box in the
cloud with a 1Gbit link, where people feed in their dual- or tri-camera raw
footage, and perfect 3D (and maybe with a hefty premium, calculated depth
info) comes out the other end - all in near-realtime. Can you manage less than
1s of latency? 500ms even? (Okay maybe I'm being too optimistic here)

Hmm, and maybe you could provide direct ingress support for broadcast
packs....

And thinking about it, depending on how CPU-intensive your system is, you
might be able to run this on relatively inexpensive hardware per 1G/2G link.
(I know EPB offers commercial 10G installation in Chattanooga for $300/mo, but
I'm not sure what their datacenter pricing is.)

I really like the idea of post-processing as a service - considering what
options are on the table, that direction seems the least complex and the most
straightforward.

(PS. Please excuse the rather non-provisional and overly
concrete/direct/instructional wording style of my previous post - my brain's a
little under the weather at the moment, and grammaring how I'd prefer to is
proving occasionally challenging.)

~~~
opticalflow
Well, I think you're on the right track. Right behind me are 30 Supermicro
pizza boxes (some with GPUs) that are racked but powered down and looking very
lonely (along with 6 dark fibers, also feeling lonely at the moment). Perhaps
I'd consider an "encoding.com for VR". Honestly, the main piece I'm missing is
real-time stitching, but maybe Facebook's open-source release of their
Surround 360 software later this summer will solve that!

