
A Software Engineer’s Guide to Unity and AR/VR Development – Part 1 - zen35
https://blog.betawave.io/a-software-engineers-guide-to-unity-and-ar-vr-development-part-1-c20ce973bf8e
======
Impossible
I hate to be that guy on HN but this article set me off as a Unity VR
developer. I understand this is an introduction for people coming from a web
development or other more traditional software engineering background, and the
code is purely demonstrative, but it's full of Unity performance anti-
patterns. Specifically...

1) Resources.Load and GameObject.Instantiate during a runtime loop. Both of
these are very expensive and will generate a ton of garbage.

2) Constantly Instantiating and Destroying GameObjects during runtime. This
will create a ton of garbage on top of the expense of the Instantiate. The
solution to this should be an object pool
([http://catlikecoding.com/unity/tutorials/object-pools/](http://catlikecoding.com/unity/tutorials/object-pools/)).
Realistically this is probably outside of the scope of this blog post, but
maybe the author should have avoided Instantiate in an introductory example.

3) GameObject.Find to locate a fixed game object in the scene. This should be
replaced with an object reference (references can be connected in the editor),
or, if that doesn't work for whatever reason, use Find once in Start or Awake
and store a reference to the object as a member variable. Unity's documentation
recommends against calling it every frame
([https://docs.unity3d.com/ScriptReference/GameObject.Find.html](https://docs.unity3d.com/ScriptReference/GameObject.Find.html)).

4) Debug.Log can be useful in a pinch, but calling it in Update or another
frequently called method can cause performance problems, and printf debugging
has never been great anyway. Visual Studio debugger support in Unity is really
good; use that.

This code is ok if you don't care at all about performance, but in a VR or AR
app, writing relatively simple code in this style can quickly add milliseconds
of frame time and make you sick.
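
To make points 1-4 concrete, here's a minimal sketch of what the fixed-up
version tends to look like. This is my own illustration, not the article's
code, and the names ("ProjectileSpawner", "Player", "Fire1") are hypothetical:

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical spawner avoiding all four anti-patterns above.
    public class ProjectileSpawner : MonoBehaviour
    {
        // 1) Assign the prefab in the Inspector (or Resources.Load it ONCE
        //    in Awake) instead of loading it inside a runtime loop.
        [SerializeField] private GameObject projectilePrefab;

        // 3) An Inspector-wired reference beats GameObject.Find every frame.
        [SerializeField] private Transform muzzle;

        // 2) A bare-bones object pool: reuse instances instead of
        //    Instantiate/Destroy churn.
        private readonly Stack<GameObject> pool = new Stack<GameObject>();

        private GameObject player;

        private void Awake()
        {
            // 3) If a serialized reference isn't practical, Find once and cache.
            player = GameObject.Find("Player");
        }

        private void Update()
        {
            if (Input.GetButtonDown("Fire1"))
            {
                GameObject projectile =
                    pool.Count > 0 ? pool.Pop() : Instantiate(projectilePrefab);
                projectile.transform.position = muzzle.position;
                projectile.transform.rotation = muzzle.rotation;
                projectile.transform.LookAt(player.transform);
                projectile.SetActive(true);

                // 4) Keep logging out of hot paths in player builds.
                #if UNITY_EDITOR
                Debug.Log("Fired a projectile");
                #endif
            }
        }

        // Projectiles call this instead of Destroy() when they expire.
        public void Despawn(GameObject projectile)
        {
            projectile.SetActive(false);
            pool.Push(projectile);
        }
    }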

~~~
amadeusw
Are these issues easily fixed by refactoring? As a newcomer to Unity, I'm ok
writing poorly performing quick code to experiment and implementing it
properly at a later stage. I'm afraid that I may burn out too quickly if my
first interaction with a platform is too concerned with patterns.

~~~
Impossible
They are easily fixed with a refactor, but many of the solutions are either
easier to implement than the example code, or not much more difficult.

------
RangerScience
VRTK
([https://github.com/thestonefox/VRTK](https://github.com/thestonefox/VRTK))
will almost certainly make your life better. It's a set of (well done)
implementations of (most? all?) the core AR/VR interaction patterns that have
been worked out so far.

I might go so far as to say it's the ActiveRecord to Unity-as-Rails for AR/VR,
although the analogy is _really_ flawed.

While I like your overall approach (a true, comprehensive intro to Unity), I
would have liked to see more about what I guess I'd call the philosophy of
putting code into Unity, covering (specifically and, IMHO, most importantly)
the type of component system they're using and the type of event system
they're using. The equivalent of describing a new language in terms of which
features from the language grab-bag the authors decided to use.
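
To gesture at what I mean, here's a toy sketch (hypothetical names) of the two
things a newcomer most needs explained: behaviour lives in components composed
onto GameObjects, and the engine drives them through callbacks dispatched by
name:

    using UnityEngine;

    // A component: attach it to a GameObject in the editor and that object
    // gains this behaviour. Composition, not an inheritance hierarchy.
    public class Door : MonoBehaviour
    {
        private void Awake()  { /* the engine calls this once, at load */ }
        private void Update() { /* the engine calls this every frame */ }

        // The "event system": messages like this are dispatched by name
        // when something happens to the object.
        private void OnTriggerEnter(Collider other)
        {
            // Cross-object communication goes through GetComponent.
            DoorOpener opener = other.GetComponent<DoorOpener>();
            if (opener != null)
            {
                // open the door...
            }
        }
    }

    // A marker component; its mere presence on another object is data.
    public class DoorOpener : MonoBehaviour { }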

PS - Pluuuuug, because dealing with JSON/YAML in C# suuuuucked:
[https://github.com/narfanator/maptionary](https://github.com/narfanator/maptionary)

~~~
zen35
Yeah, I'm all for more discussions on philosophy instead of just the mechanics
of doing X, Y, and Z. Like how does one augment existing physics to capture
their game's possibly different/modified/caricatured definition of physics?
Can one's game's interactions come primarily from emergent behavior, or does
one need to do more legwork in the game model? Tons of discussion for later,
hopefully!

Also: VRTK is awesome and JSON/YAML in C# is indeed terrible.

~~~
RangerScience
Check out the Maptionary library; I'd like feedback (and any users other than
myself). The existing stuff for JSON (particularly) is good _if_ you're
serializing into and out of data structures, but if you just want the data?
Not so nice.

------
DoctorMemory
This looks like a pretty decent tutorial for its target audience. However, as
someone who works with Unity on AR and VR, the biggest bit of advice I can
give is: learn how Unity WANTS you to work with it! Currently we are
re-writing an AR/VR test suite because the original version was written by
programmers with little or no Unity experience. This means they re-wrote scene
loading, cameras, and other parts that were already in Unity (or that no one
refactored when Unity added/updated them!). I highly suggest catching the
Udemy Unity Certified Developer course on a sale. It will give you an overview
of all the parts of Unity that you might not otherwise touch, and it will also
get you used to Unity's workflow.
([https://www.udemy.com/unitycert/](https://www.udemy.com/unitycert/))

~~~
AdieuToLogic

      Currently we are re-writing an AR/VR test
      suite ...
    

The Unity Test Tools[0] asset may be something that could assist with the test
suite(s) your team is re-writing. One tip: the TestComponents in an
integration test scene run in lexicographical order, so I enforce sequencing
when needed by prefixing their names with "1 - ", "2 - ", etc.

HTH

0 -
[https://www.assetstore.unity3d.com/en/#!/content/13802](https://www.assetstore.unity3d.com/en/#!/content/13802)

------
AdieuToLogic
A couple extra points to complement the article:

\- Boo support is all but dropped, so avoid going with that.

\- The JavaScript variant is not exactly what is in a browser.

\- For C# MonoBehaviours, the class name _must_ match the file name (without
extension) in order for the Unity Editor to see it. Namespaces are supported
but not displayed when adding a component (see the sketch after this list).

\- The Editor identifies file relationships by the GUID assigned in "meta
files" if "Visible Meta Files" is chosen under the "Version Control" entry in
the project settings. Every imported file will then have a companion file
created with a ".meta" suffix.

\- Selecting an "Asset Serialization" mode of "Force Text" will cause Unity to
use YAML for things such as scenes, prefabs, meta files, etc. This can make
using Git much nicer as well as enable use of text processing tools (such as
grep or ack).

\- When importing assets from the Asset Store, _always_ review what files the
package wants to import! Many will ship resources such as Standard Assets
which can be outdated and/or cause conflicts.

\- The built-in package management will only update or add files from a
package. This often leaves stale files lying around, which can cause
problems.
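
To illustrate the MonoBehaviour file-name point from the list above, a minimal
sketch (file path and names are hypothetical):

    // File: Assets/Scripts/PlayerHealth.cs
    // The class name below must be "PlayerHealth", or the Editor will not
    // offer it as a component.
    using UnityEngine;

    namespace MyGame.Combat // supported, and hidden in the Add Component menu
    {
        public class PlayerHealth : MonoBehaviour
        {
            [SerializeField] private int hitPoints = 100;
        }
    }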

EDIT: If using text (YAML) serialization and Git, trying to merge conflicts in
scenes (.unity) or prefabs (.prefab) will almost certainly end badly. Better
to coordinate changes between people through communication, doing a "use mine
or use theirs" conflict resolution strategy when people step on each other's
work.

EDIT-2: Here are three utility assets which I have found to be invaluable (all
are free):

\- Vexe Framework: [https://github.com/vexe/VFW](https://github.com/vexe/VFW)

\- LINQ to GameObject:
[https://www.assetstore.unity3d.com/en/#!/content/24256](https://www.assetstore.unity3d.com/en/#!/content/24256)

\- Unity Test Tools:
[https://www.assetstore.unity3d.com/en/#!/content/13802](https://www.assetstore.unity3d.com/en/#!/content/13802)

------
andybak
I've recently started exploring the world of Unity and the weirdest thing
about it is that it's 90%/10% in favour of video tutorials - even for the
purely code-based aspects.

Even Unity's own official tutorials are all in video form.

~~~
vvanders
That's just the nature of gamedev. It's a mix of code + tools to leverage the
team composition of a development studio.

Usually the composition is something like 10:70 in terms of
dev:art/design/audio/etc; videos are a much easier way to communicate
information in that domain.

~~~
andybak
> That's just the nature of gamedev. It's a mix of code + tools to leverage
> the team composition of a development studio.

Implicit in that statement is the assumption that video+audio is optimal for
teaching complex GUIs. I dispute that.

~~~
munchbunny
It's likely the highest ROI way to package it, relative to active effort, i.e.
excluding time waiting for video encoding. You record a few minutes, narrate
over it, and post it.

~~~
TheGRS
This is where I've found Lynda's tutorials great: they align the text to the
video so you can see and read at the same time. That said, I was able to get
pretty far by reading Unity In Action, which was purely a book tutorial, and I
really enjoyed it. So I don't know; I would attribute it more to a younger
audience that is more receptive to videos over written tutorials.

~~~
andybak
> I would attribute it more to a younger audience that is more receptive to
> videos over written tutorials.

Is this really true? There are some inherent advantages to words and pictures
vs video and audio that can't be hand-waved away as 'preferences' or 'learning
styles'. Has an entire demographic really chosen a medium that nullifies such
powerful techniques as skimming, copy/paste, seeking without laborious
workarounds, variable speed of consumption, etc.?

------
gfodor
This book is really good:

[https://www.amazon.com/dp/B014DIV1IO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1](https://www.amazon.com/dp/B014DIV1IO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1)

I've been using Unity for the last 3 years and that book covers a good chunk
of what it takes to make a performant game with it. Good books on Unity are
hard to come by. Also highly recommend this book if you are just learning
about game and simulation engines:

[https://www.amazon.com/Engine-Architecture-Second-Jason-Gregory/dp/1466560010/ref=pd_lpo_sbs_14_t_0?_encoding=UTF8&psc=1&refRID=E03JRN77SDF48NBKFWCQ](https://www.amazon.com/Engine-Architecture-Second-Jason-Gregory/dp/1466560010/ref=pd_lpo_sbs_14_t_0?_encoding=UTF8&psc=1&refRID=E03JRN77SDF48NBKFWCQ)

------
tezza
I am coding a VR game in Unity as a side project.

Day job is Java HFT FinTech, so I wanted a short bridging course.

There are many blogs about it, but if you really want to learn fast yet
thoroughly I would recommend an online video course.

I recently did Ben Tristem's Unity Course
([https://www.udemy.com/unitycourse/](https://www.udemy.com/unitycourse/)),
which is excellent and shows you all the tips and tricks of the trade.
Wishlist the course; there are often massive discounts.

Some non-obvious things:

\- Animation

\- Prefabs

\- AudioSources

\- Colliders

\- Scenes and Scene Management

\- localPosition

\- Quaternion.eulerAngles

\- Raycast

\- finding named objects in the scene

\- relation between GameObjects and Transforms
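
A few of those in a single hedged sketch (my own illustration, not from the
course):

    using UnityEngine;

    public class NonObviousThings : MonoBehaviour
    {
        private void Start()
        {
            // localPosition is relative to the parent Transform;
            // transform.position is in world space.
            transform.localPosition = new Vector3(0f, 1f, 0f);

            // Rotations are quaternions; build them from Euler angles (degrees):
            transform.rotation = Quaternion.Euler(0f, 90f, 0f);

            // Raycast: a Collider is what makes an object hittable.
            RaycastHit hit;
            if (Physics.Raycast(transform.position, transform.forward,
                                out hit, 100f))
            {
                Debug.Log("Hit " + hit.collider.name);
            }

            // Finding named objects (do it once and cache, not per frame):
            GameObject player = GameObject.Find("Player");

            // Every GameObject has exactly one Transform, and parenting is
            // done through Transforms, not GameObjects:
            transform.SetParent(player.transform);
        }
    }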

------
analognoise
All I want is to put on my GearVR, and have a set of floating terminal windows
and various editors.

~~~
BatFastard
That is going to be a long time coming! First of all consider what resolution
you want your terminal to be at. The phone's screen will have to be at LEAST
double that. Then there seems to be a loss of resolution in general when going
3D, just because the lenses are not perfectly aligned.

But I share your desire! 5 more years?

~~~
mileycyrusXOXO
No it isn't. You can do this on a HoloLens, HTC Vive, or Oculus Rift, albeit
with many limitations. Windows MR headsets are on preorder, offer 1440x1440
per eye, and will let you run any application on them.

~~~
T-A
Yes, but let's run the numbers.

I like to sit in front of a 24" screen, about 50 cm wide and 29 cm high, from
a pretty standard distance of roughly 70 cm. Clicking tan^-1 on my trusty
calculator and rounding up tells me it covers a horizontal angle of 40 degrees
and a vertical angle of 24 degrees.

Like the plurality of Steam users, I have a modest display resolution of
1920x1080 pixels [1]. In other words, I am comfortable with 1920/40 = 48
pixels per horizontal degree and 1080/24 = 45 pixels per vertical degree.

Humans have a binocular field of view of about 200 horizontal degrees and 135
vertical degrees [2]. To satisfy my modest resolution requirements, I would
therefore need 200x48 = 9600 horizontal pixels and 135x45 = 6075 vertical
pixels.

Current headsets do not actually cover the full human field of view, so let's
consider the current favorite, the HTC Vive. It does about 100 horizontal
degrees and 110 vertical degrees, according to [2]. That would work out to
100x48 = 4800 horizontal pixels and 110x45 = 4950 vertical pixels, i.e. not
far short of 8K UHD.
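
For anyone who wants to re-run the arithmetic, a quick self-contained sketch
(my rounding differs slightly from the round numbers above):

    using System;

    class FovMath
    {
        static void Main()
        {
            // A 50 cm x 29 cm monitor viewed from 70 cm:
            double hDeg = 2 * Math.Atan(25.0 / 70.0) * 180.0 / Math.PI; // ~39.3
            double vDeg = 2 * Math.Atan(14.5 / 70.0) * 180.0 / Math.PI; // ~23.4

            double ppdH = 1920.0 / hDeg; // ~49 pixels per horizontal degree
            double ppdV = 1080.0 / vDeg; // ~46 pixels per vertical degree

            // A Vive-like 100 x 110 degree field of view at that density:
            Console.WriteLine("{0:F0} x {1:F0}", 100.0 * ppdH, 110.0 * ppdV);
            // roughly 4880 x 5080, vs the ~4800 x 4950 above
        }
    }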

Ways around that (apart from the obvious) are foveated rendering and
alternative display technologies (light field displays).

[1]
[http://store.steampowered.com/hwsurvey](http://store.steampowered.com/hwsurvey)

[2] [https://www.vrheads.com/field-view-faceoff-rift-vs-vive-vs-gear-vr-vs-psvr](https://www.vrheads.com/field-view-faceoff-rift-vs-vive-vs-gear-vr-vs-psvr)

~~~
BatFastard
Great numbers T-A, thanks for doing the math.

