A Software Engineer’s Guide to Unity and AR/VR Development – Part 1 (betawave.io)
161 points by zen35 250 days ago | 34 comments



I hate to be that guy on HN but this article set me off as a Unity VR developer. I understand this is an introduction for people coming from a web development or other more traditional software engineering background, and the code is purely demonstrative, but it's full of Unity performance anti-patterns. Specifically...

1) Resources.Load and GameObject.Instantiate during a runtime loop. Both of these are very expensive and will generate a ton of garbage.

2) Constantly Instantiating and Destroying GameObjects during runtime. This creates a ton of garbage on top of the expense of the Instantiate calls. The solution is an object pool (http://catlikecoding.com/unity/tutorials/object-pools/); a minimal pool sketch appears at the end of this comment. Realistically this is probably outside the scope of this blog post, but maybe the author should have avoided Instantiate in an introductory example.

3) GameObject.Find to locate a fixed game object in the scene. This should be replaced with an object reference (references can be connected in the editor), or, if that doesn't work for whatever reason, use Find once in Start or Awake and store a reference to the object as a member variable. Unity's own documentation recommends not calling it every frame (https://docs.unity3d.com/ScriptReference/GameObject.Find.htm...)

4) Debug.Log can be useful in a pinch, but calling it in Update or another frequently called method can cause performance problems, and printf debugging has never been great anyway. Visual Studio debugger support in Unity is really good; use that.

This code is OK if you don't care at all about performance, but in a VR or AR app you can quickly add milliseconds of frame time and make yourself sick writing relatively simple code in this style.
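For the curious, here's a rough sketch of what the fixes for points 1-3 can look like. This is illustrative only, not production code, and names like "Projectile" and "Muzzle" are made up:

  using System.Collections.Generic;
  using UnityEngine;

  public class ProjectileSpawner : MonoBehaviour
  {
      GameObject prefab;  // loaded once, not per shot (point 1)
      Transform muzzle;   // found once, not per frame (point 3)
      readonly Queue<GameObject> pool = new Queue<GameObject>(); // point 2

      void Awake()
      {
          prefab = Resources.Load<GameObject>("Projectile");
          // An editor-connected reference is better; Find-once is the fallback.
          muzzle = GameObject.Find("Muzzle").transform;
      }

      public GameObject Spawn()
      {
          // Reuse a pooled instance when one is available.
          GameObject go = pool.Count > 0 ? pool.Dequeue() : Instantiate(prefab);
          go.transform.position = muzzle.position;
          go.SetActive(true);
          return go;
      }

      public void Despawn(GameObject go)
      {
          // Deactivate and keep the instance instead of calling Destroy().
          go.SetActive(false);
          pool.Enqueue(go);
      }
  }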


> "This code is ok if you don't care at all about performance, but in a VR or AR app you can quickly add milliseconds frame time and make yourself sick writing relatively simple code in this style."

---

This point needs to be made more often. It's fine to neglect performance when you're writing a CRUD web app, or a non-critical mobile app - the worst that will happen is the user will get annoyed. But in a VR environment dropping frames can make people physically sick.


Yup, agreed on all points! You've hit the nail on the head about article scope. We do mobile VR and these are all definitely no-no's at the end of the day. Discussions on perf will come later. Thanks for pointing these out though!


Are these issues easily fixed by refactoring? As a newcomer to Unity, I'm OK writing quick, poorly performing code to experiment with, then implementing it properly at a later stage. I'm afraid that I may burn out too quickly if my first interaction with a platform is too concerned with patterns.


They are easily fixed with a refactor, but many of the solutions are either easier to implement or not much more difficult to implement than the example code.


Can be fixed with a refactor, but it's best to not get sucked into these bad habits at all. If you are learning, might as well learn how to do things the right way from the beginning.

Hooking up references to GameObjects and Components through the editor is some of the Unity "magic sauce." You should be doing things that way, or by hooking things up once during Awake/Start.
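For example, a serialized field shows up as a slot in the Inspector that you can drag a scene object onto. This is just a sketch; the class and field names are made up:

  using UnityEngine;

  public class DoorOpener : MonoBehaviour
  {
      // Drag the door's Transform onto this slot in the Inspector;
      // no Find() call is needed at runtime.
      [SerializeField] Transform door;

      // Fallback: resolve the reference once, not every frame.
      void Awake()
      {
          if (door == null)
              door = GameObject.Find("Door").transform;
      }
  }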


Do you have any recommendations on guides/resources we should review?


There are a bunch of official Unity "best practices" guides around:

https://unity3d.com/learn/tutorials/topics/best-practices

https://docs.unity3d.com/Manual/BestPracticeGuides.html

You should also check out the talks from Ian Dundore; he's one of the Developer Relations Engineers at Unity. Here is one of his talks from Unite 2016:

https://www.youtube.com/watch?v=n-oZa4Fb12U

If you can read Simplified Chinese, there is a Unity optimization consulting start-up called UWA, founded by former Unity China engineers; there are lots of articles on their blog:

https://blog.uwa4d.com/archives/allinone.html

Doing Unity optimization is hard; there are lots of gotchas and pitfalls to avoid. Unity is extremely sensitive to memory allocation because its Mono runtime is very old. It uses the Boehm garbage collector, which is non-generational and non-compacting, so a frame drop is almost certain whenever a collection happens.
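To make that concrete, here's a sketch of the kind of per-frame allocations to watch for. The names are made up and this is only illustrative:

  using System.Collections;
  using UnityEngine;

  public class AllocationAware : MonoBehaviour
  {
      // Cache yield instructions: writing "new WaitForSeconds(1f)" inside
      // the loop would allocate garbage on every iteration.
      static readonly WaitForSeconds oneSecond = new WaitForSeconds(1f);

      IEnumerator Blink()
      {
          while (true)
          {
              // ...toggle something here...
              yield return oneSecond;
          }
      }

      void Update()
      {
          // Avoid this: "pos: " + transform.position builds a new string
          // (and boxes the Vector3) every single frame.
          // Debug.Log("pos: " + transform.position);
      }
  }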


VRTK (https://github.com/thestonefox/VRTK) will almost certainly make your life better. It's a (well done) implementation of (most? all?) the core AR/VR interaction patterns that have been worked out so far.

I might go so far as to say it's the ActiveRecord to Unity-as-Rails for AR/VR, although the analogy is really flawed.

While I like your overall approach (a true, comprehensive intro to Unity), I would have liked to see more about what I guess I'd call the philosophy of putting code into Unity, covering (specifically and, IMHO, most importantly) the type of component system they're using and the type of event system they're using. The equivalent of describing a new language in terms of which features from the language grab-bag the authors decided to use.

PS - Pluuuuug, because dealing with JSON/YAML in C# suuuuucked: https://github.com/narfanator/maptionary


Yeah, I'm all for more discussions on philosophy instead of just the mechanics of doing X, Y, and Z. Like, how does one augment existing physics to capture a game's possibly different/modified/caricatured definition of physics? Can a game's interactions come primarily from emergent behavior, or does one need to do more legwork in the game model? Tons of discussion for later, hopefully!

Also: VRTK is awesome and JSON/YAML in C# is indeed terrible.


Check out the Maptionary library - I'd like to get feedback (and any users other than myself). The existing stuff for JSON (particularly) is good if you're serializing into and out of data structures, but if you just want data? Not so nice.


This looks like a pretty decent tutorial for its target audience. However, as someone who works with Unity on AR and VR, the biggest bit of advice I can give is: learn how Unity WANTS you to work with it! Currently we are re-writing an AR/VR test suite because the original version was written by programmers with little or no Unity experience. This means they re-wrote scene loading, cameras, and other parts that were already in Unity (or no one refactored when it was added/updated!). I highly suggest catching the Udemy Unity Certified Developer course on a sale. It will give you an overview of all the parts of Unity that you might not touch, and it will also get you used to Unity's workflow. (https://www.udemy.com/unitycert/)


  Currently we are re-writing an AR/VR test
  suite ...
The Unity Test Tools[0] asset may be something which could assist in the test suite(s) your team is re-writing. One tip: the TestComponents in an integration test scene are run in lexicographical order, so I enforce sequencing when needed by prefixing names with "1 - ", "2 - ", etc.

HTH

0 - https://www.assetstore.unity3d.com/en/#!/content/13802


A couple extra points to complement the article:

- Boo support is all but dropped, so avoid going with that.

- The JavaScript variant (UnityScript) is not exactly the JavaScript you'd find in a browser.

- For C# MonoBehaviours, the class name must match the file name (without extension) in order for the Unity Editor to see it. Namespaces are supported, but they are not displayed when adding a component. (See the sketch at the end of this comment.)

- The Editor identifies file relationships by the GUID assigned in "meta files" if "Visible Meta Files" is chosen under the "Version Control" entry in the project settings. Every imported file will then have a companion file created with a ".meta" suffix.

- Selecting an "Asset Serialization" mode of "Force Text" will cause Unity to use YAML for things such as scenes, prefabs, meta files, etc. This can make using Git much nicer as well as enable use of text processing tools (such as grep or ack).

- When importing assets from the Asset Store, always review what files the package wants to import! Many will ship resources such as Standard Assets which can be outdated and/or cause conflicts.

- The built-in package management will only update or add files from a package. This often leaves stale files lying around, which can cause problems.

EDIT: If using text (YAML) serialization and Git, trying to merge conflicts in scenes (.unity) or prefabs (.prefab) will almost certainly end badly. Better to coordinate changes between people through communication, doing a "use mine or use theirs" conflict resolution strategy when people step on each other's work.

EDIT-2: Here are three utility assets which I have found to be invaluable (all are free):

- Vexe Framework: https://github.com/vexe/VFW

- LINQ to GameObject: https://www.assetstore.unity3d.com/en/#!/content/24256

- Unity Test Tools: https://www.assetstore.unity3d.com/en/#!/content/13802
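Regarding the class-name point above, a minimal illustration (hypothetical names): this component is only visible to the Editor if the file is named Player.cs:

  // Assets/Scripts/Player.cs -- file name matches the class name.
  using UnityEngine;

  namespace MyGame // supported, but hidden in the Add Component menu
  {
      public class Player : MonoBehaviour
      {
      }
  }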


I've recently started exploring the world of Unity and the weirdest thing about it is that it's 90%/10% in favour of video tutorials - even for the purely code-based aspects.

Even Unity's own official tutorials are all in video form.


I'm seeing this more and more for other things, programming and non-programming, like "How to do XYZ with a Raspberry Pi". I'll generally try to skip the videos because they are almost always awful in terms of time efficiency. Instead of what should be a single page of text with 8 written steps, we now have all these 5 minute videos that start with a 20 second spinning logo intro, then a blank desktop and someone saying "Hey guys, today I'm going to..." It's halfway into the video by the time he's even gotten the subject matter on the screen. Then at the end there's a 30 second outtro with "Hey, guys, if you liked this tutorial, go ahead and like, subscribe and comment below! And check out my Youtube channel and like and subscribe! And if you didn't like this tutorial, go ahead and like and subscribe anyway! Catch my other great videos on my channel and of course like and subscribe!!"


I remember buying books to learn a piece of software (whether COTS or a development library) and inevitably things would go off the rails and not work for some reason. Since then I have learned to be skeptical of written tutorials and prefer the video approach whenever possible. With video I can see the exact steps and don't have to worry as much that I'll get stuck. So I consider efficiency a little differently. I don't mind scrubbing a video if I have to. In the end, I get more value from sources that demonstrably work and reliably teach rather than wasting time on an n step tutorial only to find that step m doesn't work the way the author claims and having to thrash around for the real solution.


I have noticed that with UE4 and Houdini as well. I assume to some extent it makes sense if you are actually navigating a GUI as complex as that. Still drives me crazy though.


That's just the nature of gamedev. It's a mix of code + tools to leverage the team composition of a development studio.

Usually the composition is something like 10:70 in terms of dev:art/design/audio/etc; videos are a much easier way to communicate information in that domain.


> That's just the nature of gamedev. It's a mix of code + tools to leverage the team composition of a development studio.

Implicit in that statement is the assumption that video+audio is optimal for teaching complex GUIs. I dispute that.


It's likely the highest ROI way to package it, relative to active effort, i.e. excluding time waiting for video encoding. You record a few minutes, narrate over it, and post it.


This is where I've found Lynda's tutorials great: they align the text to the video so you can see and read at the same time. I was able to get pretty far by reading Unity in Action, though, which was purely a book tutorial, and I really enjoyed it. So I don't know - I would attribute it more to a younger audience that is more receptive to videos over written tutorials.


> I would attribute it more to a younger audience that is more receptive to videos over written tutorials.

Is this really true? There are some inherent advantages to words and pictures vs. video and audio that can't be hand-waved away as 'preferences' or 'learning styles'. Has an entire demographic really chosen a medium that nullifies such powerful techniques as skimming, copy/paste, seeking without laborious workarounds, variable speed of consumption, etc.?


Depends on the tool - e.g. LibGDX, MonoGame, and Cocos2d-x are more programmer-friendly, i.e. take a code-oriented approach, while Unity, Atomic, and Godot are designer's tools.


I guess it's because Unity is more of a designer's tool than a programmer's - 90% of the time you just click and drag the mouse around the UI.


See my answer below. Video and audio are still suboptimal even for GUI tutorials. It's the fact that Unity also involves significant amounts of coding that pushes this from "suboptimal" to "gob-smackingly awful".


This book is really good:

https://www.amazon.com/dp/B014DIV1IO/ref=dp-kindle-redirect?...

I've been using Unity for the last 3 years, and that book covers a good chunk of what it takes to make a performant game with it. Good books on Unity are hard to come by. I also highly recommend this book if you are just learning about game and simulation engines:

https://www.amazon.com/Engine-Architecture-Second-Jason-Greg...


I am coding a VR game in Unity as a side project.

Day job is Java HFT FinTech, so I wanted a short bridging course.

There are many blogs about it, but if you really want to learn fast yet thoroughly I would recommend an online video course.

I recently did Ben Tristem's Unity course - https://www.udemy.com/unitycourse/ - which is excellent and shows you all the tips and tricks of the trade. Wishlist the course; there are often massive discounts.

Some non-obvious things:

- Animation

- Prefabs

- AudioSources

- Colliders

- Scenes and scene management

- localPosition

- Quaternion.eulerAngles

- Raycasts

- finding named objects in the scene

- the relation between GameObjects and Transforms
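As a taste of a few of those items, here's a small illustrative sketch (not from the course; the names are made up):

  using UnityEngine;

  public class LookPointer : MonoBehaviour
  {
      void Update()
      {
          // localPosition is relative to the parent Transform;
          // transform.position is in world space.
          Vector3 local = transform.localPosition;

          // Quaternion.eulerAngles converts a rotation to degrees.
          Vector3 angles = transform.rotation.eulerAngles;

          // Raycast from this object's position along its forward vector.
          RaycastHit hit;
          if (Physics.Raycast(transform.position, transform.forward, out hit, 10f))
          {
              Debug.DrawLine(transform.position, hit.point, Color.green);
          }
      }
  }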


All I want is to put on my GearVR, and have a set of floating terminal windows and various editors.


That is going to be a long time coming! First of all, consider what resolution you want your terminal to be at. The phone's screen will have to be at LEAST double that. Then there seems to be a loss of resolution in general when going 3D, just because the lenses are not perfectly aligned.

But I share your desire! 5 more years?


> That is going to be a long time coming! First of all, consider what resolution you want your terminal to be at. The phone's screen will have to be at LEAST double that.

Your VR view is merely a window onto a much bigger virtual screen. At worst you want an entire virtual window to be visible in your field of view without moving your head, and all VR-capable phones can easily manage that resolution.


No it isn't. You can do this on a HoloLens, HTC Vive, or Oculus Rift, albeit with many limitations. Windows MR headsets are on preorder, are 1440x1440 per eye, and will let you run any application on them.


Yes, but let's run the numbers.

I like to sit in front of a 24" screen, about 50 cm wide and 29 cm high, at a pretty standard distance of roughly 70 cm. Punching tan^-1 into my trusty calculator and rounding up tells me it's covering a horizontal angle of 40 degrees and a vertical angle of 24 degrees.

Like the plurality of Steam users, I have a modest display resolution of 1920x1080 pixels [1]. In other words, I am comfortable with 1920/40 = 48 pixels per horizontal degree and 1080/24 = 45 pixels per vertical degree.

Humans have a binocular field of view of about 200 horizontal degrees and 135 vertical degrees [2]. To satisfy my modest resolution requirements, I would therefore need 200x48 = 9600 horizontal pixels and 135x45 = 6075 vertical pixels.

Current headsets do not actually cover the full human field of view, so let's consider the current favorite, the HTC Vive. It does about 100 horizontal degrees and 110 vertical degrees, according to [2]. That works out to 100x48 = 4800 horizontal pixels and 110x45 = 4950 vertical pixels, i.e. more than 8K UHD.

Ways around that (apart from the obvious) are foveated rendering and alternative display technologies (light field displays).

[1] http://store.steampowered.com/hwsurvey

[2] https://www.vrheads.com/field-view-faceoff-rift-vs-vive-vs-g...


Great numbers T-A, thanks for doing the math.



