1) Resources.Load and GameObject.Instantiate during a runtime loop. Both of these are very expensive and will generate a ton of garbage.
2) Constantly Instantiating and Destroying GameObjects during runtime. This creates a ton of garbage on top of the expense of the Instantiate call itself. The solution is an object pool (http://catlikecoding.com/unity/tutorials/object-pools/). Realistically this is probably outside the scope of this blog post, but maybe the author should have avoided Instantiate in an introductory example.
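A minimal sketch of the pattern (the BulletPool/bulletPrefab names and the pool size are made up for illustration, not from the article):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Pre-instantiate once, then reuse instances instead of calling
// Instantiate/Destroy every frame.
public class BulletPool : MonoBehaviour
{
    public GameObject bulletPrefab;  // wired up in the Inspector
    public int poolSize = 32;        // illustrative size

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        for (int i = 0; i < poolSize; i++)
        {
            GameObject go = Instantiate(bulletPrefab);
            go.SetActive(false);
            pool.Enqueue(go);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Reuse a pooled instance; only Instantiate if the pool ran dry.
        GameObject go = pool.Count > 0 ? pool.Dequeue() : Instantiate(bulletPrefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        // Deactivate instead of Destroy, so no garbage is generated.
        go.SetActive(false);
        pool.Enqueue(go);
    }
}
```

SetActive(false)/SetActive(true) stands in for Destroy/Instantiate, so the steady-state loop allocates nothing.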
3) GameObject.Find to locate a fixed game object in the scene. This should be replaced with an object reference (references can be wired up in the editor), or, if that doesn't work for whatever reason, use Find once in Start or Awake and store the result in a member variable. Unity's own documentation recommends not calling it every frame (https://docs.unity3d.com/ScriptReference/GameObject.Find.htm...)
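A sketch of the cache-it-once pattern (the "Player" object name is just an example):

```csharp
using UnityEngine;

public class PlayerTracker : MonoBehaviour
{
    private GameObject player;  // cached once, reused every frame

    void Awake()
    {
        // Pay the Find cost a single time, not once per frame.
        player = GameObject.Find("Player");
    }

    void Update()
    {
        if (player != null)
        {
            // Use the cached reference instead of calling Find again.
            Debug.DrawLine(transform.position, player.transform.position);
        }
    }
}
```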
4) Debug.Log can be useful in a pinch, but calling it in Update or other frequently called methods can cause performance problems, and printf debugging has never been great anyway. Visual Studio debugger support in Unity is really good; use that.
This code is OK if you don't care at all about performance, but in a VR or AR app you can quickly add milliseconds of frame time and make yourself sick writing relatively simple code in this style.
This point needs to be made more often. It's fine to neglect performance when you're writing a CRUD web app, or a non-critical mobile app - the worst that will happen is the user will get annoyed. But in a VR environment dropping frames can make people physically sick.
Hooking up references to GameObjects and Components through the editor is some of the Unity "magic sauce." You should be doing things that way, or by hooking things up once during Awake/Start.
You should also check out the talks from Ian Dundore; he's one of the Developer Relations Engineers at Unity. Here is one of his talks from Unite 2016:
If you can read Simplified Chinese, there is a Unity optimization consulting start-up founded by former Unity China engineers called UWA; there are lots of articles on their blog:
Doing Unity optimization is hard. There are lots of gotchas and pitfalls to avoid. Unity is extremely sensitive to memory allocation because its Mono runtime is very old: it uses the Boehm garbage collector, which is non-generational and non-compacting, so a frame drop whenever GC kicks in is almost certain.
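As an illustration of the kind of per-frame allocation that feeds that GC (the component name here is hypothetical; RaycastNonAlloc is a real Unity API):

```csharp
using UnityEngine;

public class AllocationExample : MonoBehaviour
{
    private RaycastHit[] hits = new RaycastHit[16];  // preallocated, reused buffer

    void Update()
    {
        // Bad: builds a new string every frame, i.e. garbage on every frame.
        // Debug.Log("frame " + Time.frameCount);

        // Bad: Physics.RaycastAll allocates a fresh array on each call.
        // RaycastHit[] allHits = Physics.RaycastAll(transform.position, Vector3.forward);

        // Better: the NonAlloc variant writes into the preallocated buffer.
        int count = Physics.RaycastNonAlloc(transform.position, Vector3.forward, hits);
        for (int i = 0; i < count; i++)
        {
            // process hits[i] — nothing was allocated this frame
        }
    }
}
```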
I might go so far as to say it's the ActiveRecord to Unity-as-Rails for AR/VR, although the analogy is really flawed.
While I like your overall approach (a true, comprehensive intro to Unity), I would have liked to see more about, I guess I'd call it, the philosophy of putting code into Unity, covering (specifically and, IMHO, most importantly) the type of component system they're using and the type of event system they're using. The equivalent of describing a new language in terms of which features from the language grab-bag the authors decided to use.
PS - Pluuuuug, because dealing with JSON/YAML in C# suuuuucked: https://github.com/narfanator/maptionary
Also: VRTK is awesome and JSON/YAML in C# is indeed terrible.
Currently we are re-writing an AR/VR test
0 - https://www.assetstore.unity3d.com/en/#!/content/13802
- Boo support is all but dropped, so avoid going with that.
- For C# MonoBehaviours, the class name must match the file name (without extension) in order for Unity Editor to see it. Namespaces are both supported and not displayed when adding a component.
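For example (file and class names here are hypothetical), a component in a file named Mover.cs must look like:

```csharp
// File: Mover.cs — the class name must match "Mover" exactly,
// or the Editor will not offer it as a component.
using UnityEngine;

namespace MyGame.Movement  // namespaces work, but aren't shown in the Add Component menu
{
    public class Mover : MonoBehaviour
    {
        void Update()
        {
            transform.Translate(Vector3.forward * Time.deltaTime);
        }
    }
}
```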
- The Editor identifies file relationships by the GUID assigned in "meta files" if "Visible Meta Files" is chosen under the "Version Control" entry in the project settings. Every imported file then gets a companion file with a ".meta" suffix.
- Selecting an "Asset Serialization" mode of "Force Text" will cause Unity to use YAML for things such as scenes, prefabs, meta files, etc. This can make using Git much nicer as well as enable use of text processing tools (such as grep or ack).
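With Force Text enabled, a scene or prefab becomes plain YAML along these lines (the GUID and object name below are made up; real files carry many more properties):

```yaml
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1 &1234567890
GameObject:
  m_Name: Player
  m_IsActive: 1
```

Since each object is a plain-text YAML document, diffs in Git are readable and grep/ack work directly on scenes and prefabs.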
- When importing assets from the Asset Store, always review what files the package wants to import! Many will ship resources such as Standard Assets which can be outdated and/or cause conflicts.
- The built-in package management only updates or adds files from a package. This often leaves stale files lying around, which can cause problems.
EDIT: If using text (YAML) serialization and Git, trying to merge conflicts in scenes (.unity) or prefabs (.prefab) will almost certainly end badly. Better to coordinate changes between people through communication, doing a "use mine or use theirs" conflict resolution strategy when people step on each other's work.
EDIT-2: Here are three utility assets which I have found to be invaluable (all are free):
- Vexe Framework: https://github.com/vexe/VFW
- LINQ to GameObject: https://www.assetstore.unity3d.com/en/#!/content/24256
- Unity Test Tools: https://www.assetstore.unity3d.com/en/#!/content/13802
Even Unity's own official tutorials are all in video form.
Usually the composition is something like 10:70 in terms of dev:art/design/audio/etc., and videos are a much easier way to communicate information in that domain.
Implicit in that statement is the assumption that video+audio is optimal for teaching complex GUIs. I dispute that.
Is this really true? There are some inherent advantages to words and pictures vs video and audio that can't be hand-waved away as 'preferences' or 'learning styles'. Has an entire demographic really chosen a medium that nullifies such powerful techniques as skimming, copy/paste, seeking without laborious workarounds, variable speed of consumption, etc.?
I've been using Unity for the last 3 years and that book covers a good chunk of what it takes to make a performant game with it. Good books on Unity are hard to come by. Also highly recommend this book if you are just learning about game and simulation engines:
Day job is Java HFT FinTech, so I wanted a short bridging course.
There are many blogs about it, but if you really want to learn fast yet thoroughly I would recommend an online video course.
I recently did Ben Tristem's Unity Course - https://www.udemy.com/unitycourse/ - which is excellent and shows you all the tips and tricks of the trade. Wishlist the course; there are often massive discounts.
Some non-obvious things:
- Scenes and Scene Management
- finding named objects in the scene
- relation between GameObjects and Transforms
But I share your desire! 5 more years?
Your VR view is merely a window on a much bigger virtual screen. At worst you want an entire virtual window to be visible in your field of view without moving your head and all VR capable phones can easily manage that resolution.
I like to sit in front of a 24" screen, about 50 cm wide and 29 cm high, at a pretty standard distance of roughly 70 cm. Punching tan^-1 into my trusty calculator and rounding up tells me it covers a horizontal angle of 40 degrees and a vertical angle of 24 degrees.
Like the plurality of Steam users, I have a modest display resolution of 1920x1080 pixels. In other words, I am comfortable with 1920/40 = 48 pixels per horizontal degree and 1080/24 = 45 pixels per vertical degree.
Humans have a binocular field of view of about 200 horizontal degrees and 135 vertical degrees. To satisfy my modest resolution requirements, I would therefore need 200x48 = 9600 horizontal pixels and 135x45 = 6075 vertical pixels.
Current headsets do not actually cover the full human field of view, so let's consider the current favorite, the HTC Vive, which does about 100 horizontal degrees and 110 vertical degrees. That works out to 100x48 = 4800 horizontal pixels and 110x45 = 4950 vertical pixels, i.e. more than 8K UHD.
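The back-of-the-envelope formula behind all of these numbers is just pixels-per-degree times field of view:

```latex
\text{ppd} = \frac{\text{display pixels}}{\text{display FOV (deg)}} = \frac{1920}{40^\circ} = 48
\qquad
\text{pixels needed} = \text{target FOV} \times \text{ppd}
\;\Rightarrow\; 200^\circ \times 48 = 9600 \text{ horizontal}
```

The same multiplication with the Vive's ~100°x110° FOV gives the 4800x4950 figure above.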
Ways around that (apart from the obvious) are foveated rendering and alternative display technologies (light field displays).