It teaches from the very basics, and at the same time the projects are diverse and fun because 3D assets and effects are provided.
The chunk size is perfect: a few minutes of video, then a few minutes of work in the editor. The videos have short text summaries, so there's no need to rewind if I missed something.
Often it first solves a problem in a naive but incorrect way, and then fixes it. So when I encounter a problem in a real project, I often already have experience dealing with it.
It has debugging projects, where you get a complete project that is broken in multiple ways. So smart. In my regular programming work I spend most of my time debugging, not creating from scratch.
The narrator (Carl D.) is charismatic, and the videos are very professional.
I wish there were more courses with the same structure and quality. Can't recommend it enough.
I just watched the intro video, it seemed like he was yelling the whole time...
Look up all the amazing things that can be made with this engine. It's incredible!
Disclosure: I'm a part of the Minecraft team.
> which other companies are offering free moocs and/or video tutorials because of the COVID-19?
If anyone has firsthand experience with Unity Learn, I'd love to hear about it, and I'm sure others will find it useful too.
In fact, it was my best experience among online courses on any subject.
This is how I feel about most open source software documentation and tutorials/articles about library or software usage.
I know a bunch of JS game engines, but they all lack tooling.
We're doing special offers for education due to the pandemic:
A key feature for schools is granting access to Construct 3's full feature set with access codes, meaning students do not need to provide us with any login details/emails etc., which is popular with educational institutions:
I was a bit put off by the subscription model, but that includes a backend service for multiplayer games? Nice.
What are the limits?
We also have a build service for paid plans that allows for easy building of your games into deployable signed APKs.
If the subscription is a deal breaker, a lot of users do use our free edition, which has a 50-event limit if you use it logged in to a Construct.net account (and a few other limits).
Can I simply check a project into GitHub? Are there any special file types (besides sounds and graphics) that would make this problematic?
Can I use my favorite editor to write the code?
But thanks for the suggestions!
There are many application domains, especially AR, VR, accelerometers, and cameras, that are impossible to simulate in the Unity editor, and developing them requires quick turnaround and interactive debugging. That's where UnityJS really shines!
Here's a demo of an earlier version of UnityJS integrated with ARKit on iOS for Pantomime:
WovenAR Tools with ARKit and Pantomime
You can't possibly be serious here. You are comparing an ecosystem where the package manager can be gamed to include malware in your code, where a small project needs to include 1000+ libraries, some containing just one function, where core projects used by GAFAM can become unmaintained for lack of funding, and where most modules are created by random developers and will break your code on update, to an ecosystem with a two-decade-old, battle-tested, extensive base class library made entirely by professionals, maintained by one of the biggest tech corporations, with a safe package manager. Also, breaking API changes from third-party libraries are easily detected thanks to the type system.
Take d3, for example. It's excellent, well maintained code, that splendidly solves many practical problems. There is nothing anywhere near as powerful and flexible and well documented and maintained (and free!) as d3 for Unity.
A 2gig mobile game should take 20 minutes tops to build to device, and that's all from asset compression anyway. Recompiling the code should take no more than a minute unless you're using a potato.
The part of the compilation that actually eats your machine, because it's the only multithreaded part, is the shader compilation; again, not C#.
I don't know what kind of toy games you're compiling that take no more than a minute to completely compile and deploy with Unity3D: "2gig" of what, code or just video?
I'm not talking about pressing "Play" in the editor or deploying on Windows or Mac with Mono; I'm talking about running it in the browser with the WebGL back-end, or on an iPad with the iOS back-end, using il2cpp and Emscripten or Xcode, which are enormously slow and complex.
It certainly does take a long time on my mid-2014 MacBook Pro (which, while not new, is certainly not a potato), and it makes the machine useless for doing anything else while it's compiling, spins the fan up to its highest speed, and pins the CPU the whole time.
A large, complex, multiplayer, networked AR/VR iOS app like Pantomime, with a lot of content including code, libraries, resources, plugins, and shaders, regularly took me a good part of an hour to compile, build, and deploy. Developing UnityJS was my response to that problem, so I could rapidly iterate by changing code, JSON data, and other resources without recompiling.
WebGL and iOS builds are especially slow, because they go through layer after layer of cross-compilers: from C# to CIL with the Mono compiler, then from CIL to C++ with il2cpp, after which the Xcode/Clang/assembler/linker or Emscripten/WebAssembly chains do their own ridiculously complex things. There's also a significant amount of time spent packaging and compressing resources and data in various formats and wrappers.
You have to wait not just for the C# code to compile, and for the shaders and the resources to be processed, but for the entire multi-level Rube-Goldbergesque translation and packaging process to finish, then that build must be deployed on your web server or mobile device.
I'm talking about the actual turn-around time between when you make a change to the code, and see the results. You know, the thing you have to do again, and again, and again, and again, and again. So it adds up quickly.
UnityJS drastically slashes that time, however long it takes (and I have a hard time believing it takes no more than a minute for you, unless your app is trivially simple), to just a few seconds of refreshing the web page or quitting and restarting the iOS app.
It's very experimental still, and iteration times for web are still not amazing, but we do think we have a way to get them to be actually good, while still letting you write and debug C# as you would expect.
Edit: You should look into Rider too; I much prefer it over MonoDevelop. A mid-2014 MBP could be poor if you're running a low-RAM config. They only went up to 16 GB, and if you have 8 GB you could be hitting swap, which will make it dog slow.
I'm a big three.js fan, but if I was building something where I could get a lot of benefit from placing/configuring objects through a visual editor I'd definitely use PlayCanvas for it.
Finally, old versions of Unity had some JS support. You could maybe track down one of them, but that support was eventually removed.
It's MUCH better to be able to simply drop in the latest version of SocketIO, or the Stripe API, or D3, or whatever you need, and go to town drawing all kinds of graphs and diagrams by copying and elaborating on the great examples on observablehq.com, then use those HTML canvas images on Unity3D textures for 3D objects and user interface overlays, instead of foolishly trying to reimplement D3 in C#.
Hybrid Unity/Web apps are useful, because Unity doesn't have a decent immediate mode 2D drawing API like canvas or D3, or a decent text/graphics layout engine like HTML/CSS/SVG. (I love TextMeshPro, but it just isn't capable of everything that's so easy to do in HTML, or able to leverage higher level HTML templating or formatting or graphics libraries, or even form inputs. And writing a Unity shader or constructing a dynamic 3D mesh to draw a pie chart is such a silly overkill, when you can do it so easily with canvas or D3.)
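To make that concrete, here's a rough sketch of the Unity side of that approach, assuming the JavaScript layer hands over the canvas contents as PNG bytes (the component and method names here are made up for illustration, not the actual UnityJS API):

    using UnityEngine;

    // Hypothetical receiver: the web layer renders a D3 chart to an HTML canvas,
    // encodes it (e.g. via canvas.toDataURL()), and the decoded PNG bytes end up here.
    public class CanvasTextureReceiver : MonoBehaviour
    {
        Texture2D tex;

        void Awake()
        {
            // Placeholder size/format; LoadImage resizes and reformats the texture.
            tex = new Texture2D(2, 2, TextureFormat.RGBA32, false);
            GetComponent<Renderer>().material.mainTexture = tex;
        }

        // Called whenever a new canvas image arrives from the JavaScript side.
        public void OnCanvasImage(byte[] pngBytes)
        {
            tex.LoadImage(pngBytes); // decodes the PNG and uploads it to the GPU
        }
    }

The same texture works just as well on a UI RawImage overlay as on a 3D object's material.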
Do you want to target WebGL? It works but it’s less than ideal. If you want to make a WebGL app, Unity would not be my choice. The tooling used to fall into the “that works, but it’s cursed” category, because it would compile the C# to CIL with Mono, then compile CIL to C++ with il2cpp, and the C++ to JS with Emscripten. I think. If you think this pipeline is “extremely cursed”, well, you’re not alone.
If you want to make a browser game with tooling, there are some better options. Different options for 2D and 3D. You’re going to have to accept some kind of compromise and the ecosystem is a bit difficult to navigate.
And yeah, agreed, it's probably pretty clunky compared to what some WebGL-specific frameworks might give you. I don't know of anything out there that has quite the UI setup or polish that Unity has, though. Do you know of any?
Edit: Saw your edit. That sounds super cursed.
If you’re looking for something with good UI and polish, maybe GameMaker Studio fits the bill? I don’t keep tabs on this space. I use Unity when I’m collaborating with people, and use simpler frameworks when I’m working alone.
The great thing about Unity, the thing that makes it worth all the effort of making up for its other weaknesses, is its editor, and the ability to extend that editor with custom user interfaces: widgets and editors in the object property sheets, and 3D gadgets and objects in the world itself. The editor and its customizable UI are a common ground where artists and programmers meet; they enable programmers to give artists a huge amount of power and flexibility, and let artists see and totally control what they're creating, immediately and interactively.
Here's a demo of Unity3D pie menus that shows and explains some custom editors, as well as in-world editing tools. (It's kind of old, so the demo requires the now-obsolete Unity browser plug-in, since it isn't compiled for WebGL.)
I've made a general-purpose pie menu component in C# for Unity3D, which supports text items, image items, and 3D object items too!
I will make it available as free open source software on the Unity3D app store!
Here's a silly demo, showing a set of SimCity pie menus:
(If you don't have the Unity3D browser plug-in installed, it should show you a link to install it.)
They have a full set of useful notifiers so you can tightly integrate them with your application to give rich feedback during tracking (for example, modifying the 3D menu items, or previewing the effect of the menu item and distance parameter in real time, making them more like "direct manipulation").
For example, to show how you can implement feedback like The Sims pie menus with the head in the center that looks at the selected item, I've made a 3D object in the pie menu center with the webcam texture on it, so YOUR head is in the center of the menu, looking at the selected item! (That's why the demo asks for permission to use the webcam.)
The pie menu and each item has a title as well as a description. One feature I've added is the ability not only to disable an item, but also to provide an explanation of why the item is disabled! (PacMan in the demo is disabled, for example.) I wish other menus and widgets would do that -- it's frustrating when you can find an item you want, but can't figure out why it's disabled!
Another nice thing about them is that you can configure them algorithmically with an API, or with JSON data (which makes it easy to build dynamic, data-driven menus downloaded from a server or database), or construct them in the Unity3D editor out of objects (which makes it easier for artists to design them)!
I've made a custom Unity3D editor that lets you edit the properties, drag and drop textures and objects, edit and rearrange the items, and run some convenience commands, so you can place the 3D item objects in a circle in the 3D world and call a command that figures out which item is in which direction from their positions and tidies them up. (That is much easier than arranging their order in a linear list of items.)
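Just to illustrate the "figure out which item is in which direction" idea, here's a hypothetical sketch (not the actual editor code; the PieMenu component and its items list are stand-ins) of a custom inspector button that sorts item transforms by their angle around the menu center:

    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;
    using UnityEditor;

    // Minimal stand-in for the real pie menu component, just so the sketch compiles.
    public class PieMenu : MonoBehaviour
    {
        public List<Transform> items = new List<Transform>();
    }

    // Hypothetical editor command: order the item objects clockwise from the top,
    // based on where they've been placed around the menu center in the scene.
    // (In a real project this editor class would live in an Editor folder.)
    [CustomEditor(typeof(PieMenu))]
    public class PieMenuEditor : Editor
    {
        public override void OnInspectorGUI()
        {
            DrawDefaultInspector();

            var menu = (PieMenu)target;
            if (GUILayout.Button("Sort Items By Direction"))
            {
                menu.items = menu.items
                    .OrderBy(item =>
                    {
                        // Angle in the menu's local XY plane: 0 = straight up, increasing clockwise.
                        Vector3 local = menu.transform.InverseTransformPoint(item.position);
                        float angle = Mathf.Atan2(local.x, local.y) * Mathf.Rad2Deg;
                        return (angle + 360f) % 360f;
                    })
                    .ToList();
                EditorUtility.SetDirty(menu);
            }
        }
    }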
I'm going to play around with more in-world editing features, to make it easier for artists to design them.
And no matter how bad it is as a language, it still beats compiled languages as a dynamic extension and scripting language.
Even Adobe finally figured out they should use the real thing instead of jerking you around with ActionScript or some other weird language like AppleScript.
I’ve done some pretty deep comparisons of extension languages and my conclusion is still the same—differences in language implementation dwarf differences in language. You need a sandboxed extension language? That’s an implementation issue. You need to avoid dynamic libraries or run-time code generation? Implementation issue.
Show me a Unity3D app that runs on WebGL or iOS or Android that compiles code at runtime. If you can link to a GitHub repo so I can see how the code actually works, that would be even better. And I'd love to hear from the developer first-hand how they convinced Apple to bend their stringent app store rules just for them, but nobody else.
0 - https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotati...
glMatrix is a fine way to work with quaternions.
Unity also had another language called Boo, a fake Python that also compiled to the CLR, and we're happy that's dead too.
There are some packages like MoonSharp that will let you write Lua for Unity (modulo some bindings), and a few visual scripting tools too. There's also a native visual scripting tool coming out in the next year or so.
You can't dynamically download and interpret UnityScript code at runtime, or include it in downloadable asset packages.
This comes into play on the Apple app store.
Even if you implemented, say, a Scheme interpreter in C# that could interpret s-expressions at runtime, Apple would not let you publish an iOS app on their app store that downloaded Scheme code and interpreted it. It's not impossible or hard to do, but it's simply against their rules.
That is a hard non-technical constraint that there is no way of programming or negotiating your way out of, and it's not going to change any time soon.
The old version of the iOS Safari component, UIWebView, ran in-process and allowed you to call back and forth with native code (it had a nice native Objective-C bridge), but it did not have a JIT, so it was slow. (Sandboxed iOS apps developed outside of Apple aren't allowed to write to code memory, so JIT compilers are prohibited.)
But with the newer web view component you can't call native code, because it runs in another process and doesn't support an Objective-C bridge.
I love to "actually code" games, but some graphic stuff is a pain to do without GUI.
I'm seeking to collaborate with people who can see and benefit from the obvious and subtle applications to rapid prototyping, exploratory iterative development, interactive debugging, live programming, deep integration of web technologies and JSON with Unity3D, scriptable VR and AR platforms, and delivering open-ended, extensible, 3D browser-like applications on WebGL, mobile, and desktop platforms.
So far I've applied UnityJS to JauntVR's panoramic VR video player on Android, WovenAR's scriptable AR platform on iOS, and ReasonStreet's interactive financial data driven visualization system on WebGL, and I'm looking for other interesting people to work with on exciting and fitting applications for UnityJS!
Here are some other things I've written about it on HN (in chronological order):
"UnityJS", as you call it, is rife with landmines only discovered at run-time, as one would expect from a typeless language.
So how do you like the C# and UnityScript debugging tools on the iOS, Android, and WebGL Unity3D platforms?
Can you set a source level breakpoint on your C# or UnityScript code that's running on an iOS device in a WebGL/WebAssembly based browser?
And how long does your typical Unity3D application take to compile and deploy, before you can see the changes you made to your code?
(I know the answers, I'm just asking rhetorically, because you know the answers too: Debugging Unity C# or UnityScript code totally sucks, especially on mobile devices and web browsers, and recompiling it is glacially slow. The MonoDevelop debugger is terrible, and it doesn't even support the il2cpp back-end, which includes iOS and WebGL, and it barely supports the deprecated Mono back-end, and crashes all the time if you can even get it to connect for a few seconds. Those are extremely painful problems that UnityJS solves.)
We (a FAANG) are actually looking for a dev experienced with Unreal on mobile devices. If anybody sees this and is interested, hit me up. Contact info in my profile.
Edit: Added emphasis that we're looking for an Unreal mobile dev, not Unity.
It is also what Google sponsors for Stadia and Android games, instead of doing their own SceneKit.
I have some locomotion ideas I want to try out, but I don't know where to start.
In terms of level design, VR has a looot more fidelity of input, so the interaction design is richer and consequently there is more to set up. Game spaces tend to be less cluttered and have some slightly distorted dimensions; both are more noticeable in VR. Traditional games tend to have more fixed sight lines: the player is standing, crouched, or maybe prone at most. In VR people will stick their heads everywhere.
> In VR people will stick their heads everywhere.
Particularly when players have any level of knowledge about what kind of complexities or edge cases are likely involved (e.g. any software developer or QA, even if not part of the gaming industry). Or... hell, maybe it's even worse when you have ignorant players who expect the VR environment to mimic real life so perfectly that they get frustrated and can't understand why certain actions aren't supported/working.
The first VR game I got to experience was one of those haunted house horror games, and you're damn right I bent down and tried to shove my head into an open cupboard just to see if the collision detection stopped at the outer box of the model, or whether my head would be allowed to enter the space. Then I repeatedly leaned/shoved my head against VR walls at various angles to see if I could get the camera to clip or bounce/reposition jarringly. Poor, poor developers who have to try and nail all that logic perfectly. It must be so rewarding to see the final results when everything works out well, though. :P
Yes, it's rather easy to have the 3D hands tracked in-game, but VR requires interactions with much richer affordances than a regular 2D game, and that is way trickier to handle.
A practical example: in the Half-Life games, a common trope/puzzle is finding a door (or some other blockage) that opens with one of those "submarine hatch"-style hand cranks, but the crank has been misplaced.
In the 2D games, you just need to pick up the crank, go to the door, and hold the interact button on the placed crank to spin it. But in the latest installment (Alyx, which is in VR), you have to actually grab it with your hand, carry it to the door, snap it into place and spin it.
So now you technically have to handle physics joints between the hand models and the crank, ensuring it visually stays attached, while also tracking the position so that if the player moves away while holding the crank, their hand doesn't just stay there forever.
And since the player is still physically allowed to move their arms with the model attached to the crank, you have to ensure other interactions that depend on tracked position do not engage, such as grabbing items from the backpack.
That is way more effort than just checking line of sight to a collision box while a key is down, and playing an animation. And writing such a system (even with the provided frameworks) is still a lot of effort.
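A minimal sketch of that joint-based grab, assuming a plain Unity physics setup (GrabbableCrank, breakDistance, and the Grab/Release entry points are illustrative names, not Valve's or any particular XR framework's API):

    using UnityEngine;

    // Illustrative sketch: attach the crank to the hand with a physics joint while
    // grabbed, and release it if the tracked hand drifts too far away, so the
    // virtual hand doesn't stay glued to the crank when the player walks off.
    [RequireComponent(typeof(Rigidbody))]
    public class GrabbableCrank : MonoBehaviour
    {
        public float breakDistance = 0.3f; // metres before the grab is released

        Rigidbody handRigidbody;
        FixedJoint joint;

        // Called by whatever hand/controller script detects the grab gesture.
        public void Grab(Rigidbody hand)
        {
            handRigidbody = hand;
            joint = gameObject.AddComponent<FixedJoint>();
            joint.connectedBody = hand;
        }

        public void Release()
        {
            if (joint != null) Destroy(joint);
            joint = null;
            handRigidbody = null;
        }

        void FixedUpdate()
        {
            if (handRigidbody != null &&
                Vector3.Distance(transform.position, handRigidbody.position) > breakDistance)
            {
                Release();
            }
        }
    }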
That, and the level design has to change a lot between 2D and VR games, due to how a player can do tricky things like crouching to see under objects, and you as a designer/developer can never stop the camera from moving.
It is still a nice engine for small teams though.
Unity's GUI solutions so far are plagued by terrible performance in common usage scenarios, are terribly undocumented when it comes to making your own components, and have a layout system that is simply a big mess.
While Godot's isn't anything particularly praiseworthy by itself (especially when compared to dedicated UI toolkits), the simple fact that creating a simple two-screen interface does not melt a mobile device's CPU puts it far beyond Unity. That, and the layout system is relatively saner.
So much so, that at the company I work for (which is sort of consulting, not products), when we need to do 2D or UI-heavy games, we go to Godot, to the extent of training Unity developers on it instead of trying to make do with Unity's UI.
I do agree on the other points though, Godot is still very young as a community and so the asset store is pretty vacant.
0 - https://godotengine.org/article/csharp-wasm-aot
50% of the titles, including some first party ones, are using Unity.
If I were an Unreal user, I wouldn't let go of Blueprints and GC-enabled C++.
Do you have a firm grasp on the weaknesses and gaps in Unity as well as its strengths? Would you say you have a clear understanding of the breadth of features and how much they matter to its primary markets?
I'd love you to be right but you'll need to do more to convince me that you have some special insight in this matter.
It doesn't seem like a bad choice, but it definitely hasn't secured its lead.
I want to like Godot, but it feels like it's making the same mistakes Unity 5.x and earlier did, with fixed shader and scripting languages that just aren't as useful as the languages they're abstracting. I want to use GLSL or Vulkan or DX12, not a custom language that will get in my way. It's a C++ engine that uses script runtimes, like Unity did. That makes it hard to optimize across languages, and it's why Unity went down the whole IL2CPP path. Now Unity is moving more and more features into C#, with a custom C# compiler, to better optimize with user code. Godot will have to succeed where Unity could not.
Godot looks very promising, but if you think it's not an uphill battle or that Unity is easy to beat, you're mistaken.