https://www.shadertoy.com/view/4dfGzS (or basically anything on that site)
How is that only ~400 lines of code?
Or this one, which even generates the sound on the GPU
With the wide adoption of WebGL, it's a good time to get involved in graphics. Furthermore, GPUs are taking over, especially with the advent of machine learning (NVIDIA's stock grew ~3x and AMD's ~5x last year). The stuff NVIDIA has been doing recently is kind of crazy. I wouldn't be surprised if in 15 years, instead of AWS, we're using GeForce Cloud or something, just because NVIDIA will have an easier time building a cloud offering than Amazon will have building a GPU.
These are some good resources to get started with graphics/games:
# WebGL Programming Guide: Interactive 3D Graphics Programming with WebGL
# Book of Shaders
# Game Programming Patterns
HN's own @munificent wrote a book discussing the most important design patterns in game development. It's a good book, applicable well beyond games.
# Game engine architecture
# Computer graphics: Principles and Practice
This is more of a college textbook, if you'd prefer that, but the WebGL one is more accessible and less dry.
# Physically Based Rendering & Real-Time Rendering
These discuss some state of the art techniques in computer graphics. I'm not going to claim to have really read them but from what I've seen they are very solid.
Also, just asking out of curiosity: do you think a language like Go or Rust will become popular for developing game engines? I realize game programmers are anti-GC, but I wonder what happens if GC technology advances to the point that the performance drop is negligible.
> Also, just asking out of curiosity: do you think a language like Go or Rust will become popular for developing game engines? I realize game programmers are anti-GC, but I wonder what happens if GC technology advances to the point that the performance drop is negligible.
I think it will, but on some level the needs of a game engine are different from the needs of, say, a DNS server. Jonathan Blow, the developer behind Braid (http://braid-game.com/) and The Witness (http://store.steampowered.com/app/210970/), has been working on a language called Jai (https://github.com/BSVino/JaiPrimer/blob/master/JaiPrimer.md) even though he's aware of Rust and Go. He talks about some of his reasons in this video: https://www.youtube.com/watch?v=TH9VCN6UkyQ
One of the things he mentions is that the game industry doesn't care about security that much (which I didn't realize until then, but it makes sense) compared with something like a DNS server, so his ideal language might have different design considerations than Rust.
Click on its timestamp to go to its page, then click 'favorite'. Favorite stories and comments are visible from your profile page. Note that these are public, so users can browse each others' favorites.
That depends on the type of engine :)
Many games will be perfectly fine in a GC language. (People write JS games all the time. :) But as you approach the limits of a device's performance, GC has an overhead that becomes visible.
You can work around it - there are patterns that essentially work around the collector by recycling objects - but it's quite a bit of engineering effort, and it's a very different style.
But the big issue in GC'ed environments is that you give up control over heap growth. Working with fixed memory budgets becomes a very difficult thing to achieve. And there are few things game developers hate more than unpredictability :)
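The recycling pattern mentioned above can be sketched with an object pool. This is a minimal, illustrative example (the class and field names are hypothetical, not from any particular engine): every object is preallocated up front, so steady-state gameplay never allocates and the heap stays within a fixed budget.

```javascript
// Sketch of a fixed-budget object pool. Instead of allocating per frame,
// which feeds the garbage collector, we preallocate a fixed number of
// particles and recycle them; the heap never grows after construction.
class ParticlePool {
  constructor(capacity) {
    this.capacity = capacity;
    this.active = 0;
    // Preallocate every particle up front.
    this.items = Array.from({ length: capacity },
      () => ({ x: 0, y: 0, vx: 0, vy: 0, alive: false }));
  }
  spawn(x, y, vx, vy) {
    // Budget exhausted: drop the request rather than allocate.
    if (this.active === this.capacity) return null;
    const p = this.items[this.active++];
    p.x = x; p.y = y; p.vx = vx; p.vy = vy; p.alive = true;
    return p;
  }
  release(p) {
    // Swap the freed particle with the last live one so the live
    // particles stay packed at the front of the array.
    p.alive = false;
    const i = this.items.indexOf(p);
    const last = --this.active;
    [this.items[i], this.items[last]] = [this.items[last], this.items[i]];
  }
}
```

Note the style cost the comment above alludes to: callers must tolerate `spawn` returning `null`, and nothing may hold a reference to a released particle.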
If you want to look at ongoing efforts, Amethyst is trying in Rust: https://github.com/amethyst/amethyst
I usually just upvote. You can view your list of upvoted posts/comments in your profile page.
The shaders on ShaderToy aren't quite the same as straight OpenGL fragment shaders - there is quite a bit of boilerplate code built into the site that allows ShaderToy shaders to have access to stuff like mouse coordinates and audio signal info. (Expand the 'Shader inputs' area above the code block - none of those are available in OpenGL.)
But OpenGL also has its own library of functions: `texture2D`, `smoothstep`, `mix` and all the built-in vector math - and you can see all of these in action in the shader you linked. The ShaderToy boilerplate - in partnership with these libraries - is the reason the code is so concise.
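To make concrete what those built-ins do, here are scalar JS ports of a few of them, written to match the GLSL spec definitions (the real built-ins also operate componentwise on vectors; this is just a sketch for illustration):

```javascript
// mix(a, b, t): linear interpolation, a*(1-t) + b*t.
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}

// clamp(x, lo, hi): constrain x to the range [lo, hi].
function clamp(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

// smoothstep(edge0, edge1, x): 0 below edge0, 1 above edge1, with a
// smooth cubic (3t^2 - 2t^3) transition in between.
function smoothstep(edge0, edge1, x) {
  const t = clamp((x - edge0) / (edge1 - edge0), 0, 1);
  return t * t * (3 - 2 * t);
}
```

A one-liner like `mix(skyColor, fogColor, smoothstep(near, far, dist))` is a big part of why ShaderToy code reads so densely.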
This is almost Pixar-level graphics running in real time in your browser.
My point is that the shaders on ShaderToy are exercises in doing something fun within limits (one shader and a few inputs), not about speed, optimization, or anything to do with how a real game engine goes about getting performance.
- Control of memory allocations and memory shape/characteristics
- Maximizing cache lines (due to architecture, memory footprint, etc.)
- Multi-threading, concurrency, parallel processing
- Data pipelining
- Predictable execution
- Fast, efficient, correct math (or possibility thereof)
- Compilation tweaking/assembler output tweaking
- Efficient IO
- Fast compression/decompression algorithms
- Integration with existing toolchains for graphics including middleware, apps, and shader languages
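The first two items on this list are largely about data layout. A common illustration, sketched here in JS with typed arrays (names are hypothetical), is a struct-of-arrays layout: each field lives in its own contiguous buffer, so a pass that touches only positions streams through memory sequentially instead of skipping over unused fields.

```javascript
// Struct-of-arrays particle storage: one contiguous Float32Array per
// field, giving a fixed memory footprint and cache-friendly linear scans.
class Particles {
  constructor(n) {
    this.n = n;
    this.x = new Float32Array(n);
    this.y = new Float32Array(n);
    this.vx = new Float32Array(n);
    this.vy = new Float32Array(n);
  }
  integrate(dt) {
    // Each loop walks two arrays linearly; an array-of-objects layout
    // would instead chase pointers and drag unused fields through cache.
    for (let i = 0; i < this.n; i++) this.x[i] += this.vx[i] * dt;
    for (let i = 0; i < this.n; i++) this.y[i] += this.vy[i] * dt;
  }
}
```

This is also one of the patterns a GC'd language can support reasonably well, since the typed arrays are allocated once.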
If your goal is having fun and learning a few things before moving on to C++, dabble in some JS graphics programming. But since you characterized professional graphics programming as no longer being a compelling place to start, I have to point out that this is both wrong and bad advice for someone new. If you are struggling with the concepts, you will struggle in any language. If you personally need instant gratification and a self-esteem boost, then from that point of view JS might not be bad for learning, but it won't teach you a lot of vital things and will encourage some horrific practices. The presence of GC and the lack of a proper parallel and concurrent programming model poison the experience quite a bit.
Of course, in the end you can use things like transpiling, engines that may or may not compile to JS as a runtime target, and so on, and get a running product that might be decent in the eye of the beholder. Personally, I've dabbled in JS enough to write a high-performance voxel engine many years ago, when I thought about making a game in JS, as well as a skeleton of a 3D adventure game engine for a contract. I was productive enough, but I ran up against walls that weren't worth working around, or that made things feel so kneecapped that I wondered why I was even using JS at all in its current state. The old adage holds that you can write anything in any language, but there is real value in selecting the best tool for the job. Making something merely run, perhaps at a reduced frame rate, is fundamentally different from being THE language for graphics programming.
In truth, when I first saw JS in the 90s, I thought there was no way it would reasonably do anything in 3D. Things are getting better, but I think you are dramatically underselling what people use, and what exists, today in a professional environment. Your comment about 400 lines of code being impressively small also hints that you haven't dabbled much in this area professionally. While your links are fine and the ShaderToy link above is indeed impressive in today's context, I was personally much more blown away when I dabbled in the demoscene and saw what people were doing with an Atari ST, an Amiga, and PCs with no GPUs, on systems with less power than today's fitness trackers.
Anyway, I can't think of many compelling reasons to start in JS when, IMO, it's better to learn to do things the proper way, even if that can be a bit rough and punishing. JS will surely be more productive at the beginning, but it will also skip teaching you vital things you need to know, as well as some of the fundamental primitives of graphics programming. This is the real world, and no one in a professional environment cares if you are awesome in JS but don't understand the tools and best practices people actually use industry-wide. Moreover, you won't get far if you can't be productive from your first day because you tried to take the easy way out. Just comparing the amount of resources for JS vs. C/C++, especially from professional vendors like NVIDIA, Microsoft, AMD, and so on, makes me think that at best, JS as a starting language for graphics programming only holds for web programmers. For anyone else, it seems to add an extra layer of difficulty.
However, WebGL will likely outlive JS once WebAssembly is introduced.
In some ways, just grabbing an engine like Unreal or Unity is a decent alternative to something like JS for learning (some even let you use JS, or other languages that come with their own traps). Big game engines leave a lot to be desired, and at times they abstract too much or make things like handling shader code annoying. Still, most larger game engines like these two are the closest thing we have to LOGO for 3D games programming.
You can at least get stuff on the screen relatively quickly, learn a few things, and then start replacing it from there. I personally learned quite a bit way back when just decompiling or reverse engineering stuff from people much smarter than me.
Sometimes I feel like there's a lack of things like what the C64 provided for younger kids and adults today. I suppose as expectations have risen, so has complexity of getting going.
Although I understand the gist of this statement, I wish I had all the amazing (and cheap!) powerful stuff we have around now to play with when I was young. Also the effect of easy access to information on the internet now cannot be overstated. You can just pull up a youtube video on any subject you might be interested in immediately. It's amazing.
I remember manually typing out pages of C64 code from a magazine to generate a fractal. After typing for literally hours, the actual single fractal picture took hours to generate... Still satisfying in the end to witness it being generated pixel by pixel, but it was surely a lot of work and needed a lot of patience for a kid. Plus if I had mistyped any of that code, it would have been a big disappointment for sure. Kids nowadays have no idea how tough using computers was back then.
I spent hours, days, and weeks of my life on things related to graphics/games programming such as:
- Building mouse drivers from scratch or implementing them from alternative vendors
- Implementing/Working with DOS protected mode
- Manually compressing memory/implementing swapping
- Implementing blitting from scratch
- Implementing z-buffers from scratch
- Reverse engineering consoles to steal processing power from idiotic sources just to render a tiny bit more data or later, a few more polygons
- Spending a huge chunk of cash to throw in a math co-processor into my machine at home
- Debugging code for hours only to realize the problem was caused by something seemingly unrelated: the tape media was at fault, the floppy was corrupt, or the file system didn't work the way the vendor's specs said it did
- Rendering a scene and then going home, only to come in the next day and see it is still not done
- Converting between 72 billion formats, and interpreting, finding, and/or correcting corrupt data in each one
- Rewriting entire pieces of code bases to squeeze out several more bytes of EMS and XMS
- Implementing 2 or more graphics APIs for the same game. Thank you 3dfx, S3, and many others that pained me, not to mention at a higher-level, OpenGL, DirectX, and so on.
- Doing all my work on 1 platform, then loading it on another. Thank you SGI for taking years off my life.
- Writing matrix operations in pure assembler for the simplest of operations
- Having multiple workstations for reasons such as "this one has the Matrox card in it."
The list goes on. Yeah, I don't miss those days. But I learned a lot; we all did. And the lowered barriers to entry have definitely reduced the signal-to-noise ratio, IMO.
We can't even imagine what kinds of crazy stuff the next generation will come up with, with all the resources they have available now. The barriers to entry are still there, but the goalposts have moved significantly.
This is also available on the authors website: http://gameprogrammingpatterns.com/contents.html
NVidia's already there with the hardware:
Your JS plus WebGL comments are compelling.
Thanks for a golden comment.
It also really depends on what time period we're counting as the "start", and whether we value older games, or periods of higher but simpler production, versus now. For instance, I wrote quite a lot of code for the NES, SNES, Genesis, Atari, arcade machines, C64, and so on in assembler. 68000-based systems alone account for a huge chunk of the sheer volume of games, and most of us wrote quite a bit of the graphics code in pure assembler.
On other systems that were faster, or for different game requirements, we definitely used a ton of C, only moving to C++ later. In the case of C++, it's almost hard to even call it C++ at times. To be honest, most decent game engines I've worked on essentially use C++ to practically rewrite the language, and they go to great lengths to avoid some of the primary selling points of C++. It's a matter of picking and choosing some of the good, or at least powerful, things about C++ while avoiding some of its burdens with regard to games. Obviously general C++ programming is different, so keep in mind I'm only referring to professional games programming. As things have progressed, people are indeed using more of what the newer C++ standards offer, but most of the same things still hold true. Where one has a tough argument with regard to graphics is that so much now happens on the GPU, which one can argue is its own thing, given stuff like shader languages.