After starting college I soon learned of the trials and tribulations of software engineers in the gaming industry. I changed my degree to Computer Engineering, took the embedded track, and recaptured that sense of awe by staring into the abyss of an oscilloscope. That said, I did still take several graphics courses as electives (simply because I loved the subject).
But that is no reason to quit game development entirely. It is still a good hobby that makes you a much better programmer, because it touches so many areas of CS and math: 3D graphics, state machines, trees/graphs, pathfinding, multithreading, memory management, vector math, networking, etc.
It is true that some people, as they get older, claim to find games less interesting. In my opinion, some of them have simply become a bit jaded, and it would be good for them to try to find enjoyment in the small things again. I know, because I was there too. It depends a lot on the culture.
There are also others that start claiming games are useless, immature and that the world will end if youngsters keep playing them.
Entertainment was useful even before civilization existed, and those who can enjoy time spent entertaining themselves do so, even if it takes the form of working on dream projects. Yes, that is playing too.
As for your other point, there are objectively very few (if any) topics as interesting as games for CS people, given how many areas of CS (and non-CS) they encompass at the same time. Hardly any other area of work touches so many domains. Only operating systems, browsers, and CAD apps (and maybe Emacs ;) come close in breadth.
There are also people who still find enjoyment in small things, just not in games. It's not like computer games are that enticing beyond a certain point.
You can wander around shooting aliens, casting spells, or exploring space colonies in some commercial game that's basically the nth clone of 1000 others only so many times before it loses interest...
It's like watching superhero movies your whole life. Yeah, people do it. But people also learn to appreciate more mature movie plots than "kid finds out they are unique, has responsibility to save world" or "millionaire with faults devotes time to fighting supervillains", as their life experiences (e.g. kids, divorce, health scares, job trouble, mortality, love affairs, betrayal, etc.) are no longer those of their brooding, misunderstood teenage years, when they wished they could "show everybody"...
The same way there are more "mature" movie plots than those that you are (kind of) mocking, there are also more mature books than children stories, more varied music than pop summer songs and, indeed, more games than your "nth clone of shooting aliens".
By the way, I have a nice daughter and have had health scares like anybody else, and no, that has nothing to do with maturity or with games becoming boring. There are many game genres, and of course you like different ones at 15 than at 50. I thought exactly like you in my early 30s, when I had literally zero free time and kept thinking "yeah, I am past that, leave it to the young generation, I am responsible now".
Not like those immature infants that play games. Got it!
What is interesting is subjective, and I am not going to seriously tell anyone what is or isn't interesting. But there are certainly areas in CS with just as much breadth and depth as game development, or more. As someone who works in AI now, I would say it covers even more areas of CS, and requires more math.
There is also computer security, which can be as high level as web app security, or as abstract as the number theory powering cryptography.
I love game development, and games, but it is a bit disingenuous to hold game development up as being the ultimate discipline in CS. It is certainly not.
Anyway, neither AI nor security cover that much of CS (they are parts of CS, and there are many others). Games, however, heavily use both of them (and many other parts of CS).
The AI you use in games is very rudimentary and much smaller in scope than the AI used for real-world applications.
A lot of the recent outcry over the games industry being horrible is a bit exaggerated. I'm glad some of these issues are getting attention, but it's far from being like that everywhere.
It's really compelling work.
Maybe your situation hasn't been like that of others, but you are not an entire industry of people. Confirmation bias and small sample sizes are things to watch out for.
I feel like the exploitative nature and below-industry pay and working conditions are what push people away from the field in the first place.
The video game industry generates revenue in the same order of magnitude as the global movie market.
It certainly isn't a small industry.
By the time I managed to display eight sprites on the upper part of the screen, trigger a raster interrupt, move the same sprites to the lower part of the screen and thus have it appear as sixteen sprites on screen, I was hooked. It was like being able to wield magic.
The hardest part is sorting. It takes quite some time to sort data on a 6502. I "invented" my own sorting routine for this, but I later learned that most coders used bubble sort, since most of the time sprites move just a little, so the data is mostly sorted already.
My routine used half the stack. I took ypos >> 2 and used that as an index into the bottom half of the stack. If that slot was occupied, I used the next one, and so on. To collect, I simply used PLA (pull, i.e. pop) instructions. There were holes, of course, so you had to check for empty slots.
That routine took about the same time every frame.
32 sprites was not very hard to do, sorting-wise.
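For anyone curious, the trick can be sketched in Python (a hypothetical reconstruction of the idea, not the original 6502 code; all the names are made up):

```python
# Scatter sprite y-positions into a 128-entry table (the "bottom half of
# the stack" on the C64), indexed by ypos >> 2. On a collision, probe
# forward to the next free slot; then collect in order, skipping holes.
# Note this is only approximately sorted: two values landing in the same
# bucket keep insertion order, which is fine for sprite multiplexing.

EMPTY = None

def sort_sprites_by_y(ypositions):
    table = [EMPTY] * 128
    for y in ypositions:
        i = y >> 2                  # 0..255 maps into 0..63, leaving headroom
        while table[i] is not EMPTY:
            i += 1                  # slot occupied: use the next one
        table[i] = y
    # The "PLA loop": walk the table and skip the empty slots.
    return [y for y in table if y is not EMPTY]

print(sort_sprites_by_y([200, 40, 41, 120, 43]))  # → [40, 41, 43, 120, 200]
```

The appeal is the roughly constant runtime per frame: one scatter pass plus one fixed-length collection pass, instead of a data-dependent comparison sort.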
Graphics is used well beyond the games industry! Anecdotally (talking to friends, getting recruiter emails, etc) Apple, Facebook, NVidia, and Google have been hiring like crazy for non-games graphics-related positions.
I went through round after round of interviews, passing them without problem. But the team I was interviewing for wasn't able to get the information they needed, because I was only getting general CS interviews, so they kept trying to schedule more.
Eventually Amazon reached out, I had some interviews and accepted a position within 2 weeks. I had to tell Google I wasn't going to do any more of their interviews.
They really need to improve their interview process for specialty positions; it really seems optimized to get recent CS grads into generic roles.
I started by writing games and game engines as a teenager (now ~14 years ago), reading "Real-time Rendering" and the "OpenGL Superbible". I've done a lot more with computer graphics since, writing pixel shaders for fun, building a web-based domain specific CAD tool (for a client) etc. Also been doing some computational geometry stuff more recently.
I haven't had much luck finding positions for 'graphics programmers' though. I've found things like 'lead engine programmer' for game companies—which is a bit out of my league, IMO.
But I'd love to work on things like the VR YouTube player that Google did (it doesn't have to be VR; 3D UI design is interesting to me regardless). I don't know how to find these sorts of positions at companies like Apple/FB/Google etc., where I would hope they pay quite well (comments on that also welcome!).
* Fuchsia was looking for graphics engineers at some point. I can't see a graphics listing right now, but maybe you could start as a mobile apps SWE and slowly move into graphics if you build the right contacts.
* Lots of Stadia jobs require Vulkan / OpenGL and general graphics knowledge; some of these openings may be attractive to you!
BTW, I'm pretty sure that even for specialized roles like graphics programming, Google still hires generalists, so expect the same interview grind as everybody else (that is, whiteboards, algorithmic questions, etc.).
I've also been writing engine/graphics code for fun since I was a teenager, my copies of those books are 3rd edition and 5th edition, respectively :)
The other large companies should have similar postings, and obviously games studios.
The harder thing you might run into, though, is that they are generally pretty senior roles, with expectations of quite a bit of existing rendering knowledge, so it can be hard to get a role straight into rendering. My suggestion is to try to find a general role on a team supporting a rendering product. And then to slowly move into that type of work.
> My suggestion is to try to find a general role on a team supporting a rendering product. And then to slowly move into that type of work.
Not sure if that's in reference to my background or just a general comment. I do have about three years' experience doing graphics programming professionally (one year in computational geometry), and many more doing it as a hobby, so working my way back up to it is not exactly an appealing prospect.
I could potentially see that being sensible for me though, depending on details about the job not included in the description. Graphics programming is my deepest specialty, but I am primarily a generalist.
For example, here's a video of the CAD tool I architected, built (the features shown), helped design and hire other developers for etc.: https://www.youtube.com/watch?v=e21tqZebl60
I am actually concerned that the skill set involved there won't be easily usable by Google-scale companies, though; they'd prefer to have a few specialists for the different aspects of the project instead. So maybe this isn't such a good direction for me.
It was mostly a general comment, but I think an important one. I've been a rendering specialist for about 10 years, and I still occasionally encounter the "rendering" role that expects more knowledge of some special type of hardware than I have, and I end up being under-qualified. It's just that kind of field, I guess.
Also, cool looking CAD tool!
I'm primarily interested in algorithms/heuristics for wire drawings (2D ECAD). Do you have any suggestions on where to look for further information?
Also many random smaller niches. I've been programming 3D graphics for machine learning (grabbing labelled data between a videogame and D3D), for enterprise (too much 2D data + latency requirements; only 3D hardware did the job), for GIS (a competitor of Google Street View, image processing), for video broadcasting...
Programmers are generally much better off than artists in both VFX and games. But worse pay, bad deadlines, and poor time management proliferate in both fields.
It was an amazing world to imagine, but it was way beyond what I could do at the time. And now, my ethics prevent me from even attempting it.
By the way, Eck's book at:
is REALLY helpful for learning WebGL.
Do you think that book is a good source to learn "traditional" OpenGL?
I wish there was a better cross-platform intermediate graphics API; Vulkan has a lot of challenges. Metal is very close but it's Apple-specific. Dawn/WebGPU/wgpu seems to be a nice fit in the middle, but it's still in development.
For a general graphics introduction I would recommend software renderer tutorials such as this JS one. The problem that has made people split on whether to study modern OpenGL directly is that 1.x is much easier to configure, because more parts of its pipeline are fixed-function: there are fewer lines of code involved, and you hit fewer episodes of "why doesn't it draw anything?".

Software rendering lets you get around that, because you configure only exactly as much as you have built, and because you built it, you understand it (to some depth). When you move to any current GPU API, you have to grasp both what the hardware wants and the concepts you're looking for. In practice, the safest way to proceed is to very gradually build up and extend an example codebase, so that you have a testing sandbox with easy-to-toggle modes for debugging, and then adapt that into the application.
Later, I found another book, can't remember the author, which described even more things, like z-buffering and so on. Very interesting and very useful.
Don't worry too much about OpenGL vs. DirectX vs. Vulkan. There are underlying principles that apply broadly to all of them.
Alternatively, if you want a book that goes more in depth but still lets you take your first steps, the OpenGL Programming Guide (8th edition or later) is good.
Expect it to require some patience no matter what route you take. Graphics programming is finicky.
There are a ton of computer graphics intro books. Not so much on big-world architecture.
Region boundaries are a fascinating topic, see this for example: https://improbable.io/games/blog/distributed-physics-without...
But as Woody Harrelson would say, "let's keep it about Computer Graphics, people..." ;)
But: this looks really cool. I'm sending it to my brother who is a newbie programmer and wants to know about graphics.
You're correct about the source code. Perhaps I should make it easier to see or download.
>.....so this book covers their specific strenghts:
Reversed h and t in strengths.
So, are you going to release a print version? (I am one of the few who still love them.)
Do you plan on adding more chapters after rasterization?
No definite plan for a print version, but it could be fun, if nothing else to have a copy on my own shelf! There's a few sections I need to complete first, though.
I do have plans to add a few more chapters covering more advanced topics, but again, nothing definite.
When I was asked to teach a workshop on computer graphics, I wanted to make the concepts accessible to those who were either rusty, or hadn't even encountered the necessary math. Given a limited amount of teaching time, I introduced only the most relevant math, but did so from first principles. Hopefully my treatment can be useful to others: https://avik-das.github.io/build-your-own-raytracer
This is the approach I used to teach software rendering in gameswithgo.org: we make a putpixel function, as in the old days, and use it to build up a texture, which we draw each frame. The texture becomes what used to be the framebuffer.
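A minimal sketch of that putpixel-over-a-buffer setup, here in Python (illustrative only; the names and the 320x200 size are my own assumptions, and the real version uploads the buffer as a texture each frame):

```python
# A flat array of RGB tuples stands in for the texture that used to be
# the framebuffer: one entry per pixel, row-major order.
WIDTH, HEIGHT = 320, 200
framebuffer = [(0, 0, 0)] * (WIDTH * HEIGHT)

def putpixel(x, y, color):
    # Clip to the screen, then write one pixel into the buffer.
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:
        framebuffer[y * WIDTH + x] = color

# Build the frame pixel by pixel, e.g. a white horizontal line:
for x in range(50, 100):
    putpixel(x, 100, (255, 255, 255))
# ...then hand `framebuffer` to the GPU as a texture and draw it.
```

Everything you learn about rasterization then happens in plain code on this buffer, with the GPU reduced to a dumb blitter.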
And while on the topic of changing hardware forcing us to change our algorithms, I wonder if teaching scanline rendering is even worth it these days. Every CPU supports SIMD and a SIMD-optimised software rasteriser is a very different beast than the classic scanline triangle rasterisers of old.
Probably the fastest software rasteriser for modern CPUs is OpenSWR, written by Intel mostly to keep themselves relevant in the data visualisation space until GPUs eat HPCs (GPUs still can't help you when your dataset is measured in hundreds of gigabytes of graphics data), but it scales perfectly fine down to desktop CPUs. The code for it is in the Mesa tree. I wish I could explain exactly how it works, but it's a pretty big beast and I haven't had the time to read and understand all of it. Intel gave a presentation on it at the HPC developers conference back in 2015.
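To illustrate how that "wide" style differs from classic scanline walking, here is a tiny NumPy sketch (my own illustration, not OpenSWR's code): evaluate the three edge functions of a triangle for a whole grid of pixels at once and keep the pixels where all three are non-negative, which is exactly the data-parallel shape SIMD rasterisers exploit per tile.

```python
import numpy as np

def edge(p0, p1, px, py):
    # Signed-area test: which side of the edge p0 -> p1 is (px, py) on?
    return (px - p0[0]) * (p1[1] - p0[1]) - (py - p0[1]) * (p1[0] - p0[0])

def rasterise(v0, v1, v2, w, h):
    # Coordinates of every pixel in the tile, evaluated "in parallel"
    # (NumPy vectorisation stands in for SSE/AVX lanes here).
    py, px = np.mgrid[0:h, 0:w]
    inside = (
        (edge(v0, v1, px, py) >= 0) &
        (edge(v1, v2, px, py) >= 0) &
        (edge(v2, v0, px, py) >= 0)
    )
    return inside  # boolean coverage mask for the tile

# Cover an 8x8 tile with a right triangle (screen coords, y down):
mask = rasterise((0, 0), (0, 8), (8, 0), 8, 8)
```

There is no per-scanline edge stepping at all; the whole tile is classified in a handful of wide operations, which is why the code ends up looking so unlike the old incremental rasterisers.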
It was even more exciting dealing with rendering high resolution images with only a single line of bitmap. #thegoodolddays
I did this on my Amiga a long time ago (1995), and then re-created the feeling using ShaderToy: https://www.shadertoy.com/view/lds3z8
It's so cool: A scene that took 10 minutes to render in 1995 - nowadays you can see that scene through a moving camera raytraced in real time in your browser!
If anyone is interested, I'll post the link (it's crappy)
I suggest everyone try implementing a rasterizer in a new language; it covers a lot of language fundamentals, and a fair amount of reasonably efficient memory manipulation is required (if you aim for that, at least), which forces you to get intimate with the language. It was great Rust practice for me, at least.