Computer Graphics from Scratch (2017) (gabrielgambetta.com)
651 points by pcr910303 19 days ago | hide | past | web | favorite | 94 comments



Like many other programmers, I originally picked up programming because I wanted to write games. I spent time on my own in high school messing around with rudimentary 2D libraries in C++. That feeling of awe and accomplishment the first time an image successfully moved across the screen was what got me hooked.

After starting college I soon learned of the trials and tribulations of software engineers in the gaming industry. I changed my degree to Computer Engineering, took the embedded track, and recaptured that sense of awe by staring into the abyss of an oscilloscope. That said, I did still take several graphics courses as electives (simply because I loved the subject).


Seems like many people start and end this way. Games are a great motivation to learn how to code. But once you do, you find that other subfields of CS are more interesting and useful. And you just get older. :)

But that is no reason to quit game development entirely. It is still a good hobby that makes you a much better programmer, because it touches so many areas of CS and math: 3D graphics, state machines, trees/graphs, pathfinding, multithreading, memory management, vector math, networking, etc.


> But once you do, you find that other subfields of CS are more interesting and useful. And you just get older. :)

It is true that some people, as they get older, claim to find games less interesting. In my opinion, some of them have simply become a bit jaded, and it would be good for them to try to find enjoyment in the small things again. I know, because I was there too. It depends a lot on the culture.

There are also others who start claiming games are useless and immature, and that the world will end if youngsters keep playing them.

Entertainment has been useful since before civilization existed, and those who can enjoy time entertaining themselves do so, even if it takes the form of working on dream projects. Yes, that is playing too.

As for your other point, there are objectively very few (if any) topics as interesting as games for CS people, given how many areas of CS (and non-CS) they encompass at the same time. Hardly any other line of work touches so many domains. Only operating systems, browsers and CAD apps (and maybe Emacs ;) come close in breadth.


"Interesting" is a very subjective term. E.g. some find theory much more interesting than games.


What do you mean by theory?


>It is true that some people, as they get older, claim to find games less interesting. In my opinion, some of them have simply become a bit jaded, and it would be good for them to try to find enjoyment in the small things again.

There are also people who still find enjoyment in small things, just not in games. It's not like computer games are that enticing beyond a certain point.

You can only wander around shooting aliens, casting spells, or exploring space colonies in some commercial game that's basically the nth clone of 1000 others before it so many times before it loses its appeal...

It's like watching superhero movies your whole life. Yeah, people do it. But people also learn to appreciate more mature movie plots than "kid finds out they are unique, has responsibility to save world" or "millionaire with faults devotes time to fight supervillains", as their life experiences (e.g. kids, divorce, health scares, job trouble, mortality, love affairs, betrayal, etc.) no longer resemble the brooding misunderstood teenager years when they wished they would "show everybody"...


That is exactly what I was talking about! You are portraying games as "immature activities" like "superhero movies", rather than just "movies".

The same way there are more "mature" movie plots than those that you are (kind of) mocking, there are also more mature books than children stories, more varied music than pop summer songs and, indeed, more games than your "nth clone of shooting aliens".

By the way, I have a nice daughter and have had health scares like anybody else, and no, that has nothing to do with maturity or with games becoming boring. There are many game genres, and of course you like different ones when you are 15 vs. when you are 50. I thought exactly like you in my early 30s, when I had literally zero free time and kept thinking "yeah, I am past that, leave it to the young generation, I am responsible now".


The day you wake up and consider yourself "adult" is the day you start to die.


[flagged]


Hah! Because driving to the lake, playing poker and going to the movies at the mall makes one a very mature and serious person.

Not like those immature infants that play games. Got it!


>As for your other point, there are objectively very few (if any) topics as interesting as games for CS people, given how many areas of CS (and non-CS) they encompass at the same time. Hardly any other line of work touches so many domains. Only operating systems, browsers and CAD apps (and maybe Emacs ;) come close in breadth.

What is interesting is subjective, and I am not going to seriously tell anyone what is interesting or not. But there are certainly areas of CS with just as much breadth and depth as game development, or more. As someone who works in AI now, I would say it covers even more areas of CS, and requires more math.

There is also computer security, which can be as high level as web app security, or as abstract as the number theory powering cryptography.

I love game development, and games, but it is a bit disingenuous to hold game development up as being the ultimate discipline in CS. It is certainly not.


If your definition of "interesting" is how much math it requires, I have bad news for you... :)

Anyway, neither AI nor security covers that much of CS (they are parts of CS, and there are many others). Games, however, heavily use both of them (and many other parts of CS).


AI covers many "parts" of CS.

The AI you use in games is very rudimentary and smaller in scope compared to AI used for applications in the real world.


I'm currently at a games studio. Graphics in general fascinate me, and I'd be happy at any job which allows for fast iterating and a visual interactive system to work on, where the underlying problems are difficult and performance sensitive.

A lot of the recent outcry over the games industry being horrible is a bit exaggerated. I'm glad some of these issues are getting attention, but it's far from being like that everywhere.

It's really compelling work.


Recent? It's far from recent. There have just been some very public debacles lately that have been attributed to it, but it's been an ongoing issue for a lot of people for a long, long time.

Maybe you've been in a situation that hasn't been like that of others but you are not an entire industry of people. Confirmation bias and small sample size are things to watch out for.


> But once you do, you find that other subfields of CS are more interesting and useful.

I feel like the exploitative nature and below-industry pay and working conditions are what push people away from the field in the first place.

The video gaming industry generates revenue on the same order of magnitude as the global movie market. It certainly isn't a small industry.


For me it was my big brother passing his Commodore 64 down to me as he got himself an Amiga. First it was just playing games, then doing some stuff in the built-in BASIC, and then following an assembly language course in a monthly magazine.

By the time I managed to display eight sprites on the upper part of the screen, trigger a raster interrupt, move the same sprites to the lower part of the screen and thus have it appear as sixteen sprites on screen, I was hooked. It was like being able to wield magic.


My thing was sprite multiplexers. In theory it's very simple: you sort the sprites by y-position, then use raster interrupts to reprogram the sprite registers for "(spriteN++ & 7)".

The hardest part is sorting. It takes quite some time to sort data on a 6502. I "invented" my own sorting routine for this, but I later learned that most coders used bubble sort, since sprites usually move just a little each frame, so the data is mostly sorted already.

My routine used half the stack. I took ypos >> 2 and used that as an index into the bottom half of the stack. If the position was occupied I used the next one, etc. To collect, I simply used the PLA (pull) instruction. There were holes of course, so you had to check for empty slots.

That routine took about the same time every frame.

32 sprites was not very hard to do, sorting-wise.
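The routine described above is essentially a bucket sort with linear probing over the stack page. A rough Python sketch of the idea (the function name and the 128-slot table size are my own illustration, not the original 6502 code):

```python
def multiplex_sort(ypos_list, slots=128):
    """Approximate sort by y position: bucket each sprite at ypos >> 2
    (4-pixel bands), linear-probe to the next free slot on a collision,
    then sweep the table in order, skipping the empty 'holes'.
    Mirrors the 6502 trick of using half the stack page as the table."""
    table = [None] * slots
    for y in ypos_list:
        i = y >> 2                 # coarse bucket index
        while table[i] is not None:
            i += 1                 # slot occupied: try the next one
        table[i] = y
    # collecting is one linear pass (a PLA loop on the 6502),
    # checking each slot for a hole
    return [y for y in table if y is not None]
```

Because the buckets are 4 pixels wide, the result is only approximately sorted within a band, which is good enough for a multiplexer; and as the comment above notes, the running time is nearly constant per frame.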


Yes, bubble sort was an obvious choice given it's easy to understand and implement. I came across this article some time ago, which goes through many different ways to sort sprites for general multiplexing and presents 'continuous insertion sorting' as the winner; it was a fantastic trip down memory lane.

http://selmiak.bplaced.net/games/c64/index.php?lang=eng&game...


Similar story here, but on a ZX Spectrum :)


> After starting college I soon learned of the trials and tribulations of software engineers in the gaming industry.

Graphics is used well beyond the games industry! Anecdotally (talking to friends, getting recruiter emails, etc) Apple, Facebook, NVidia, and Google have been hiring like crazy for non-games graphics-related positions.


Google's comically bad at hiring for it though so I wouldn't count on them. When I interviewed there after having worked on graphics extensively at Apple and other places, they asked me how to load balance real time search suggestions and I failed, obviously.


Ohh I agree, but I had an entirely different experience.

I went through round after round of interviews, passing them without problem. But the team I was interviewing for wasn't able to get the information they needed, because I was just getting general CS interviews, so they kept trying to schedule more.

Eventually Amazon reached out, I had some interviews and accepted a position within 2 weeks. I had to tell Google I wasn't going to do any more of their interviews.

They really need to improve their interview process for specialty positions; it seems optimized to get recent CS grads into generic roles.


You don't understand. Interviewing at Google is not about recruiting outside talent. It's about making the inside talent feel special when we blow off another qualified candidate.... I was one of those special-feeling interviewers with 32 onsites and zero hires (I recommended 10 by the way). I finally quit that shitshow and left Google.


Anyone else have info on this?

I started by writing games and game engines as a teenager (now ~14 years ago), reading "Real-time Rendering" and the "OpenGL Superbible". I've done a lot more with computer graphics since, writing pixel shaders for fun, building a web-based domain specific CAD tool (for a client) etc. Also been doing some computational geometry stuff more recently.

I haven't had much luck finding positions for 'graphics programmers' though. I've found things like 'lead engine programmer' for game companies—which is a bit out of my league, IMO.

But I'd love to work on things like the VR YouTube player that Google did (it doesn't have to be VR; 3D UI design is interesting to me regardless). I don't know how to find these sorts of positions at companies like Apple/FB/Google etc., where I would hope they pay quite well (comments on that also welcome!).


A few projects I can think of requiring graphics programming at Google:

* Fuchsia was looking for graphics engineers at some point. I can't see a listing about graphics right now but maybe you could start as a mobile apps SWE and slowly move into graphics if you build the right contacts.

* Lots of Stadia jobs require Vulkan / OpenGL and general graphics knowledge; some of these offers may be attractive to you! [1].

BTW I'm pretty sure that even for specific roles such as graphics programming, Google still hires generalists, so expect the same interview grind as everybody else (that is, whiteboards, algorithmic questions, etc. etc.).

1: https://careers.google.com/jobs/results/?company=Google&comp...


Perhaps I just got lucky, but I started my first games industry job almost a year ago and I've been doing graphics work for the vast majority of that time. I came in as a regular junior level Software Engineer, mentioned I liked graphics and tools development, and was mostly given graphics work as a result.

I've also been writing engine/graphics code for fun since I was a teenager, my copies of those books are 3rd edition and 5th edition, respectively :)


Hmm, yep, tools and graphics are what I'd be most interested in as well. 3rd and 3rd here :)


It's not super difficult; they are generally listed like any other job. Here are a few from a quick Google Jobs search:

https://careers.google.com/jobs/results/6656793930563584/ https://careers.google.com/jobs/results/5104577030586368/ https://careers.google.com/jobs/results/6469646855372800/

The other large companies should have similar postings, and obviously games studios.

The harder thing you might run into, though, is that they are generally pretty senior roles, with expectations of quite a bit of existing rendering knowledge, so it can be hard to get a role straight into rendering. My suggestion is to find a general role on a team supporting a rendering product, and then to slowly move into that type of work.


Huh. Well, it has been a couple years since I last looked, but nothing turned up for me before.

> My suggestion is to try to find a general role on a team supporting a rendering product. And then to slowly move into that type of work.

Not sure if that's in reference to my background or just a general comment. I do have about three years experience doing graphics programming professionally (1 year computational geometry), and many more doing it as a hobby, so working my way back up to it is not exactly an appealing prospect.

I could potentially see that being sensible for me though, depending on details about the job not included in the description. Graphics programming is my deepest specialty, but I am primarily a generalist.

For example, here's a video of the CAD tool I architected, built (the features shown), helped design and hire other developers for etc.: https://www.youtube.com/watch?v=e21tqZebl60

I am actually concerned that the skill set involved there won't be easily usable by Google-scale companies though--that they'd prefer to have a few specialists for the different aspects of the project instead. So maybe this isn't such a good direction for me.


> Not sure if that's in reference to my background or just a general comment. I do have about three years experience doing graphics programming professionally (1 year computational geometry), and many more doing it as a hobby, so working my way back up to it is not exactly an appealing prospect.

It was mostly a general comment, but I think an important one. I've been a rendering specialist for about 10 years, and I still occasionally encounter a "rendering" role that expects more knowledge of some special type of hardware than I have, and I end up being under-qualified. It's just that kind of field, I guess.

Also, cool looking CAD tool!


I like your CAD tool - I'm interested in 2D only, but you seem to have nailed the user interaction bits for constrained movement.

I'm primarily interested in algorithms/ heuristics for wire drawings (2D ECAD). Do you have any suggestions on where to look for further information?


Not really unfortunately :/ We were basically winging it without really digging into the research or anything, except for computational geometry algorithms.


Visual effects industry as well.


Also CAD, CAM, CAE.

Also many random smaller niches. I've been programming 3D graphics for machine learning (grabbing labelled data between videogame and D3D), for enterprise (too much 2D data + latency requirements, only 3D hardware did the job), for GIS (a competitor of google street view, image processing), for video broadcasting..


Which has just as many (if not more) trials and tribulations as the gaming industry.


Not for programmers


I don't agree.

Programmers are generally much better off than artists in both VFX and games. But worse pay, bad deadlines, and poor time management proliferate in both fields.


The desire to write games came later for me. What got me into programming was the desire to write viruses: they were this magical thing that I viewed as a game. A game to enter a system, stay there hidden, spread and poke around, completely on their own.

It was an amazing world to imagine, but it was way beyond what I could do at the time. And now, my ethics prevent me from even attempting it.


Author here. What a surprise to open HN and see this on the front page :) Happy to answer any questions.


I am preparing to teach a computer graphics course for the first time and have been reading up on WebGL, etc. It looks like this will be a nice intro to the lower level aspects of graphics. Thank you!

By the way, Eck's book at:

http://math.hws.edu/graphicsbook/

is REALLY helpful for learning WebGL.


Thanks for mentioning it.

Do you think that book is a good source to learn "traditional" OpenGL?


Traditional OpenGL is not useful except as a history lesson. It will give you bad practices when you then try to use a newer API (not even counting DX12/Vulkan; the bad practices will stick with you even if you just use DX11/modern OpenGL). WebGL is a pared-down version of GLES, which is still not perfect, but a big step in the right direction.

I wish there was a better cross-platform intermediate graphics API; Vulkan has a lot of challenges. Metal is very close but it's Apple-specific. Dawn/WebGPU/wgpu seems to be a nice fit in the middle, but it's still in development.


So would that book be a good way to get into graphics programming? I see that it uses OpenGL 1.1


I suggest Anton's OpenGL 4[0] as a starter guide for modern GPU pipelines.

For a general graphics introduction I would recommend software renderer tutorials such as this JS one[1]. The reason people are split on whether to study modern OpenGL directly is that 1.x is much easier to configure, because more parts of its pipeline are fixed-function: there are fewer lines of code involved, and you have fewer episodes of "why doesn't it draw anything".

Software rendering lets you get around that, because you configure only exactly as much as you have built, and because you built it you understand it (to some depth). When you move to any current GPU API, you have to grasp both what the hardware wants and the concepts you're looking for. In practice, the safest way to proceed is to very gradually build up and extend an example codebase, so that you have a testing sandbox with easy-to-toggle debugging modes, and then adapt that into the application.

[0] http://antongerdelan.net/opengl/

[1] https://kitsunegames.com/post/development/2016/07/11/canvas3...


I would not bother learning anything before OpenGL 3. The core stuff from 3 onwards is still reasonable for use today, and most changes build upon it rather than fundamentally changing the model, but the older stuff is quite different.


Teaching OpenGL 1.1 is like starting a chemistry course by going over alchemy. Begin with modern ideas, not the attempts to find phlogiston.


1.1 is 22 years old.


The best books for learning "traditional" OpenGL are the official OpenGL Programming Guide (aka the OpenGL Red Book) and the OpenGL SuperBible.


Yes, Eck's book is also a good intro to OpenGL. It is that rare book that is both readable and covers both OpenGL and WebGL. Of course, it is not as comprehensive as the Red Book. But, in my opinion, that also makes it a more approachable introduction to OpenGL.


I've been following your Fast Paced Multiplayer articles[1] as I write my first networked game; it is really helpful, thanks!

[1] http://www.gabrielgambetta.com/client-server-game-architectu...


Thanks for your kind words, I'm glad you found it helpful :)


Thank you very much for this. I'm glad there are still people writing books like this one. When I was a teenager, I found a book by L. Ammeraal which was awesome reading. It taught how to build a simple 3D graphics renderer with hidden edge removal and other interesting features. Unfortunately, back then I was a Pascal fan, so it took me some significant effort to translate bits of the code in the book from C to Pascal.

Later, I found another book, can't remember the author, which described even more things, like z-buffering and so on. Very interesting and very useful.


http://www.opengl-tutorial.org/ is a pretty fantastic resource for learning this stuff.

Don't worry too much about OpenGL vs. DirectX vs. Vulkan. There are underlying principles that apply broadly to all of them.

Alternatively, if you want a book that goes more in depth but still lets you take your first steps, the OpenGL Programming Guide, 8th+ edition, is good.

Expect it to require some patience no matter what route you take. Graphics programming is finicky.


Maybe it was "Computer Graphics: Principles and Practice", most commonly known as "the Foley Van-Dam"?


You're at Improbable. Tell us more about how you deal with scaling and dynamic region boundaries. How do you keep message traffic from bottlenecking the system? Can you really get a thousand avatars in a big room and not have the system choke? What would it take to port Second Life / Open Simulator to Spatial OS?

There are a ton of computer graphics intro books. Not so much on big-world architecture.


Maybe I'll write a book about Improbable some day - but more likely about the mythical origin story :) That said, at two different times I managed teams that made demos, tutorials and documentation, most recently https://twitter.com/gabrielgambetta/status/92135876311796121..., but there's people much more qualified than I am to write technical stuff about SpatialOS nowadays.

Region boundaries are a fascinating topic, see this for example: https://improbable.io/games/blog/distributed-physics-without...

But as Woody Harrelson would say, "let's keep it about Computer Graphics, people..." ;)


I'd like to see the book Animats mentioned as well. Perhaps you should encourage one of your colleagues to write it.


Just a heads up, it looks like your script that renders the math bits is failing to load intermittently, making the equations very difficult to read sometimes.

But: this looks really cool. I'm sending it to my brother who is a newbie programmer and wants to know about graphics.


Not sure if this is related to errors loading resources, but all the "Source code and live demo" links go to pages that seem to render the demo in javascript but don't show any code. Maybe the reader is intended to view source to see it?


Thanks for letting me know, I'll take a look. IIRC the script is hosted externally, maybe I should host a copy.

You're correct about the source code. Perhaps I should make it easier to see or download.


Thanks for writing this! I've been interested in computer graphics for a while, and too many tutorials dive straight into the details without explaining key concepts. I'm eager to get started reading.


Small typo in the 9th paragraph of the Introduction. Looking forward to reading it.


Hmmm, can't find it. Mind telling me what it is?


"While their sets of features have considerable overlap, they aren’t identical, so this book covers their specific strenghts" - it should be "strengths".


Ugh, thanks. Will fix.


Not the guy who mentioned it but:

>.....so this book covers their specific strenghts:

Reversed h and t in strengths.


No questions, but I'd like to thank you for a well written article that takes me back to when I was writing bzone style wireframe games as a kid.


Glad to hear you've found it interesting :)


Thank you so much for writing this. This is one of my favorites on computer graphics. I love the simple and to-the-point approach; similar are the github.com/ssloy repos.

So, are you going to release a print version? (I am one of the few who still love them.)

Do you plan on adding more chapters after rasterization?


Thanks for your kind words :)

No definite plan for a print version, but it could be fun, if nothing else to have a copy on my own shelf! There's a few sections I need to complete first, though.

I do have plans to add a few more chapters covering more advanced topics, but again, nothing definite.


This is awesome! One thing I really appreciate is building up the mathematical concepts. For example, instead of providing a ray-sphere intersection formula, the tutorial talks about representing a ray and a sphere, and how the two representations must match at intersection points.

When I was asked to teach a workshop on computer graphics, I wanted to make the concepts accessible to those who were either rusty, or hadn't even encountered the necessary math. Given a limited amount of teaching time, I introduced only the most relevant math, but did so from first principles. Hopefully my treatment can be useful to others: https://avik-das.github.io/build-your-own-raytracer
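The ray-sphere case mentioned above works out like this: a point on the ray is P = O + tD, a point on the sphere satisfies |P - C|^2 = r^2, and requiring both at once gives a quadratic in t. A minimal Python sketch of that derivation (my own illustration, not code from either tutorial):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the smallest non-negative t where origin + t*direction
    lies on the sphere |P - center| = radius, or None on a miss.
    Substituting the ray into the sphere equation gives
    a*t^2 + b*t + c = 0 with the coefficients below."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                    # no real roots: ray misses
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2 * a), (-b + sq) / (2 * a)):
        if t >= 0:
            return t                   # nearest hit in front of the origin
    return None
```
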


I found it surprisingly hard to get access to the primitives mentioned in the intro, on a modern pc. How does one draw a pixel on a specific coordinate in a modern OS?


SDL is generally the most painless way to get a framebuffer you can draw in, cross-platform, and it also handles creating windows, reading input and playing sounds if you want to. You can use it from C and pretty much any other language has bindings. Also any GUI toolkit will have a widget that is a blank canvas (Qt and Wx both do, GTK should as well, but I haven't used it much). Or you can write in javascript and use canvas.


The best/easiest way is still to use OpenGL/DirectX/etc., but draw your texture in software and then draw the texture full screen.

This is the approach I used to teach software rendering in gameswithgo.org: we make a putpixel function as in the old days, and use it to build up a texture, which we draw each frame. The texture becomes what used to be the framebuffer.
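As a sketch of what that looks like (a hypothetical minimal version in Python; the names and the 320x200 size are my own, and the actual course uses Go): keep an RGBA byte buffer on the CPU, write pixels into it, and upload it as a full-screen texture once per frame.

```python
WIDTH, HEIGHT = 320, 200

# CPU-side "framebuffer": 4 bytes (RGBA) per pixel; each frame this
# whole buffer is uploaded as a texture and drawn as a full-screen quad
framebuffer = bytearray(WIDTH * HEIGHT * 4)

def putpixel(x, y, r, g, b, a=255):
    """Write one pixel, old-school style."""
    i = (y * WIDTH + x) * 4
    framebuffer[i:i + 4] = bytes((r, g, b, a))
```
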


Copying the entire framebuffer over the PCI bus makes me sad, though it's obviously way faster than when you could memory-map the framebuffer. These days you can't map texture objects in memory because texture storage on GPUs is its own black magic and textures are not stored as simple linear framebuffers. It would be fun to restrict yourself to a well-documented architecture, e.g. give everyone a cheap intel-based notebook and write out commands directly for the graphics chip. I guess while fun that would teach more about doing low-level hardware access than graphics themselves.

And while on the topic of changing hardware forcing us to change our algorithms, I wonder if teaching scanline rendering is even worth it these days. Every CPU supports SIMD and a SIMD-optimised software rasteriser is a very different beast than the classic scanline triangle rasterisers of old.

Probably the fastest software rasteriser for modern CPUs is OpenSWR [0], written by Intel mostly to keep themselves relevant in the data visualisation space until GPUs eat HPCs (GPUs still can't help you when your dataset is measured in hundreds of gigabytes of graphics data), but it scales perfectly fine down to desktop CPUs. The code for it is in the Mesa tree [1]. I wish I could explain exactly how it works, but it's a pretty big beast and I haven't had the time to read and understand all of it. Intel gave a presentation on it at the HPC developers conference back in 2015 [2]

[0] http://openswr.org/index.html

[1] https://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/driv...

[2] https://youtu.be/gpYd18E3TWc?t=1531


Here's the sample code I pass around for doing that. It's a minimal example using SDL. AFAICT, it's pretty much the fastest way to get graphics from the CPU RAM to the screen.

https://gist.github.com/CoryBloyd/6725bb78323bb1157ff8d4175d...


Not quite "writing a pixel to the screen", but to play around, you can use a <canvas> tag in HTML, like my demos do; you can use SDL, as another poster commented; or you can write some trivial code to write a PPM file to disk (or use something like libpng) to create image files on disk.
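The PPM route really is trivial; a minimal sketch of the binary (P6) variant, with a helper name of my own choosing:

```python
def write_ppm(path, width, height, pixels):
    """Save row-major (r, g, b) tuples (values 0-255) as a binary PPM
    (P6) file; most image viewers and converters open it directly."""
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        f.write(bytes(v for px in pixels for v in px))
```
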


A lot of proof-of-concept code renders to memory (and putpixel becomes screen[y*width+x]=pixel) and then saves the memory to a graphics file. If you've already been motivated to work with graphics you probably already know how to save a PNG file. If you have ImageMagick you can even render to a raw, uncompressed, non-standard graphics file and describe the format of the file in the 'display' command when you want to view the file. Also standard formats like BMP and TGA can be pretty easy to use without a library.


It never was that easy (hang on, were there BIOS functions for that?). Anything that requires drilling down to draw a dot is too slow anyway. Obviously blitting an offscreen bitmap just leaves you the problem of dealing with pitch.

It was even more exciting dealing with rendering high resolution images with only a single line of bitmap. #thegoodolddays


MOV AX, 13h; INT 10h :)


JS canvas has a great and very basic API, and Go's image packages let you set pixels pretty easily too.


PyGame is a really approachable way to play around with 2d graphics. I believe it's based on SDL.


Delphi makes it easy


Canvas in a browser


Yes, it is very rewarding to program your own graphics routines or your own simple implementation of a raytracer and then see the correctly rendered scene on screen for the first time.

I did this on my Amiga a long time ago (1995), and then re-created the feeling using ShaderToy: https://www.shadertoy.com/view/lds3z8

It's so cool: A scene that took 10 minutes to render in 1995 - nowadays you can see that scene through a moving camera raytraced in real time in your browser!


Anyone know how this guide compares to tinyrenderer?

https://github.com/ssloy/tinyrenderer/wiki/Lesson-0:-getting...


Graphics was one of the hardest courses when I was in school for game development. We started with raytracing, did rasterization from scratch, then worked with DirectX and OpenGL. I formed lifelong friendships struggling through toon shading, bump mapping, and particles with HLSL shaders.


This ought to be super helpful for my toy rasterizer project written in Rust.

If anyone is interested, I'll post the link (it's crappy).

I suggest everyone try implementing a rasterizer in a new language; it exercises a lot of language fundamentals, and (if you try for it, at least) requires a lot of reasonably efficient memory manipulation, which forces you to be intimate with the lang. Great Rust practice for me, at least.


Any dummies guide to shaders?



The capitalization of Scratch in the title made me first guess that they were using the educational programming language Scratch.


This is the capitalisation the OP used; the one I use myself is lowercase. Perhaps a mod (paging dang?) could fix this?


This is primitive, but not quite from scratch - that makes me think of TempleOS. This is more like computer graphics without libraries.


Fair enough. It's definitely "from scratch" compared with most graphics courses which start at the OpenGL level, but as with whether a language is "low level", it's somewhat subjective.



