Hacker News
How to Start Learning Computer Graphics Programming (erkaman.github.io)
754 points by phodo on Jan 6, 2019 | 120 comments

For anyone looking for a great starter codebase that you can poke around in to learn the fundamentals of CG, check out Scotty3D [0], the base code for CMU's computer graphics class 15-462/662 (I am a TA for this class). It is a supplemental codebase to the lecture slides [1], which are all available online. The idea is that Scotty3D is a full 3D modeling suite akin to Blender or Maya, but all of the actual functionality is stripped out, waiting to be implemented by students.

It includes code for 3D model loading, linear algebra, basic (OpenGL) rendering, and UX, and provides stubs for you to implement:

    - local geometry operations (ex: edge split, vertex/edge/face bevel, vertex/edge/face delete, etc)
    - global geometry operations (ex: upsampling via Catmull-Clark subdivision, downsampling using quadric error, etc)
    - a path tracer that supports Diffuse/Mirror/Glass materials with Fresnel reflections and environment lighting (ex: you can render the Cornell box)
    - an animation suite that implements skinning, spline interpolation, and basic simulation (wave sims)
There's a lot of documentation in the wiki [2] to help you get started. A great resource, even if you can't take the class yourself!

[0] https://github.com/cmu462/Scotty3D

[1] http://15462.courses.cs.cmu.edu/fall2018/lectures

[2] https://github.com/cmu462/Scotty3D/wiki

Ah, the good old OpenGL fixed function pipeline, deprecated for over 11 years now.

For something a bit more modern, I'd recommend [0], but one might argue that old OpenGL is easier to learn since you don't have to set up your own shaders.

[0] https://learnopengl.com/Introduction

To clarify, the point of Scotty3D is not to teach OpenGL -- the students do not write any OpenGL, DirectX, Vulkan, etc in the class. The OpenGL that is there is simply used to render the 3D models and the UI, so updating that code is pretty low priority.

One of my longer-term goals as a TA for the class is to update Scotty3D to Vulkan or modern OpenGL.

EDIT: I wanted to expand on this point, as this is actually an important part of the philosophy of the design of the course. As the OP argues as well, it is more important to learn the fundamentals of CG theory (eg rasterization, the rendering equation, solving ODEs/PDEs) than the specifics of any particular implementation (OGL, DX, etc). After taking 462, many students (including myself!) take the class 15-466 Computer Game Programming [0], which goes deep into more modern OpenGL implementations (admittedly, it's OGL 3.3, but it still covers shaders/VBOs/other important concepts that translate to modern APIs).

[0] http://graphics.cs.cmu.edu/courses/15-466-f18/

I wanted to support what you're saying. I've worked in visual effects for over a decade now. While I've dabbled with OpenGL, I think the only practical application I've had was taking a stab at writing a PyOpenGL widget for viewing Alembic models in a custom PySide asset browser for a studio. This was when Alembic was much less mature, and it never ended up getting used. However, I have done a lot of dealing with color spaces, debugging/optimizing scanline and ray casting renderers, computing/storing surface normals, projection and other space transforms, and simulations. I've written toy versions of a lot of those things, but mostly I was debugging black-box systems written by others or troubleshooting assets generated or consumed by one of these tools.

Even if you're talking about game engines, there's still a whole lot more to learn. This game engine book [1] has one chapter about the rendering engine and 16 more about other topics. Each chapter in there is at least one hefty book to get a good working knowledge of the topic.

It's great you have a resource where people can learn and experiment with these other things without having to learn to write all the code around it.

[1] https://www.gameenginebook.com/toc.html

> Even if you're talking about game engines, there's still a whole lot more to learn. This game engine book [1] has one chapter about the rendering engine and 16 more about other topics. Each chapter in there is at least one hefty book to get a good working knowledge of the topic.

Which is why, when people without game industry experience start discussing 3D API adoption, they lose sight of how little the APIs actually influence the whole engine codebase.

There's something charming and engaging about the "legacy" fixed function pipeline that we've lost with our increasing focus on lower and lower level APIs. The ability to have a 10 line hello-world program that draws a colored triangle on the screen is magical, and encouraging to beginners, and that experience can't be replaced by the massive boilerplate and "copy-paste-this-stuff-dont-worry-about-what-it-does-yet" you need to do in order to do graphics the more modern way.

With Apple working to eliminate all traces of OpenGL with Metal, and Microsoft already having abandoned it close to two decades ago, I feel it's close to the end of the road for fixed function OpenGL. It was a wonderful part of graphics development history that, sadly, future beginners will likely not be able to experience.

One more instance of how the interests of vendors and the interests of developers are not aligned. Microsoft and Apple don't want you learning portable skills -- they want to limit your future prospects to developing only for their specific platform.

noob here - how come no one is making a cross platform API to abstract away this stuff? whenever i read about opengl or vulkan or metal or whatever w/ the tutorial going "learn this engine to bypass complexities of bare api usage", it's my first thought

> noob here - how come no one is making a cross platform API to abstract away this stuff?

... but there are hundreds of cross-platform APIs to abstract this stuff - unity3d, unreal engine, qt3d, etc...

Unity or Unreal aren't 3D rendering APIs, they're game engines

Vulkan is supposed to be the cross-platform API, but Apple isn't supporting it (and doing their own thing with Metal, as per usual). From what I've heard, Vulkan was originally "OpenGL 5" so while OpenGL continues to exist Vulkan is effectively its successor. There is MoltenVK, which allows Vulkan applications to run on top of Metal.

SFML (Simple and Fast Multimedia Library) and SDL are cross platform and hide most of the OpenGL boilerplate. I've used SFML, but just to make shaders and output some text.

I think you're correct that the old GL API made it quick to get stuff working, but the modern GL is much nicer once you get past the overhead of setting up all the VBOs etc. That only needs doing once, and then you're set.

I've recently been following along a vulkan tutorial to get started with that, in the odd evening over the last few weeks. I'm six chapters in, and I've yet to even draw a single triangle. That's still about five chapters away. While I can appreciate that the flexibility of the setup to remove much of the implicit state contained within the GL state machine is good, I can't help but wish for a wrapper to just make it work for a typical scenario, and let me render stuff with a minimum of fuss.

I'm unsure about where Metal will fit in the future. No matter how great it is, it's a vendor-specific proprietary API and I suspect that Vulkan will be the next cross-platform API which will wrap Metal or DX12 when a native Vulkan driver isn't available.

I hope it's not too off-topic, but I feel this sort of thing has happened before with developing GUI programs in Visual Basic, or drawing to the screen with turtle graphics (or whatever drawing routines BASIC tended to have). Curious if others share the sentiment with other examples too.

Oh yeah, totally. Maybe I'm looking at it with rose-colored glasses because I was younger, but the way I remember it personal computing used to be about enabling users. At some point there was this huge attitude shift towards being condescending toward users and treating them like cattle. So now, instead of trying to bridge the gap between computer "user" and computer "programmer", we forcibly drive a giant wedge between them.

The underlying issue is that many novice users don't see the problem with being condescended to, even when this severely inconveniences the more "developer-like" power users we used to enable. That "wedge" is just what the Eternal September of personal computing looks like.

I wrote a small booklet on modern OpenGL for the class I'm TA'ing:


Something as simple as the comment "//based on https://wiki.libsdl.org/SDL_CreateWindow" can save your academic career from ruin.

The students have to include citations to the docs if they went there for help on a certain line of code. I wonder if that is normal these days.

I went to university ~'94, and we were certainly told in no uncertain terms that we were expected to cite anything we copied, code not an exception.

I don't think it applies to simply calling SDL_CreateWindow, if that's what you mean by "on a certain line of code" -- rather to cases where you, for example, implement a window creation function by copying the contents of SDL_CreateWindow and adapting it.

EDIT: Citing everything is extra important when working on projects for courses, because even when it's overzealous in terms of what is legally required, the point of doing the work is to show that you understand the concepts properly, and then it's important for whoever goes through your code to be able to tell which bits you actually worked on.

I don't see that text in the parent comment or in any of the three linked pages. What are you referring to?

It's on http://graphics.cs.cmu.edu/courses/15-466-f18/ linked in the response further down, in the "Don't steal" section.

That's actually a really smart idea, nice way to learn some of these concepts. Cheers for the link.

Thanks for bringing this to my attention! I've been very interested in the work the "Geometry Collective" is putting out there (discrete exterior calculus and related applications). Nice to see a course covering these fundamentals too. Coming from a different side of the computational world, I might just build a subdivision program from these bones to get thoroughly immersed.

I think the author of this guide doesn't remember the mindset that a beginner is in when they first start learning graphics programming (or really any subject).

Why is someone reading this tutorial in the first place? It's likely because they have some end goal in mind, and are looking to get from 0 to that goal as quickly as possible. For most readers, this is probably making a game. Following the advice of this guide will give the reader a deep understanding of how 3d graphics works, but it won't bring them much closer to making a game than they were when they started.

Why am I bothering to point this out? Because countless guides like this exist that confuse neophytes into believing that the subject they wish to learn is difficult and requires learning countless prerequisites before they can get started tangibly moving towards their goal. It's demotivating. A lot of the knowledge in this guide is good to know but ultimately not immediately relevant to someone seeking to achieve some higher level graphics goal. It feels like the equivalent of trying to teach someone programming by starting them at assembly.

I personally think that this guide should be directed towards someone who has had experience noodling around with OpenGL or a game engine and wants to reach the next level. In order for any of the lessons this guide wishes to teach to be impactful, the reader needs the necessary context to understand why learning them is relevant to 3d graphics and by extension the reader's end goal. Starting off by making a ray tracer is a cool idea for example, but does the author honestly believe that it's more useful than just learning OpenGL first? It might be more "confusing", but they're going to have to learn OpenGL (or some game engine) at some point whether they want to or not if their goal is to make a game.

It depends on what you mean by “graphics programming”. If that means “implement a game on top of a tall stack of abstractions” then sure, this isn’t the answer you’re looking for, but I’d call that “game programming”.

If “graphics programming” is implementing the rendering engine for the game (recall his final goal is “draw a triangle”), I believe his point is that it’s easier to learn the required APIs for that if you understand the concepts behind them first (rather than just leaping into meaningless copypasta), and the best way to learn those concepts is to work with them directly.

Thanks for writing this. I've written some similar posts before but felt like something was off and didn't end up publishing them, and I think this might be it: it should be more goal oriented than a complete and correct piece.

Start with WebGL, Shadertoy.com, and jsfiddle.net.

No other tooling needed until you've got the basics of vertex buffers and shaders down.

(20 year graphics veteran here who's bootstrapped some coworkers)

One other bit of advice - a lot of the tutorials out there start with matrix math and eventually get around to drawing a triangle.

I recommend reversing that - ignore the math and just get a triangle on screen using a trivial shader (takes a few dozen lines of code in WebGL), then learn about how shaders and matrices work by applying them to your triangle.
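To make the second step concrete: once the triangle is up, a transform is just a little matrix applied per vertex, and you can prototype that outside any API. A sketch in plain Python (no GL involved, just the math a vertex shader would eventually do):

```python
import math

def rotate2d(vertices, angle):
    """Rotate 2D vertices about the origin by `angle` radians.
    This is the same transform a vertex shader would apply,
    just done on the CPU for illustration."""
    c, s = math.cos(angle), math.sin(angle)
    # 2x2 rotation matrix applied to each (x, y) vertex
    return [(c * x - s * y, s * x + c * y) for x, y in vertices]

# The classic first triangle, rotated 90 degrees counter-clockwise
triangle = [(0.0, 0.5), (-0.5, -0.5), (0.5, -0.5)]
rotated = rotate2d(triangle, math.pi / 2)
```

Feed the rotated vertices back into your vertex buffer each frame and you've got a spinning triangle -- at which point moving the multiply into the shader is a small, well-motivated step.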

Absolutely this. I did it the other way because I didn't know better, and it was remarkably boring for the first several days of trying to decipher tutorials. It would've been much more fun if I had just thrown a triangle on the screen and started playing around.

What about 2D graphics? Sprites? Old school stuff? I keep meaning to fire up DOSBox-X and go through some old tutorials. I really just want to put pixels on the screen and work my way up from there - everything else just seems to get in the way.

Personally I'd recommend grabbing an old copy of qbasic with dosbox and going to town on some old tutorials here http://www.petesqbsite.com/sections/tutorials/graphics.shtml

I know some would say it's a bad language, but back in the day it was incredibly accessible and spawned a lot of shared code and tutorials. If you focus on the techniques and algorithms, treat the language as pseudocode rather than "the right way to do things", you can have a lot of fun and learn quickly things you can translate to whatever your language of choice is.

I think most of those will work on QB64, too. So you could end up with fast, native executable files on Windows, MacOS, and Linux. That doesn't matter much for those tutorials, but QB64 might be ever so slightly quicker to get up and running than DosBox + QBasic.

Great point! I had forgotten about QB64. Heck, I may give it a go myself. This thread gave me an itch for software rendering again.

Javascript + HTML5 Canvas is an easy start, and if you want to do even older-skool stuff you can treat the canvas as just a block of pixels and write your own 2D renderer on top of it.
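As a sketch of what treating the canvas as a block of pixels looks like, here's Bresenham's line algorithm writing into a plain 2D array (Python for brevity; the canvas version is the same logic against ImageData):

```python
def draw_line(buf, x0, y0, x1, y1, color=1):
    """Bresenham's line algorithm: set pixels in a row-major
    2D buffer along the line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        buf[y0][x0] = color  # the "putpixel"
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

# A tiny 8x8 "canvas"
framebuffer = [[0] * 8 for _ in range(8)]
draw_line(framebuffer, 0, 0, 7, 7)  # diagonal line
```

Everything else in a software 2D renderer (circles, fills, sprites) is built out of the same putpixel primitive.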

Go with Processing, the API is very simple, and YouTube has plenty of videos demoing various projects:


Processing is amazing (and what I switched to after I started hitting Flash performance limitations). They have great documentation and tutorials on their website too, and I felt its little IDE made it really easy for beginners to get started with it.

> I really just want to put pixels on the screen and work my way up from there

That really isn't how modern, accelerated graphics works - even 2D accelerated graphics! "Sprites" are slightly closer to the mark, but it goes far beyond that.

The PC never had sprites

The entire framebuffer can be thought of as a huge sprite. Modern RAMDACs can scan multiple overlapping arbitrarily sized framebuffers (layers) simultaneously, while doing bilinear filtering, merging alpha, colorspace conversion, rotation, LUT, etc.

All that happens on the fly without intermediary frame buffer composition in between. The output goes directly to the display (over HDMI, eDP/displayport, DVI, etc.).

Yeah, and of course the mouse cursor is a sprite in the traditional sense. Although on modern HW you could implement the mouse cursor as a hardware layer.

Isn't the mouse pointer actually a hardware layer in most OSes nowadays?

Yeah, it is. Sprite originally meant a hardware layer, back from the ancient times. Nowadays it also sometimes means rasterized 2D objects.

C64, Amiga, MSX, NES, SNES, Sega Master System, etc. all supported hardware layer sprites back in the eighties.

What's the difference? Size? If a hardware layer could be configured to be 32x32 pixels and offset to any position, wouldn't it become a sprite?

If you're using a decent graphics driver, the mouse pointer is a sprite.

You can do it with SDL or even just Canvas and JS. Later on you'll converge onto "3d as 2d" as you try to optimize and leverage shaders for high performance effects like color grading, blur, glow, etc.

I've been dabbling with 2D graphics recently and found PyGame to be well suited for experimentation. You'll need to hold your nose a bit (API is a bit clunky in places and far from Pythonic) but it's relatively trivial to get some pixels on screen.

Start with SDL.

Agreed. With SDL you can get a loop that copies a 2D array of pixels from main mem to the screen every frame about as fast as possible in a single page of code. If you want to explore software rendering, that's the way to go.

SDL also gives you access to OpenGL, so you can do more with it than just blit textures.

in my gameswithgo.org video series, when we start on graphics, that is how I do it. we implement putpixel, and do some basic 2d rendering by hand. bilinear filtering, alpha blending, etc.
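Of those, alpha blending is a nice first one to do by hand -- the standard "over" operator is one multiply-add per channel. A minimal sketch, assuming float channels in [0, 1]:

```python
def blend_over(src, dst, alpha):
    """Standard 'over' alpha blending for one channel:
    result = src * alpha + dst * (1 - alpha)."""
    return src * alpha + dst * (1.0 - alpha)

def blend_pixel(src_rgb, dst_rgb, alpha):
    """Blend an RGB source pixel over a destination pixel."""
    return tuple(blend_over(s, d, alpha) for s, d in zip(src_rgb, dst_rgb))

# 50% white over black gives mid-gray
result = blend_pixel((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.5)
```

(With 8-bit channels you'd divide by 255 and worry about premultiplied alpha, but the idea is the same.)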

IMO Shadertoy is not a good place to start. The techniques shown on shadertoy are fun to play with but they have almost nothing to do with the techniques used to make shipping products. Drawing an entire forest in a single shader is amazing but it runs at 0.2fps on a machine that can run GTA5 at 30-60fps

I would agree - Shadertoy is really cool, but it fits into the category of "they did it because they could, not because they should." There are techniques used in Shadertoys that are relevant elsewhere, though the learning curve is really steep in terms of comprehending what most Shadertoys are doing.

And for the love of deity, don't start with Vulkan/Metal/DX12.

That really depends. If you know graphics concepts, but need to actually learn an API, there is nothing wrong with starting with Vulkan. It's a lot to figure out to make something practical with it, but it's expected.

I'd say, a good way to do it, is first learn theory, and apply it using simple tools that let you just draw on screen (like SDL). Then learn a GPU targeted API.

I second this heavily. I've implemented a 3d scene renderer for 3D CAM software in the past and have been wanting to get back into some 3D things to practice math as I've been feeling a bit behind where I used to be.

I spent a while getting an OpenGL environment set up in Visual Studio and got frustrated with the tooling, as I've also been spending much of the last 3 years doing trivial web development. I ran across Shadertoy and was able to whip up much of the core of the path tracer there, then very easily move it over to its own project utilising REGL (https://github.com/regl-project/regl) in ~100 lines that included a camera, dynamic shader compilation based on a scene, etc...

It depends on your motivation but if you really want to learn about graphics programming (as opposed to learning an API) then I think the best way to do so is to remove any API from the picture. There is an absolutely phenomenal book that does just this: Andre LaMothe's 'Tricks of the 3D Game Programming Gurus'. It was published in 2003 but when you remove APIs from the picture it's just as relevant today and will remain so for the foreseeable future.

The book starts with little more than plotting a pixel on the screen. By the end you'll have a complete lit, shaded, 3D game engine that you've written entirely from scratch. And in the process you will learn absolutely everything that goes into it.

And one thing I'd add is that this might sound somewhat overwhelming but it's really not. The book is extremely well written and clearly was a labor of love. If you get some pleasure out of math and code, you'll have no technical troubles working your way through the book and in the end will be rewarded with an intimate, flexible understanding of graphics development that won't be hamstrung by dependence on a specific API.

[1] - https://www.amazon.com/Tricks-Programming-Gurus-Advanced-Gra...

I will highly recommend Ray Tracing in One Weekend http://www.realtimerendering.com/raytracing/Ray%20Tracing%20...
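The heart of that book is the ray-sphere intersection test, which is small enough to sketch here (not Shirley's code, just the standard quadratic-formula approach):

```python
import math

def hit_sphere(center, radius, origin, direction):
    """Return the nearest ray parameter t where the ray
    origin + t*direction hits the sphere, or None on a miss.
    Solves |origin + t*d - center|^2 = radius^2 for t."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere
    return (-b - math.sqrt(disc)) / (2.0 * a)

# Ray from the origin looking down +z at a unit sphere centered at z=5:
# it hits the near surface at t = 4
t = hit_sphere((0, 0, 5), 1.0, (0, 0, 0), (0, 0, 1))
```

Fire one of these per pixel, shade by the surface normal at the hit point, and you have the book's first images.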

+1 The author released all the "ray tracing in a weekend" pdfs last year through a "pay what you want" model: https://twitter.com/Peter_shirley/status/984947257035243520

They're all in a Google Drive (hosted by Shirley). It was previously shared on HN.


I've been learning (mostly 2D) graphics as my side project for a couple years now. I spend a ton of time on this project. I still feel like a beginner. It is one of the more complex topics I've ever delved into.

I've crawled through more topics than I can list here. Many of them were not easy to learn, due to disorganized learning resources and materials.

Documentation and information is scattered, frequently outdated, sometimes sparse, and just downright messy. OpenGL and its many quirks and legacy versions muddy the waters too. I've considered trying to pull this into one big guide, hopefully resulting in something like the Filament docs [1].

Side note: macOS (my dev env) is a terrible place to write OpenGL, I've learned. Debugging tools are practically non-existent.

[1] https://google.github.io/filament/Filament.md.html

The most incredible educational experience of my life was this course I did at Columbia University while in high school on game programming. At the end of the course we made our own game with OpenGL, but for the first week or so we'd have basically math classes in the morning on the physics of lighting, and then in the afternoon we'd implement the things we did in C. It was where I was introduced to lots of math concepts, including vector math that I would not see in school for another year or so.

At any rate, at the end we built our own raytracer, and saw a text file with vertices that was given to us produce an image on the screen from 100% code that we'd written ourselves. It was one of the most empowering, deep intellectual experiences I've ever had.

To echo this sort of experience, I recently bought the pragprog book The Raytracer Challenge, and it was similarly satisfying. I wasn’t really out to follow OP’s advice to “get into” graphics, but I saw the book release and wanted to give it a shot. To be able to see a 3D-rendered sphere appear on screen using code that I wrote from the very bottom up is deeply, deeply satisfying (and a great way to level up with the programming language of your choice).

Out of interest, does learning linear algebra in this way give you a deeper sense of linear algebra? Do you think you are able to formulate other problems more effectively in terms of LA as a result?

Absolutely. Like, for the average school kid, learning about vector multiplication or dot products and things of this kind is so abstract and seems kinda pointless. But the way he taught it, we had a problem in terms of making things look real, and the math was motivated as a strategy for solving those issues. Because it was grounded in a real-world problem, it was great for developing intuition about the results.
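For anyone curious what that looks like in practice: Lambertian (diffuse) lighting is literally just a clamped dot product between the surface normal and the light direction. A sketch (assumes both are unit vectors):

```python
def lambert(normal, light_dir):
    """Diffuse brightness: dot(N, L), clamped so surfaces
    facing away from the light get zero, not negative light."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# Surface facing straight up, light from directly above: full brightness
facing = lambert((0, 1, 0), (0, 1, 0))
# Light at a grazing 90 degrees: no diffuse contribution
grazing = lambert((0, 1, 0), (1, 0, 0))
```

Once you've seen the dot product make a sphere look round, it stops being abstract.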

I would say no. It's just affine transformations on a vector. Deeper knowledge requires actual study of LA.

I enjoyed going through The Book of Shaders[1] by Patricio Gonzalez Vivo. It comes with an interactive programming environment that's great if you're just getting started.

It's free but please consider donating if you find it useful.

[1] https://thebookofshaders.com/

3Blue1Brown has a great series on Linear Algebra. His explanations are so clear that by the 2nd or 3rd video you'll already understand how it applies to computer graphics.


I agree. I've had quite a bit of exposure to linear algebra through other textbooks and online courses, and the 3Blue1Brown series explains the intuition better than just about anyone. In particular, the video on determinants really crystallized the concept for me.
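One way to phrase the intuition from that video: the determinant is the factor by which a linear map scales area. A quick numeric check (my own illustration, not from the series):

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = a*d - b*c."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Scaling x by 3 and y by 2 turns the unit square into a 3x2
# rectangle, so area is scaled by 6 -- exactly the determinant.
scale = [[3.0, 0.0], [0.0, 2.0]]
area_factor = det2(scale)

# A shear slides the square into a parallelogram of the same
# area, so its determinant stays 1.
shear = [[1.0, 1.0], [0.0, 1.0]]
shear_factor = det2(shear)
```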

I picked up graphics maybe three years ago and have mostly learnt all the stuff I needed through different sites scattered all over the internet.

For ray tracing, Peter Shirley's books are very good.

OpenGL stuff is usually found at docs.gl and https://learnopengl.com/About.

For Physically Based Rendering (PBR) material, check out the book Real-Time Rendering and all the cool stuff that has been coming out of Siggraph these past years, https://blog.selfshadow.com/publications/s2013-shading-cours....

If you can, go take an elementary linear algebra course, or prepare to spend time studying it on your own. It is well worth it.

One thing I have found when hacking on my own engine (see here: https://github.com/Entalpi/MeineKraft) is that graphics APIs (especially OpenGL) can have a lot of hidden meanings or even old names still being used, etc. Try to find out how the API evolved and get the reasoning behind the design decisions. This way it makes more sense and it gets a bit easier to learn.


Shameless self-promotion:

On my YouTube channel, I have over 100 WebGL Tutorial videos, and over 50 3d math related videos.


The single most helpful thing I discovered for learning android opengl es graphics programming is that it does have error messages, after all! [abbrev code, use to lookup docs]

    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        msg += android.opengl.GLU.gluErrorString(error);
    }
Also, the official android opengl es 2.0 tutorial is wrong in subtle ways (after the non-subtle wrongnesses were corrected).

TIL "advice" can be countable: https://blog.oxforddictionaries.com/2014/05/30/advise-advice...

> Advice is mainly used with the first meaning, and in this meaning it is a mass noun (that is, it has no plural). The business/legal meaning, however, is a count noun: it has a plural form, advices.

When getting started with a new OpenGL project, the first thing you should do is to enable the debug output [0] w/ glDebugMessageCallback. You also need to enable the debug bit when creating a GL context.

This will give you human readable error messages (as in complete sentences of what went wrong) as well as some other info to pinpoint what went wrong and where.

Additionally, it's a good idea to use Renderdoc [1] or your $GPU_VENDOR's debugging and profiling tools.

[0] https://www.khronos.org/opengl/wiki/Debug_Output [1] https://renderdoc.org/

Good to know about! For OpenGL ES, it looks like it wasn't implemented until version 3.2, the most recent (I don't have a device with that version).


If anyone wants to play around with Computational Geometry and Python, I'd recommend as a great starting point: http://blancosilva.github.io/post/2014/10/28/Computational-G...

Also, Apress has a cool book about Python and graphics with some pretty intense source code found here: https://github.com/Apress/python-graphics

quick note on the first link...I think graphs are done using tikz: http://blancosilva.github.io/post/2010/12/12/using-tikz-as-a...

Here is my work in progress for intro OpenGL programming


It's in Python and doesn't involve matrices. It really only requires high school math, with the exception of the z axis.

Hope it’s helpful!

Jamis Buck has a new-ish book out called "The Ray Tracer Challenge" that I've been slowly working through.

So far I really like the way it's written: everything is test-based, so you get a problem like: "given matrix X and vector Y, X * Y should be vector Z" and then you implement both the test and the code in whatever language you like.
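To give a flavor of how those specs translate: a matrix-times-vector test might come out like this in Python (hypothetical names, not the book's code):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# The spec's shape: given matrix X and vector Y, X * Y should be Z
X = [[1, 2, 3, 4],
     [2, 4, 4, 2],
     [8, 6, 4, 1],
     [0, 0, 0, 1]]
Y = [1, 2, 3, 1]
Z = mat_vec(X, Y)
```

Because each spec is tiny and self-checking like this, you can work through the book in whatever language you're trying to learn.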


There are so many comments here saying "why? just use X API" or "this is a waste of time when you will need X API anyway" etc., but they miss the point. It's about 'how' graphics work, not building game engines.

Computer graphics is about producing an image on a computer screen, and whether it's for image processing or a game engine renderer, it's the same process -- and very useful to know whichever field you want to pursue.

I would love to see a similar "jumping off point" for audio programming. Anybody know of one?

This is my own limited experience, so I hope an expert also answers. In the Handmade Hero video series/project, he codes a game engine from scratch. I didn't get very far into it, but I followed along coding it myself, using what I learned watching him. He writes it in a way that all the platform-specific code is in its own file, and he does it in DirectX, although other people have ported it to other platforms.

Early in the series, he gets it to where you can create a sound buffer of samples and play it out to the speakers. So I played around with simply adding sine waves to generate things such as dial tones, busy signals, and DTMF dialing sounds. I would consider that "ground up" for the Windows platform. It should be pretty simple from there to load audio samples from files, adjust their volume, play them back faster/slower, and mix them together. And/or generate more complicated synthesized music. There is a book about building modular synthesizers, and I was thinking about using that but doing it in software.
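For what it's worth, those telephony sounds are just pairs of sine waves summed -- e.g. DTMF digit '1' is 697 Hz + 1209 Hz. A sketch of generating the raw samples (assuming float samples in [-1, 1]):

```python
import math

def tone(freqs, duration, sample_rate=44100, amplitude=0.5):
    """Sum equal-amplitude sine waves at the given frequencies,
    returning a list of float samples."""
    n = int(duration * sample_rate)
    return [
        amplitude * sum(math.sin(2 * math.pi * f * t / sample_rate)
                        for f in freqs) / len(freqs)
        for t in range(n)
    ]

# DTMF '1' = 697 Hz + 1209 Hz; North American dial tone = 350 + 440 Hz
dtmf_1 = tone([697.0, 1209.0], duration=0.2)
dial = tone([350.0, 440.0], duration=0.2)
```

Push either list into a sound buffer (converting to 16-bit ints if the platform wants them) and you'll hear the familiar tones.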

The best motivator for me was to start building a Standard MIDI Format sequencer and test its output on MIDI files. The underlying MIDI standard is a well-understood protocol with many resources, while SMF adds some room for creative interpretation since it's meant to work across sample-based synths, FM, etc. I started with a simple square-wave beeper, added polyphony and volume envelopes, and built it up from there. By the end, the architecture I had initially built was melting down, but it was playing Soundfonts to some degree, and I could confidently claim to understand audio coding.

There's a lot of room to do things incorrectly and not notice in audio, which makes it forgiving to learn: e.g. playing a sample at a different pitch can be done badly by adding or removing arbitrary samples, but resampling it with minimal distortion is actually a fairly intensive DSP problem. Or when computing a volume, the linear amplitude usually isn't the parameter you want to work with (see every UI where volume changes imperceptibly until you reach the bottom 10% of the slider), and while there are computations to convert between linear and dB values, you still have to qualify them with one of the various references and suffixes [0]. There's a lot of crossover between audio and electrical signal terminology that can make it hard to follow this material.

So you can spend a lot of time doing simple things well!

[0] https://en.wikipedia.org/wiki/Decibel#Voltage
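The conversion itself is short; the references and suffixes are where the subtlety lives. For amplitude (voltage-like) quantities it's 20*log10 -- a sketch:

```python
import math

def amp_to_db(amplitude, reference=1.0):
    """Convert a linear amplitude ratio to decibels.
    Amplitude (voltage-like) quantities use 20*log10;
    power quantities would use 10*log10 instead."""
    return 20.0 * math.log10(amplitude / reference)

def db_to_amp(db, reference=1.0):
    """Inverse: decibels back to a linear amplitude."""
    return reference * 10.0 ** (db / 20.0)

# Halving the amplitude is about -6 dB -- a far more perceptually
# even step than halving a linear volume slider.
half = amp_to_db(0.5)
```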

Very cool! Thank you! That gives me a lot more ideas.

Not as an online course or document, but I can highly recommend the following book:


It's fairly heavy going, but very comprehensive.

The best way to learn computer graphics programming is to first ask what you want to do with it. Most people interested in this topic have some end goal, even if it's just writing a game. In that case, writing raytracers and rasterizers may just be a waste of time, when the main interest isn't the inner workings of the math, but putting something up on the screen. For a reasonably wide range of modern tools, it's absolutely not necessary to know any deep math to bring to the screen what you want.

Scratchapixel[1] is also a helpful resource for explaining the basic concepts.

[1] http://www.scratchapixel.com/

I'll plug the materials for the one-weekend class I taught on computer graphics: https://avik-das.github.io/build-your-own-raytracer/

I didn't assume a particular mathematical background for the students, so I introduced the relevant math in what I hope was a motivated way. Hopefully it can help others as well.

My very first raytracer was http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra... and was quite a bit easier than I expected. My second one was a bit under 1K in Clojure: http://canonical.org/~kragen/sw/dev3/raytracer1k.clj or unobfuscated: http://canonical.org/~kragen/sw/dev3/circle.clj

I was pleasantly surprised by how easy it was to compute the pixels, but getting them on the screen was a huge hassle. So I wrote https://gitlab.com/kragen/bubbleos/tree/master/yeso to make it trivial. I haven't ported a raytracer to it yet; I'll do that soon!

I think this article, and the comments here, are a splendid example of how deep (and often confusing) 3D graphics can be.

One of the things I appreciated about the Glide3D API for the old VooDoo cards back in the day was how basic it was relative to the work of getting the 3D stuff on the screen.

There are several ways in which you can "learn" 3D graphics: you can get it by theory, you can get it by API, and you can get it by modelling.

If you're someone who is a theory person, you should start at the beginning and learn linear algebra and vector arithmetic. You can learn about projections and camera "views", and project your 3D scene onto a 2D surface, whether it is a computer screen or a plotter page. Once you get there you can do hidden line, and then hidden surface, removal. You can learn about the 'Z' plane and the clipping frustum, which lets you not worry about things that won't appear in your projection. With a relatively simple array-of-coordinates data structure you can project triangles or NURBS onto your surface and draw pretty pictures. Then you can start learning about light and all the many ways it interacts with materials. At the end of this journey you can take a 3D model and render it in a photorealistic way, one frame at a time.
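As a small taste of the theory path, here is a hedged sketch of projecting a camera-space point into pixel coordinates with a pinhole perspective model. The function name and parameter defaults are mine, chosen for illustration; it follows the common convention that the camera looks down -z:

```python
import math

def project_point(p, fov_deg=60.0, aspect=16/9, width=1920, height=1080):
    """Project a camera-space point (x, y, z with z < 0 in front of
    the camera) onto pixel coordinates using a pinhole model."""
    x, y, z = p
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal length from vertical FOV
    # Perspective divide: farther points land closer to the center.
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    # Map normalized device coordinates [-1, 1] to pixels.
    px = (ndc_x + 1.0) * 0.5 * width
    py = (1.0 - ndc_y) * 0.5 * height  # flip y: screen origin is top-left
    return px, py

# A point straight ahead of the camera projects to the screen center.
print(project_point((0.0, 0.0, -5.0)))  # (960.0, 540.0)
```

Real pipelines fold this into a 4x4 projection matrix plus a hardware perspective divide, but the arithmetic is the same.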

The second way to learn 3D programming is the API path. Here you don't really learn much of the math, although having that background will help you understand what the functions are doing. You learn an API like Unity or Vulkan. These APIs have done a lot of the basic 3D programming for you; you learn how to write shaders, which decorate your surfaces in ways that simulate light or water or fog, etc. Generally you construct a scene with calls to set up the frame, and then call 'render' with the appropriate screen resource referenced. Blam! Instant 3D scene.

The third way you might come at this is modelling: building models in Maya or some other modelling tool, using skeletons, which are simply data structures that identify how different parts of the model can move in relation to the others. You probably don't have to know anything about 3D math, but you do need a good eye for constructing the models and the space in which they are placed.

There are so many ways to go at this particular quest; each has its own challenges and rewards.

Thanks for the extra links. I'm building a base software rendering engine in Go just for this purpose. It still needs work to reach a good 'base', but it can be used as-is to start learning CG from your notes and links: https://github.com/MickDuprez/go-window. Cheers.

I invite everyone to try all of these techniques and math tools in ShaderGif [42]. Hopefully you'll learn and make nice gifs in the process. Learning new math concepts is even better when you create art and share it after!

[42] https://shadergif.com/

I've always been a fan of the NeHe GL tutorials. I learned programming starting with these 'back in the day' and really enjoyed it. It's been modernized now and is still maintained.


I don't think the hindrance when starting to learn graphics programming is the math or a lack of ideas of what to build. The biggest problem is that the best-known APIs (DirectX, OpenGL, and especially DirectX 12 and Vulkan) are so badly designed and so poorly documented that they deter anyone trying to get started. These APIs are like magic invocations: you have to make the calls in just the right order or nothing works at all. There is no real specification of what goes on underneath, and there can't be, because the actual implementations are written by different vendors (Nvidia or AMD) which only loosely abide by the specification.

This is the old-line "bottom up" approach.

It might be more appropriate today to work top down. Install Unreal Engine, build a simple demo, and look at the code it generated and you can modify.

Zero graphics experience, so I may be asking an idiotic question, but why learn anything other than a framework like UE?

I'm a web guy, so all I know is frameworks (Angular, React, etc.).

You'll ultimately become a better dev (e.g. higher quality code, faster, and more maintainable) as you learn more of the underlying stuff. For web dev definitely learn how to do everything in "vanilla" js, html, and css at least a few times.

Some people think better at a lower level and some think better with abstractions. I for one feel more 'at home' at the lower level but can live with higher level abstractions if I know how it's done at a lower level :)

Someone needed to build the framework. Depends what your goal is.

fwiw, pbrt can be read online for free since earlier last year: http://www.pbr-book.org/

If anyone would like to see ray tracer code, have a look at POV-Ray: http://povray.org. It's a venerable old ray tracer, but it has some of the best features around and the code is very readable (once you've got the basics). It's a different way to do things than the vertex/shader style rendering pipeline.

For anybody looking for a minimal cross-platform modern OpenGL hello world with a lot of comments: https://github.com/eliemichel/AugenLight

By modern I mean as in OpenGL 4.5 Core Profile, using future-oriented APIs (e.g. named buffers). Any feedback welcome!

I'm always on the lookout for tutorials that go deeper than just implementing single one off things. Been meaning to go through Handmade Hero.


I'm currently enrolled in an online Computer Graphics course on edX that has been an excellent introduction to the subject: https://www.edx.org/course/computer-graphics

I've found SFML an easy way to get some stuff on the screen in 2D.

As far as true 3D graphics, I made a Minecraft clone using OpenGL... it was a nice project as, surprisingly, you end up using all the basic techniques as you tweak it, particularly when you try to get lighting and shadows working.

I'd definitely second the ray tracer recommendation, starting from a 2D canvas and building a ray tracer to render a shaded sphere was really an "ah-ha" moment, as the math is quite approachable and you can get a cool result with not much more than some trigonometry.
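For anyone curious what that "ah-ha" shaded sphere looks like in code, here is a minimal sketch of a ray-sphere intersection plus Lambert diffuse shading. All names are mine, and it assumes the ray direction is already unit length:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive hit distance t along the ray, or None.
    Solves |o + t*d - c|^2 = r^2, a quadratic in t (d unit length)."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

# Shade the hit point with simple Lambert (N . L) diffuse lighting.
origin, direction = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
center, radius = (0.0, 0.0, -5.0), 1.0
light_dir = (0.0, 0.0, 1.0)  # light shining back toward the camera
t = intersect_sphere(origin, direction, center, radius)
hit = [origin[i] + t * direction[i] for i in range(3)]
normal = [(hit[i] - center[i]) / radius for i in range(3)]
brightness = max(0.0, dot(normal, light_dir))
print(t, brightness)  # 4.0 1.0
```

Loop that over one ray per pixel and you have the shaded-sphere picture; everything past that (shadows, reflection, refraction) is more rays.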

This sounds good for those who want to learn graphics programming for gaming and / or for animation etc. But where should a beginner start from to learn GUI programming without using existing frameworks or widget toolkits?

The article says "How to start" but it's more like "What to start with". Anyway, I'd really like a reference to good computer graphics lecture videos, which I am failing to find.

I found myself having to remember how to do matrix calculations. Before I knew it, I was going further and further back until I was reviewing basic math on Khan Academy. Embarrassing, but that is where to start.
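The matrix review pays off quickly, though; the workhorse operation in graphics is just a matrix-vector multiply. A rough illustrative sketch (everything here is mine, for illustration), rotating a 2D point with a rotation matrix:

```python
import math

def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Rotate the point (1, 0) by 90 degrees counterclockwise.
theta = math.radians(90)
rot = [[math.cos(theta), -math.sin(theta)],
       [math.sin(theta),  math.cos(theta)]]
x, y = mat_vec(rot, [1.0, 0.0])
print(round(x, 6), round(y, 6))  # 0.0 1.0
```

The 4x4 model/view/projection matrices in a 3D pipeline work the same way, just with homogeneous coordinates tacked on.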

Is it worth it to get into this area as an experienced dev?

Does anyone know something similar but for data projects/science? Preferably something with practical data, not just the math.

I've heard good things about Andrew Ng's Coursera machine learning class, and it's free:


There is a free intro to data science class on Udacity as well:


Lastly, there's always Kaggle, which has plenty of resources to learn from, and competitions as well:


I can vouch for Andrew Ng's class. It's really top notch.

I'm in the same boat; I've been toying with Kaggle recently. They have some guided tutorials done via notebooks, with data, that seem to be heading towards playing with the data myself. I recommend it: it's interesting, has me playing with data, and is consumable, but I don't have enough experience to give an informed opinion on it.

ISLR is a good one—it’s focused on practical theory, but there’s still some unavoidable math. Code exercises are simple (and ugly—stats professor code) but gives you the basics to build on.

Book/code: http://www-bcf.usc.edu/~gareth/ISL/

Lectures: https://www.r-bloggers.com/in-depth-introduction-to-machine-...

Also: this was recommended on HN recently and it’s the best intro to matrix algebra / basic math of ML that I’ve seen. Lots of very practical and relevant exercises that relate to real data science tasks like text and image analysis. Code examples are in Julia.


And one more gold standard for more practical data science (with R) stuff. It introduces a more modern take on R (chaining functions together with the %>% pipe operator, which makes everything clean and terse).


We're aiming to do this at Dataquest (where I work). Essentially create a complete learning path from 0 to job ready.

Data Scientist path - https://www.dataquest.io/path/data-scientist

Some of the projects peppered throughout the track: https://www.dataquest.io/projects

EDIT: Forgot to mention, almost every lesson uses a real dataset!

https://open.gl is the best OpenGL tutorial IMO.

Well, I was waiting for a guide to starting graphics programming.

Raytracing and rasterization sound so cool :D

A smart idea, I like it
