Ask HN: What are the best books on modern computer graphics?
294 points by BigJono on June 28, 2017 | 76 comments
Googling around, I can find many examples of this question, but they are all dated before the move to Vulkan/DX12, which I've heard has represented a big paradigm shift.

Are the old recommendations still relevant? Or are there newer books which are more relevant to someone looking to get into the field today?




I would not advise anyone to try to get into computer graphics directly with the Vulkan or DX12 APIs. Start with DX11, modern OpenGL and/or WebGL, and work through your computer graphics theory (and a lot of practice) using those. Geometry, illumination, shaders, tools and GPU computation will take a lot of time to master. When you decide you want to go into low level APIs, if you are comfortable with Apple systems, Metal will likely be easier than Vulkan/DX12.

A lot of the theory from classic books is still valid, so even a copy of Computer Graphics: Principles and Practice will look fine on your desk. "Real Time Rendering" by Moller/Haines and "Physically Based Rendering" by Pharr are excellent. "Game Engine Architecture" by Gregory and "Mathematics for 3D Game Programming and Computer Graphics" by Lengyel will probably prove very useful and relevant to you as well.

For online resources, the free https://learnopengl.com/ and the $10 http://graphicscodex.com/ are both fantastic. Make sure to read https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-..., and also study the techniques used for the craziest entries in https://www.shadertoy.com/

And then, lots and lots of papers and presentations from the past 5 or 10 years of the Game Developers Conference and Siggraph.


This is an unpopular opinion, but nobody knows how to make anything look real. The quicker you accept and internalize this, the easier it is to dive into graphics programming. It reveals graphics programming for what it is: A bag of tricks that mostly look good in certain constrained circumstances.

When I first started out, I chased book after book looking for knowledge. "Someone must know how to make things look real, right? Everyone is talking so authoritatively on the topic." No, no one knows. There's no source of knowledge that represents the cutting edge of the field, because the cutting edge is whatever happens to look pretty good today. And that's mostly thanks to very good art, not very good techniques.

Just dive in and start doing geometric puzzles. Look at it like a game, not like a quest for knowledge. If you have fun with it you'll go further than any book will take you.


I love the motivation and message of this comment - to not worry and just dive in. But a few people most definitely do know how to make some very realistic graphics. So much so, that I guarantee you've seen some in movies that you didn't know was fake. I used to make CG movies, and I can't spot the best CG anymore.

The trend in film rendering is, across the board, graduating from "tricks" that work to physically based processes, to whatever degree is feasible. The setups are getting simpler, not more complex. The surface modeling, material modeling, color processing, lighting and rendering have all moved by leaps and bounds in the last 10 years. It is in the process of shifting from art to physics, and the job of CG lighting technicians is becoming closer to the job of stage lighting technicians, because the lighting and rendering is physically based now.

It's possible there are developments you don't know about. It's also possible you're suffering from Sturgeon's law and not seeing the top 10% clearly; there is plenty of crappy CG that doesn't look real -- the vast majority of it doesn't. But the best of it is realistic and getting better every single year, and the people studying it do know some ways (and are currently adding more) to improve the realism when the computational power arrives.


As I mentioned below, the only technique that produces effective results is if you mix actual, real footage with CG. It's true that people can't spot the CG in that situation, but that's different from the discussion of how to create fully-simulated realistic video.

I'll give you a simple way to defeat any question of whether we know how to make something look real: What does it mean to multiply two colors?

If you chase down the logic, the true answer is "It's meaningless. It just happens to be an approximation that looks good in most cases." But it has nothing to do with how light works in real life. Yet every engine multiplies colors because the alternative is too computationally expensive -- and it still wouldn't produce realistic results because we don't source our art from real life. Artists typically control the content, and any art-driven pipeline is doomed to look pretty good but not real.
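To make the "multiply two colors" point concrete, here's a minimal Python sketch (the numbers are made up for illustration, not from any particular engine) of what that operation actually is:

```python
# Per-channel "color multiplication" as most engines do it: the reflected
# color is the light color times the surface albedo, channel by channel.
# This is the cheap stand-in for full spectral absorption discussed above.

def shade_rgb(light_rgb, albedo_rgb):
    return tuple(l * a for l, a in zip(light_rgb, albedo_rgb))

# A warm white light on a red-ish surface (illustrative values):
print(shade_rgb((1.0, 0.9, 0.8), (0.8, 0.2, 0.1)))
```

Three multiplications per pixel is why every engine does it; whether the result means anything physically is the question under debate here.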


> the only technique that produces effective results is if you mix actual, real footage with CG

This isn't true.

> I'll give you a simple way to defeat any question of whether we know how to make something look real: What does it mean to multiply two colors? If you chase down the logic, the true answer is "It's meaningless."

It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on. If the result is 99.9% accurate physically, and 100% perceptually indistinguishable to humans, then it's a valid predictive physical model. It has everything to do with the result of what light does in real life, and if it didn't, we wouldn't be using it to approximate light.

Your thinking here seems to back up my suggestion that you might have missed out on some of the recent developments. Subsurface scattering, for example, is modeling absorption more physically, replacing the simple multiplication with a simulation process, because we now have the computational power to do so.

Your argument is attacking the non-simulation aspects of rendering without addressing whether a simulation that is simpler than real life is acceptable. If I can't tell the difference, does it count? Because I'm 100% certain that color multiplication is not the problem when it comes to CG not looking real.


It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on.

It's amusing that you brush it off with "Oh, there are a few corner cases." Those corner cases are why it doesn't look real.

And no, it's not 99.9% accurate. You may be thinking of constrained scenes, where e.g. you shine a laser on a substance of a specific color and then measure the resultant color combination. But the complexity of real life defies such analysis.

If you're going to say I've missed some recent work, you'll need to cite sources. Then we can debate those.

EDIT: To clarify:

Your argument is attacking the non-simulation aspects of rendering without addressing whether a simulation that is simpler than real life is acceptable. If I can't tell the difference, does it count?

My argument is that if you get a bunch of people together, show them simulated video and real video, and ask "Which of these are simulated?" they will correctly identify the simulated video as "not real" with significant accuracy -- given modern techniques, probably >95% accuracy. The simulation needs to be of a non-trivial scene, like a waterfall or a valley. When you show real video side-by-side with simulated techniques, there's no contest.

If we truly knew how to make simulated video that looks real, without mixing any real-life footage, then the observers in the above scenario wouldn't be able to do any better than random chance. But they can, because we can't.


I did cite a source: subsurface scattering is an example of simulating light absorption. It is an example of things that you just claimed aren't getting better actually getting better.

You're attacking my arguments and getting more hyperbolic without any examples. What specifically doesn't look real? What are you actually claiming? What is your criteria for whether something is "real"? What corner cases are you thinking of that cause color multiplication to break down so frequently that it's a bad approximation most of the time? Can you give some examples of state of the art CG that intended to improve realism but doesn't look real?

I'm not claiming that everything looks real, nor that all CG is realistic. I'm claiming that CG is getting better over time, and that some things are already indistinguishable from real. The number of CG things that look realistic is going up over time, and it used to be 0. That trend contradicts your original thesis that nobody can render anything realistically.

My 99.9% number wasn't a claim, it was a made up number (which I thought was obvious, sorry). I said "If the result is 99.9% accurate... then it's a valid model" to back a point: the point is that if multiplication is predictive then it's a valid model. That's how all of physics works. Acceleration under gravity is an approximation.

You haven't demonstrated that multiplying doesn't work, you've only stated an opinion. I'd like to see some examples of what you mean, because it appears to work very well from where I'm sitting. The colors of grass, bricks, wood -- diffuse materials -- are very closely approximated by multiplication, enough that we can in fact measure how good the approximation is, and humans cannot tell the difference. Therein lies the problem with your argument -- if I can't tell the difference, that is my definition of realistic. It doesn't matter what happened under the hood. You seem to be claiming that only reality is good enough to be realistic, because anything else is cutting corners.

I'm not sure I understand what you mean about real life defying BRDF measurement. One of the ways that CG is getting more realistic is precisely through various gonioreflectometers, some of which shine lasers and measure the output from all angles. Material catalogs are currently being constructed and sold to CG companies using higher and higher resolution measurements of exactly what you're claiming isn't possible and doesn't help. People buy them because they improve realism.

> My argument is that if you get a bunch of people together, show them simulated video and real video, and ask "Which of these are simulated?" they will correctly identify the simulated video as "not real" with significant accuracy -- given modern techniques, probably >95% accuracy.

Every year Autodesk runs the test "Fake or Foto". http://area.autodesk.com/fakeorfoto Less than 10% of people are getting them all right this year, and a considerable number of people are under the 50% line. This isn't scientific, of course, but see if you can score 100%. This is an indicator that CG is pretty good. Will you admit it if you don't score 100%?

Earlier you made an argument that stills are looking okay, but moving things aren't. The problem with that argument vs color multiplication is that color multiplication is used on stills, so if that's what's breaking down, stills should be obviously unrealistic.


It's very difficult to figure out what the core of your argument is. We can't create simulated video indistinguishable from real life, and I've given you an experiment that will prove that we can't.

You haven't demonstrated that multiplying doesn't work, you've only stated an opinion. I'd like to see some examples of what you mean, because it appears to work very well from where I'm sitting.

It's not an opinion that multiplying colors has nothing to do with how light behaves in real life. I even said that it was an approximation that works fairly well, so if you're going to simply ignore the things that I did say, this discussion isn't going anywhere productive. The point is that it's an approximation, and it's partly why we subconsciously recognize simulated video as fake.

Every year Autodesk runs the test "Fake or Foto"

Obviously, photos don't work as a realism test. This conversation is about video -- the human visual system processes video completely differently. It's not just a matter of taking still frames and stringing them together. The test is invalid. If you use video (of non-trivial length, with non-trivial scene complexity -- any nature video will do fine), you'll see the participants' accuracy skyrocket to nearly 100% correctly identifying simulated video.

If you're truly curious about the reasons why a simulated video looks fake, look into some books about the neuroscience of visual processing and color perception. One of the fundamental tenets is that colors affect colors around them. To make something that looks real, you need to get the colors exactly right. Even a small departure from reality will ruin the entire effect. That's partly why multiplying colors is problematic, since it results in a departure from real life behavior. The other half of this is to ignore any test involving still frames. We don't perceive still frames the same way as video -- it's why video compression is different, for example -- so we can't use stills in any test of realism.

Whenever someone points out that we really don't have a clue how to make simulated video indistinguishable from real life, someone comes out of the woodwork to point out all the reasons why it's right around the corner. That's been false for a decade, and it's not looking any better for the upcoming decade. It's easy to prove me wrong: Get a bunch of simulated videos together and show them to observers, mixed with real videos. They'll spot the real videos every time, if you don't use constrained or simplified scenes. Nature videos work well.

It seems like people just don't like the idea that graphics programming is a bag of tricks. They want it to be deeper. But you can throw in all the physically-based techniques you want, and the resulting video still won't look real.

I have to go to an appointment now, but maybe we can continue this in a few hours if you want.


> It's very difficult to figure out what the core of your argument is. This conversation is about video ...

Ah, I see the problem. You're right. I thought I was debating the idea that "nobody knows how to make anything look real.. no one knows." I just checked, and your first post didn't say anything about video. Your second one mentioned it in passing, but I didn't realize it was a constraint on what I could talk about. I see why I'm confused, and why I'm confusing you. I'm sorry! Honestly. I am indeed thinking of some other things besides 100% fully simulated video of nature that is unconstrained, when I try to make the claim that some people do know how to render some things realistically.

Here's a pretty good CG video, in my opinion. Which parts look fake to you at a glance? http://vimeo.com/15630517

> You appear to be offended by the idea that we can't create simulated video indistinguishable from real life. But we can't, and I've given you an experiment that will prove that we can't.

That's a negative. Personally, I don't think I can prove a negative, with any experiment. Are you sure it's provable?

Here's the core of my argument, the part that I thought I was debating. I think realism (undetectable to people) has been achieved with: material samples, constrained physics simulations, still images of architectural scenes, limited still images of natural scenes, elements in video (mixing live and CG footage), fully CG video environments for short periods of time, humans & faces but only in fairly constrained situations for short periods of time. I don't think realistic humans have been achieved in general. I do think realistic simulated video - that meets your criteria - will happen eventually, and I don't know when or claim anything about when.

> Obviously, photos don't work.

But you can demonstrate some color multiplication problems in fake photos, right? You're ruling out still images yet the only problem you've cited is one that affects every single pixel of all still CG images.

> Get a bunch of simulated videos together and show them to observers, mixed with real videos. They'll spot the real videos every time, if you don't use constrained or simplified scenes. Nature videos work well.

Okay, fair enough. I don't know what "constrained or simplified" means. Your goal posts could be anywhere, so I definitely can't win. I don't think this is easy though -- the best CG is still very expensive, and making something that looks very realistic is difficult. I could agree here and now that no CG ever rendered yet passes the unconstrained environment and complexity test when it comes to realism, and I would agree that realism is easier to achieve the more constrained and simplified the scene is. My argument is that the threshold of complexity that triggers unrealism is moving in the direction of more complex over time.

> It seems like people just don't like the idea that graphics programming is a bag of tricks. They want it to be deeper. But you can throw in all the physically-based techniques you want, and the resulting video still won't look real.

Now I'm getting really confused. Graphics is a bag of tricks, I don't have a problem with saying that, so I don't know which people you're talking about. Those of us practicing graphics have been saying that all along.

But you're saying that it can never happen? Using all the physically based techniques now existing and ever to be invented, it will never happen? I could simulate reality, and I won't ever get there, no simulation will?

I can see that you've thought about this a lot, and I can see that you know a lot about graphics. I honestly thought you were saying we're not physically based enough yet, and I was trying to show how we're getting there, but now I'm not sure I understand what your claim is, or what we're talking about. I do suspect we're getting down to what my friends call the "dictionary problem" - agreement that is accidentally violent due to miscommunication over a few words.


You can easily blow his claim out of the water by showing a video that is fully computer rendered but looks real.

Parts of the video you offered look very realistic, but the content breaks the sense of realism, e.g., vegetables shattering into tiny pieces, rocks tumbling upwards, etc.


what's not realistic with the lemons falling sequence?

i link to some examples in other posts below too.


Yeah, I agree. It's extremely realistic.


"all models are wrong, but some are useful"

i read this discussion as sillysaurus3 arguing the wrongness of ALL models, while not acknowledging dahart's examples of utility: constrained/simplified for some is good enough for others.


> It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on. If the result is 99.9% accurate physically, and 100% perceptually indistinguishable to humans, then it's a valid predictive physical model.

> Subsurface scattering, for example, is modeling absorption more phyically, and replacing the simple multiplication with a simulation process, because we now have the computational power to do so.

Surely everyone has to accept that whatever we do here is always going to be an approximation to arrive at a similar looking result to reality that won't be following what's really going on to a low level? We're not going to be able to model individual photons, atoms, electrons, quantum effects etc. so it's always going to be an approximation/abstraction.

The goal should be whether it looks indistinguishable not if it is indistinguishable.


Yes, absolutely -- if the metric for realism isn't perceptual indistinguishability, then we might need to rethink what the word "realism" means. I don't think anyone here is arguing against that though. I think @sillysaurus3 is saying that the approximations we make in our simulations will always have large errors that cause visible artifacts -- that unrealistic CG will always be perceptually distinguishable from real video.

Modeling individual photons is happening in research rendering code, just not on the same scale as reality. Path tracing renderers using spectral colors & modern shading models are getting there. We might use a few million photons at a time to render a room instead of the quadrillions that reality uses. People are thinking about modeling individual atomic interactions, electrons, and quantum effects. There might even be experimental renderers already that do this, I wouldn't be the least bit surprised. If not, it'll happen pretty soon.

BTW, there is no super distinct line when you're talking about the simulation of atomic interactions and electromagnetism, even for the 100% full simulation of all subatomic particles individually. I'm not sure that's possible yet, but I am pretty sure it's unnecessary for "realism" in the context of film making.

There are shading models that account for electromagnetic & atomic effects. Physically based shaders obey Helmholtz reciprocity and energy conservation, among other things. Some of them account for multiple atomic level reflections. This is a statistical way of accounting for atomic effects, just like color multiplication is a statistical way of accounting for spectral absorption. New shading models every year are accounting for smaller and smaller margins of error from the previous approximations. Check out this crazy shading model from 25 years ago that is based on electromagnetism theory and accounts for atomic level shadowing & masking: http://www.graphics.cornell.edu/pubs/1991/HTSG91.pdf I think this paper still wins the award for most equations in a Siggraph paper. It didn't win any realism awards though. :P
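For what Helmholtz reciprocity and energy conservation mean in code, here's a toy Python check using the simplest physically based shading model, the Lambertian BRDF (the albedo and direction vectors are made up for illustration):

```python
import math

def lambertian_brdf(albedo, w_in, w_out):
    # A Lambertian surface reflects equally in all directions: albedo / pi.
    # The direction arguments are unused precisely because it's Lambertian.
    return albedo / math.pi

albedo = 0.8
wi = (0.0, 0.6, 0.8)
wo = (0.5, 0.5, 0.707)

# Helmholtz reciprocity: swapping incoming/outgoing directions changes nothing.
assert lambertian_brdf(albedo, wi, wo) == lambertian_brdf(albedo, wo, wi)

# Energy conservation: integrating brdf * cos(theta) over the hemisphere
# yields exactly the albedo, which must not exceed 1.
reflected_fraction = lambertian_brdf(albedo, wi, wo) * math.pi
print(reflected_fraction)  # ~0.8, never more than 1
```

Real microfacet models like the Torrance-Sparrow family satisfy the same two constraints, just with much more machinery inside the function.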


> What does it mean to multiply two colors?

I wouldn't say that in a material model you're actually multiplying colors together. It just happens that when you record the amount of reflected low-, medium-, and high-frequency light under a spectrally flat light source, you can display these data as light and perceive a color.

With fancier equipment, you could record the full reflectance spectrum from every angle -- and polarization, too, why not -- and be able to predict what the (simple) material would look like under every light source. Maybe you're worrying about fluorescence, but then just record the data under varying monochromatic light sources.

If you're just talking about how RGB is spectrally anemic, sure, that's why lighting your house with only R, G, and B LEDs makes everything look so strange.

Spectral multiplication, as I understand it, is just a consequence of the law that if you increase the intensity of received radiation by a certain factor, then the intensity of the reflected radiation will increase by the same factor. Is there some important nonlinearity I'm missing? Where is the meaninglessness?
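To pin down where the disagreement comes from, here's a toy Python/NumPy experiment (every curve below is made up) comparing "multiply the spectra, then project to RGB" against the usual "project to RGB, then multiply". The two orders generally disagree, and that gap is exactly the approximation error being debated:

```python
import numpy as np

wavs = np.linspace(400, 700, 31)   # nm, coarse spectral bins

def band(center, width=50):
    # Crude Gaussian curve standing in for a sensor response or a spectrum.
    return np.exp(-((wavs - center) / width) ** 2)

rgb_basis = np.stack([band(600), band(550), band(450)])  # fake R, G, B responses

def to_rgb(spectrum):
    return rgb_basis @ spectrum / rgb_basis.sum(axis=1)

light = band(500, 150)                     # broad, smooth illuminant
surface = band(620, 20) + band(460, 20)    # spiky reflectance spectrum

physical = to_rgb(light * surface)         # multiply spectra first (physical order)
shortcut = to_rgb(light) * to_rgb(surface) # the per-channel RGB shortcut
print(physical, shortcut)                  # not equal in general
```

With smooth spectra the two results track each other closely; the spikier the reflectance or the illuminant, the worse the RGB shortcut gets -- which matches the "corner cases" framing upthread.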


Why not just win the argument by linking to a (youtube)video demonstrating photo realistic ray tracing? :) I would if I could, but I haven't found any raytraced animation that is indistinguishable from recorded video.


Sadly, the best CG is paid for and expensive, thus not available on YouTube. I also don't know where the best CG in recent movies is, because I can't tell it's CG, and they don't always talk about it.

You can find lots of articles like "10 Scenes You Didn't Know Were CGI" https://www.youtube.com/watch?v=61ETzC1UbM4 I doubt this meets the criteria or will satisfy @sillysaurus3, they're almost all a mix of real footage & CG.

http://area.autodesk.com/fakeorfoto is one way to see if you can identify stills with 100% accuracy. I didn't.


Sorry, I'm not convinced. :) The video talks about editing footage to add or remove details using cgi. The images are a better example, but those are all comparing cg vs heavily retouched images.


pretty fun, I scored 56%, first glance impressions.


i wouldn't be able to tell that the chair in the first few seconds of this isn't real: https://www.youtube.com/watch?v=4nKb9hRYbPA

on the blu-ray of pixar's finding dory, closely examine the sand and water (ignore the living creatures) in the first few minutes of the short film piper. you know a-priori it's not real, but i'd like to hear people describe what they think looks off with the environments (not the creatures). try to normalize that with the intent of a director trying to craft an identical ambiance from a recorded video (color temperatures may be made warmer in post, increased saturation, more vignetting, etc).

in big hollywood vfx house showreels, there's a lot that's obviously cg (again, we're still off on creatures), but what about the environments within? https://www.youtube.com/watch?v=pnfdpgDDLc8 https://www.youtube.com/watch?v=SabPuZ5coKk https://www.youtube.com/watch?v=_i0XuA9KLEo

in particular, i don't think people watching that night-time sequence of deepwater horizon in theaters would identify it as entirely computer generated.


From my experience this is very much true of the current state of real-time rendering (and there's nothing really wrong with that, because it's just not possible to run anything resembling physically accurate algorithms in real-time). However, in the world of offline rendering and raytracing, there has been considerable work on doing physically accurate rendering. Of course a lot of approximations are still made due to memory/CPU constraints, but it is a different world. "Physically Based Rendering" by Pharr and Humphreys is a good intro to this way of doing things.


If that were true, we'd see the evidence in Hollywood. But nobody knows how to make a realistic fully-simulated video. The reason everyone believes that it's just around the corner is because those academics talk with authority on the topic, and the still frames look pretty convincing. But still frames are completely different from video -- the human visual system processes video differently.

I've written about this before:

https://news.ycombinator.com/item?id=13892671

https://news.ycombinator.com/item?id=12461883

https://news.ycombinator.com/item?id=13893305

The only technique that we know produces realistic video is if you mix actual, real footage with simulated content. That's very effective, but it's unsatisfying for obvious reasons. I think it hints at a way toward fully simulated realistic video, though.


To me the most interesting thing is not just a fully-simulated video but a fully-simulated interactive scene using VR or AR, and that is obviously an even bigger challenge. I don't personally think that either of these objectives are even close to being just around the corner, but I do think we are moving toward them. I have no idea how many additional orders of magnitude in computing power would be necessary to create a convincing simulation. The journey in that direction is a fun challenge though, right?

I did just read your previous posts, and it sounds like you have a pretty interesting history of working on all of this stuff.


I can't judge the popularity aspect of it, but "nobody knows how to make anything look real" obviously assumes that the goal is to make it "look real".

There are so many interesting aspects of modern computer graphics that have nothing to do with rendering realism. My favorite is visualization, or using graphics to create a visualization of a process that is not visible normally like fluid dynamics. But there are lots of others.

I think that if you start from the most featured APIs you end up jumping in with a hugely steep learning curve that really hinders learning. Starting from the basics, very simple OpenGL or WebGL rendering, learning the algebra behind projections so you can think about what you want to see, and then the notions of texturing and shading, all help get from not knowing anything to knowing something much more efficiently.
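As a taste of "the algebra behind projections", here's a short Python/NumPy sketch of the standard OpenGL-style perspective matrix (the field of view and the test point are arbitrary):

```python
import math
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

P = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
point = np.array([1.0, 1.0, -5.0, 1.0])  # a point 5 units in front of the camera
clip = P @ point
ndc = clip[:3] / clip[3]                 # the perspective divide
print(ndc)                               # normalized device coords, each in [-1, 1]
```

Working through why the -1 in the bottom row produces the divide-by-depth is exactly the kind of exercise that makes the later API details much less mysterious.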


"This is an unpopular opinion, but nobody knows how to make anything look real."

Unpopular? By whom? Computer graphics is a crafty bag of tricks sprinkled on top of very concrete linear algebra and computational geometry.

That said, it's actually hard to create appealing images just in plain old photography, so anyone thinking computer graphics would facilitate any artistic aspects is in for disappointment.

The math needs to work, but you need artistic talents to make anything look good/real.


> A bag of tricks that mostly look good in certain constrained circumstances.

Yeah. Computer graphics is all smoke and mirrors... except smoke and mirrors, because those are difficult :-)


What looks good, what looks real, is actually an active area of psychophysics research in high-dynamic range image capture and rendering.

The real world is high dynamic range, but human vision is much more limited, though also spatially variant and adaptive. So we get multiple optimized "snapshots" of a scene, rendered in real time and mapped into what our brain considers one scene. Look toward a high-key, high-brightness area of a scene and the retinal gain function clamps down, the iris closes, and you get properly rendered details in those areas; look toward a tree-shaded bench, at a rock under that bench, and the iris opens, the gain function increases, and again the shadow details are rendered. Cameras are just starting to do this.

But that's capture. It's still HDR. Now you have to re-render all of that if you're going to reproduce it on a lower dynamic range device like a computer display. There aren't that many HDR screens on the market yet, but those are also coming, and they mean we won't have to do that next-stage re-rendering, which is a lot more subjective; why some renderings look real and others look cartoonish is still actively researched.

And the same would be true for computer generated graphics rather than captures.
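One concrete form that re-rendering step takes is a tone-mapping operator. A minimal Python sketch using the simple global Reinhard curve L/(1+L) (real tone mappers add local adaptation, color handling, and more):

```python
def reinhard(luminance):
    # Compress an HDR luminance value into the display range [0, 1).
    return luminance / (1.0 + luminance)

for L in (0.1, 1.0, 10.0, 1000.0):
    print(L, "->", reinhard(L))  # every input lands below 1.0
```

This curve is monotone and never clips, but it's exactly the kind of subjective choice mentioned above: other operators map the same HDR capture to noticeably different-looking images.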


In a lot of ways, all that old software rasterizing theory is even more relevant now, because shader languages really don't give you very much. They can convert polygons into screen fragments, but it's up to you to figure out projection, mapping, lighting, and everything else.

It makes advanced techniques infinitely easier. There is stuff you just couldn't do on the fixed function pipeline without a vendor-specific extension. But it makes for a huge learning curve for simple stuff.

For that reason, I generally suggest people start with a game engine like Unity or PlayCanvas, or if they are already experienced programmers, a scene graph like Three.js.



You're the creator of that website, right? Thank you very much for it, it's great, helped me a lot.


Just skimmed one of the articles there, it looks like a really good site.


> When you decide you want to go into low level APIs, if you are comfortable with Apple systems, Metal will likely be easier than Vulkan/DX12.

Why would you do this, though? You'll restrict yourself to a comparatively tiny market (10% PC market share, 20% smartphone market share), and your low level knowledge will be somewhat nontransferable.


Most concepts in Metal build on concepts learned from GL/DX11 and carry on to Vulkan/DX12.

This is better than starting directly in Vulkan/DX12, quickly getting overwhelmed and giving up. Plus it makes the step from GL/DX11 easier.

I bought an iPad last month just so I can play with Metal on an actual iOS device at home :)

I don't think hobbyists care about market share that much; they want to get hands-on and learn, and Metal is a rather fun API to work with.


As far as mobile goes, iOS has some compelling advantages. In particular, you don't have the headache of supporting all the different Android devices. Plus, despite the difference in raw numbers of devices, iOS offerings often bring in as much or more money than the Android version.


>> even a copy of Computer Graphics: Principles and Practice will look fine on your desk

My copy of CGPnP is from 1993. Math doesn't get old, we just build more on top of it ;-)


If you're interested in something that's not strictly real-time, a good book is Physically Based Rendering http://www.pbrt.org/ . It uses a literate-programming style, which is neat. It's about path tracing, which is similar to ray tracing but can give an unbiased approximation to the rendering equation (which is a reasonably accurate model of "everyday" optics). Here is a JS+WebGL (mostly WebGL ;) ) interactive path tracer: http://madebyevan.com/webgl-path-tracing/
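For reference, the rendering equation mentioned above (from Kajiya's 1986 paper) can be written in its hemispherical form as:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Outgoing radiance is emitted radiance plus incoming radiance weighted by the BRDF and the cosine term, integrated over the hemisphere above the point. A path tracer estimates that integral with Monte Carlo sampling, which is where the "unbiased" claim comes from.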

---

Vulkan is pretty boring, to be honest. I'm very doubtful that you want to bother with it if you're just starting. Using these APIs is like filling out tax forms, but Vulkan is a lot more effort than OpenGL (then again, OpenGL is weird...). Learning shaders in OpenGL will be pretty transferable knowledge. Vulkan uses a lower-level, assembly-like language, SPIR-V, that doesn't pretend to be C (as GLSL does), but there are GLSL-to-SPIR-V compilers. I can't comment on DX (I haven't used it).

Here's a tutorial on drawing your first triangle: https://software.intel.com/en-us/articles/api-without-secret... . Note that there is a lottttt of code to do this (this is part 3...) I really think you could learn more about modern graphics with glBegin(GL_TRIANGLES) and writing GLSL shaders than boring yourself with Vulkan.

If you do want to play with Vulkan there is this book (piggy-backing off the rep from the old "OpenGL Bible") https://www.amazon.com/Vulkan-Programming-Guide-Official-Lea... . I can't give it an honest review because I got bored; I plan to pick it up again later...


Yeah, this is one of the downsides of modern graphics.

Taking OpenGL as an example: in the old days you could just download a NeHe project to use as a framework to learn from. To render the basics, you just specified triangles by their vertices with e.g. glBegin(GL_TRIANGLES), as Jacob says. Within a few days you could be close to the cutting edge of graphics development.

But then it became far more complicated as graphics cards added tons of optimisations to improve speed. Instead of just specifying triangles you then had to create lists, indirect lists, bindings etc. Now as per his example, you have to add a ton of extra boilerplate to just render a simple triangle. And if it doesn't just work it's very hard to figure out why.

What you can also do is use an engine like Unity, then just set it up so that you add some direct OpenGL calls in to create and render objects (I use it in this manner for some dynamics things, when I don't want to create GameObjects for Unity to manage). But that's not a very "pure" way of learning graphics development. I guess it also comes down to what you want to do with the knowledge in the end.


> you have to add a ton of extra boilerplate to just render a simple triangle

It's true the modern OpenGL pipeline is a bit more boilerplatey.

But that small amount of extra code means it's vastly more flexible. Rather than programming stuff into a fixed pipeline that can only do a limited set of predetermined things, everything is centred around shaders you write, and they can do anything you want. The boilerplate is just needed to compile the shaders, and feed information to them.

Also, while it's slightly more code you're writing, the amount of code executed on the CPU is less, because mostly you just set up a draw call and let the GPU do the rest. Much more efficient.

And the boilerplate is small, if a little intimidating. This stars effect is 156 lines of JS code (of which only about half is actual WebGL stuff), and 24 lines of GLSL shader code: https://hoshi.noyu.me/

(I should add that I used to feel the same way and really didn't get modern OpenGL, but after a few abortive attempts I finally grokked the basic concepts, and once you have those, it's plain sailing, because most things are just small changes to shaders.)


It reminds me of the progression that I've gone through mentally, regarding 3D graphics.

1. What's all this matrix crap? I can take object and camera coords and calculate out a screen coordinate easily with some simple algebra and trig.

2. Oh, OK. Use matrices properly, and you can get rid of some repeated operations.

3. Classic GL mostly takes care of the matrix operations for me, and leaves me with a simple-ish interface for modifying the state machine (goodness, there's a lot of data to feed into it though).

4. Modern OpenGL: Erghh, boilerplate!

5. Build some utility functions and bootstrap yourself out of there, and the interface is so much more powerful. Sure, you have to explicitly specify some more things. Another way to put that is that you can change things that used to be implicitly defined. It's a wonderful bit of freedom, actually.


I feel your pain; I learned with NeHe and then went on to GLES/Android graphics... only to find that everything is different. Now, though, I see why: while the "old" way was to talk to the state machine with loads of tiny CPU instructions, you now load your data to the GPU in one go, which costs less CPU time if done right, and is generally closer to what you actually want with regard to loading models, etc.


In some ways, I find the modern OpenGL with shaders to be simpler than the fixed-function pipelines of old. The important parts, like the projection math, happen in your code. If you understand the math, it's easier to follow than a bunch of API calls to modify global matrix stacks.


You also get the flexibility of how you want to implement certain matrix math, eg. if it's calculated on the CPU or GPU depending on your needs.


It's still like that. Ok, you aren't going to know all of the advanced graphics techniques, but you can render a scene with multiple cubes and a first-person camera in 100 lines of code.

Rendering a triangle to the screen is easy; it takes understanding the unique way these tools work, but at the end of the day it's not complicated stuff.


That's a bold enough statement that I wonder if you have a reply to the OP's question.


You got me.

This is just from fiddling around in my free time and reading a lot of tutorials, but here's a WebGL renderer that will render a 40x40x40 block of semi-randomized cubes. You can clone that and open index.html in a browser. WASD work about as you expect and spacebar locks the mouse.

regl is cool.

https://github.com/aaron-lebo/dog

OpenGL works pretty much the same in every language, and so much of the legwork ends up being matrix math. Once that starts to click, you've got the heart of a lot of graphics tech, even if it's not state of the art.


Thanks, I'll take a look.


If you have any interest in WebGL (1), I've been publishing screencasts on YouTube for a while now- I'm up to nearly 100.

The playlist for WebGL is https://www.youtube.com/playlist?list=PLPqKsyEGhUnaOdIFLKvdk...

I'm also covering 3D Math fundamentals now.

My channel https://www.youtube.com/user/iamdavidwparker


Nice! Subscribed


- As a starter Realtime Rendering [0]

- The siggraph courses on shading [1]. The 2017 course will be on 30th of July [2]. Current techniques from AAA engines.

- Papers from JCGT [3].

- Plus the paper collection from Ke-Sen Huang linking all graphics related conferences [4]

have fun reading

[0] http://www.realtimerendering.com/book.html

[1] http://blog.selfshadow.com/publications/s2016-shading-course

[2] http://blog.selfshadow.com/publications/s2017-shading-course...

[3] http://jcgt.org/read.html?reload=1

[4] http://kesen.realtimerendering.com/


I can't recommend Scratchapixel enough for diving deep into the concepts behind CG (although it seems to be down for me at the moment ironically). I can't remember how much it goes into libraries or if it sticks to implementing things from scratch, but I find knowing the concepts behind something makes learning the libraries much easier anyway.

https://www.scratchapixel.com


Fabien Sanglard's bookshelf has all you need: http://fabiensanglard.net/Computer_Graphics_Principles_and_P...


We made a mind map for learning computer graphics :

https://learn-anything.xyz/computer-graphics

The basics node has the best resources for learning the subject.


Whilst not realtime rendering - Physically Based Rendering (Pharr, Jakob & Humphreys) http://www.pbrt.org/ is great for understanding what people are trying to achieve, without being distracted by very detailed optimisation etc.


What many people fail to realize is that the older, fixed-pipeline legacy APIs in modern graphics cards are emulated with thin layers directly on top of the modern stacks. On today's iOS devices, OpenGL ES 1.0 is emulated with the OpenGL ES 3.0 API, which very likely in turn is emulated with the Metal API.

If there were a genuine interest in helping the transition to the newer APIs, the different parties writing and implementing today's APIs would publicly make available the code for the emulation layers to the older APIs.


That wouldn't apply to Apple though. They don't care about interest, otherwise they would have supported modern OpenGL and Vulkan on macOS.


@BigJono - it would help to hear a little more about your overall idea or goal of what you might want to do. Like are you thinking maybe game engine programming? Mentioning Vulkan/DX12 implies you might want to get into real time engine/shader programming, but that's only a small slice of "modern computer graphics".

If you're interested in computer graphics in general, and googling "modern computer graphics", then the Vulkan/DX12 APIs aren't super important, the fundamentals of CG have not changed at all. The paradigm shift with those APIs is centered on performance, not on new concepts. You can learn vast amounts of computer graphics by writing a ray tracer or animation program or using off the shelf renderers, and never touch Vulkan or DX12.
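As a taste of the "write a ray tracer" route: the core of a toy ray tracer is just a handful of intersection tests, no graphics API required. A sketch in plain JavaScript (the helper is hypothetical, not from any library):

```javascript
// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t,
// which is a quadratic in t.
function raySphere(origin, dir, center, radius) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const oc = origin.map((v, i) => v - center[i]);
  const a = dot(dir, dir);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - 4 * a * c;
  if (disc < 0) return null;        // ray misses the sphere
  const t = (-b - Math.sqrt(disc)) / (2 * a);
  return t >= 0 ? t : null;         // nearest hit in front of the origin
}

// A ray from the origin looking down -z hits a unit sphere
// centered at z = -5 at distance t = 4 (one radius short of center).
console.log(raySphere([0, 0, 0], [0, 0, -1], [0, 0, -5], 1)); // 4
```

From there, shading a hit point against a light source and bouncing secondary rays is incremental work, and every piece of it teaches fundamentals that carry over to real-time graphics too.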

There are definitely lots of great suggestions here, but it's a wide variety, because it all depends on what you envision or hope to do. It doesn't have to be fully formed or thought out, but if you had some inkling like 'hey, I saw this awesome procedural animation on Vimeo and I want to learn how to do that', or 'I'd love to work for Valve someday... what steps do I have to start taking?', or 'I was thinking I should add some 3D to my website', or 'I want to be a film animator', then with a little more insight into what you're hoping for, we can definitely give answers that will be more focused and helpful.


My main goal is to just learn something really well in my spare time for fun. I'm not too fussed about actually building anything, so I don't mind materials that are more theoretical than practical.

I also have a secondary goal, which is to move to another industry, as I'm starting to get a bit disillusioned with front-end web development. That's in the future though, and CG would be only one of many different fields I'd be considering.


I sort of know this area... There are two separate paths: real-time and non-real-time.

For non-real-time there is: http://www.pbrt.org/

For real-time there is this: http://www.realtimerendering.com/

Neither book gets into the details of specific APIs; they are theoretical.

Real-time techniques change almost completely about every 10 years, because faster GPUs make more techniques possible and obsolete the older, worse-looking ones.


It's not a book, but if you poke around the Unreal Engine source code you can learn quite a bit:

https://github.com/EpicGames/UnrealEngine/tree/release/Engin...

You need to join the org first, https://github.com/EpicGames/Signup


IMO you should step back and ask: what is your end goal?

1) To make games (commercial games) or other realtime graphics application?

2) To make 3D rendering software or other semi-non-realtime software?

3) To have fun / learn?

4) Etc.

If you just want to learn/have fun, my personal recommendation is to write your own software, from the ground up. Ignore DirectX and OpenGL (unless you need them for a context to get pixels on the screen). Use a fast, native language like C/C++.

Teaching yourself and exploring is 100x more fun than reading a book, IMO (at the same time, you may learn faster using a book).

Try working with a naive projection algorithm to get 3D points on the screen, like y' = y +/- z (a Zelda-esque bird's-eye view). When you are comfortable with matrices and vectors, learn "real" projection algorithms.
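A sketch of what that naive projection might look like (in plain JavaScript rather than C/C++, just to keep it short):

```javascript
// Naive "Zelda-esque" projection: ignore perspective entirely and
// shear depth into the vertical axis, so an object's height on
// screen encodes its distance from the camera.
function naiveProject([x, y, z]) {
  return [x, y - z]; // y' = y - z
}

// Two points at the same world height but different depths
// land at different screen heights.
console.log(naiveProject([1, 0, 3])); // [1, -3]
console.log(naiveProject([1, 0, 8])); // [1, -8]
```

No matrices, no trig, but it's enough to draw a recognizable 3D scene, and it makes the later jump to real perspective projection feel like a refinement rather than magic.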


I am picking up computer graphics again now, too, after dabbling in it well over 18 years ago (and for a small iOS game 8 years ago). I am currently restricting myself to Apple's platforms; this might not be what you want, but maybe others prefer it. They have great SceneKit WWDC videos dating back all the way to 2012. I recommend watching them from 2012 to 2017 in chronological order. I am also learning Blender, a free 3D authoring tool, and can recommend the free wiki book at https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro .



I would recommend starting with tools like TouchDesigner and/or Houdini. Jumping into the programming side right away can be done, but understanding things visually should give you a much better foundation. From there, you can write shaders in real time in TouchDesigner or put them together with nodes in Houdini. Simple expressions and small Python fragments can help you understand what certain manipulations look like. After that, the programming side will make a lot more sense.


Slightly off-topic: I am looking for resources on geometry applied to computer graphics (because I have forgotten everything I learned at school). Any advice?



Awesome resource on differential geometry applied to graphics: https://www.cs.cmu.edu/~kmcrane/Projects/DDG/paper.pdf


Any introductory linear algebra course will give you everything you'll need.


Does anyone have comments on Ray Tracing Minibooks -- Ray Tracing in One Weekend, Ray Tracing: the Next Week and Ray Tracing: The Rest of Your Life?

https://www.amazon.com/Ray-Tracing-Weekend-Minibooks-Book-eb...


I just finished the first and I made it through in 2 days without really trying. I went through it a second time, trying to use the example code as less of a crutch and do it myself, while following different topics that came up.

It's a great overview of the subject. I /really/ like that each chapter is fairly short, you have a working example at the end of each one, he explains both the concept and why he approaches a certain way in code.

I should also mention, learning raytracing won't directly help if you're trying to learn realtime computer graphics.


Real Time Rendering is particularly good at explaining the underlying concepts of modern GPU rendering pipelines.


I'm actually trying to learn compute shaders specifically. I've followed all the tuts and guides I could find and have made some neat things, but I still want more depth than I can just find through Google. If anyone has any recs for further study I'd really appreciate it :)


I understand that Vulkan is more efficient for the CPU, not the GPU. However, because it's low-level, giving more control over the hardware, it can enable new techniques that weren't possible before.

Do any of these new Vulkan (or Metal or DX12) techniques actually exist yet?


There was a book written by a US college CS professor that I saw a while ago online. I think it was free to read online, maybe other versions were paid. Unfortunately don't remember the professor or book name now. It covered OGL and HTML Canvas. I remember thinking it looked good. Maybe someone else here can say what the book name is.

Edit: Maybe covered WebGL too.


To be honest, books on graphics programming are usually about old, consolidated methods. The new stuff is everywhere: in blogs, in papers, in shadertoys.


What do you mean by consolidated in this context?




Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact

Search: