Are the old recommendations still relevant? Or are there newer books which are more relevant to someone looking to get into the field today?
A lot of the theory from classic books is still valid, so even a copy of Computer Graphics: Principles and Practice will look fine on your desk. "Real Time Rendering" by Moller/Haines and "Physically Based Rendering" by Pharr are excellent. "Game Engine Architecture" by Gregory and "Mathematics for 3D Game Programming and Computer Graphics" by Lengyel will probably prove very useful and relevant to you as well.
For online resources, the free https://learnopengl.com/ and the $10 http://graphicscodex.com/ are both fantastic. Make sure to read https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-..., and also study the techniques used for the craziest entries on https://www.shadertoy.com/
And then, lots and lots of papers and presentations from the past 5 or 10 years of the Game Developers Conference and Siggraph.
When I first started out, I chased book after book looking for knowledge. "Someone must know how to make things look real, right? Everyone is talking so authoritatively on the topic." No, no one knows. There's no source of knowledge that represents the cutting edge of the field, because the cutting edge is whatever happens to look pretty good today. And that's mostly thanks to very good art, not very good techniques.
Just dive in and start doing geometric puzzles. Look at it like a game, not like a quest for knowledge. If you have fun with it you'll go further than any book will take you.
The trend in film rendering is, across the board, graduating from "tricks" that work to physically based processes, to whatever degree is feasible. The setups are getting simpler, not more complex. Surface modeling, material modeling, color processing, lighting, and rendering have all moved by leaps and bounds in the last 10 years. The field is in the process of shifting from art to physics, and the job of CG lighting technicians is becoming closer to the job of stage lighting technicians, because the lighting and rendering are physically based now.
It's possible there are developments you don't know about. It's also possible you're falling victim to Sturgeon's law and not seeing the top 10% clearly; there is plenty of crappy CG that doesn't look real -- the vast majority of it doesn't. But the best of it is realistic and getting better every single year, and the people studying it do know some ways (and are currently adding more) to improve the realism when the computational power arrives.
I'll give you a simple way to defeat any question of whether we know how to make something look real: What does it mean to multiply two colors?
If you chase down the logic, the true answer is "It's meaningless. It just happens to be an approximation that looks good in most cases." But it has nothing to do with how light works in real life. Yet every engine multiplies colors because the alternative is too computationally expensive -- and it still wouldn't produce realistic results because we don't source our art from real life. Artists typically control the content, and any art-driven pipeline is doomed to look pretty good but not real.
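For concreteness, here is what "multiplying two colors" means in practice: componentwise multiplication of linear RGB channels, used as the standard diffuse-shading approximation. A minimal sketch (the light and surface values below are made up):

```python
# Componentwise multiplication of linear RGB values. All values are
# linear (not gamma-encoded) and in [0, 1].

def multiply_colors(light_rgb, surface_rgb):
    """Reflected color = light * surface reflectance, per channel."""
    return tuple(l * s for l, s in zip(light_rgb, surface_rgb))

light = (1.0, 0.5, 0.25)     # a warm-ish light source
surface = (0.25, 0.75, 0.5)  # a green-ish surface reflectance
print(multiply_colors(light, surface))  # (0.25, 0.375, 0.125)
```

The physical hand-waving is in treating three channel values as a stand-in for a full reflectance spectrum; the multiplication itself is trivial.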
This isn't true.
> I'll give you a simple way to defeat any question of whether we know how to make something look real: What does it mean to multiply two colors? If you chase down the logic, the true answer is "It's meaningless."
It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on. If the result is 99.9% accurate physically, and 100% perceptually indistinguishable to humans, then it's a valid predictive physical model. It has everything to do with the result of what light does in real life, and if it didn't, we wouldn't be using it to approximate light.
Your thinking here seems to back up my suggestion that you might have missed out on some of the recent developments. Subsurface scattering, for example, is modeling absorption more physically, replacing the simple multiplication with a simulation process, because we now have the computational power to do so.
Your argument is attacking the non-simulation aspects of rendering without addressing whether a simulation that is simpler than real life is acceptable. If I can't tell the difference, does it count? Because I'm 100% certain that color multiplication is not the problem when it comes to CG not looking real.
It's amusing that you brush it off with "Oh, there are a few corner cases." Those corner cases are why it doesn't look real.
And no, it's not 99.9% accurate. You may be thinking of constrained scenes, where e.g. you shine a laser on a substance of a specific color and then measure the resultant color combination. But the complexity of real life defies such analysis.
If you're going to say I've missed some recent work, you'll need to cite sources. Then we can debate those.
EDIT: To clarify:
Your argument is attacking the non-simulation aspects of rendering without addressing whether a simulation that is simpler than real life is acceptable. If I can't tell the difference, does it count?
My argument is that if you get a bunch of people together, show them simulated video and real video, and ask "Which of these are simulated?" they will correctly identify the simulated video as "not real" with significant accuracy -- given modern techniques, probably >95% accuracy. The simulation needs to be of a non-trivial scene, like a waterfall or a valley. When you show real video side-by-side with simulated techniques, there's no contest.
If we truly knew how to make simulated video that looks real, without mixing any real-life footage, then the observers in the above scenario wouldn't be able to do any better than random chance. But they can, because we can't.
You're attacking my arguments and getting more hyperbolic without any examples. What specifically doesn't look real? What are you actually claiming? What are your criteria for whether something is "real"? What corner cases are you thinking of that cause color multiplication to break down so frequently that it's a bad approximation most of the time? Can you give some examples of state-of-the-art CG that intended to improve realism but doesn't look real?
I'm not claiming that everything looks real, nor that all CG is realistic. I'm claiming that CG is getting better over time, and that some things are already indistinguishable from real. The number of CG things that look realistic is going up over time, and it used to be zero. There is a trend here, and it contradicts your original thesis that nobody can render anything realistic.
My 99.9% number wasn't a claim, it was a made-up number (which I thought was obvious, sorry). I said "If the result is 99.9% accurate... then it's a valid model" to make a point: if multiplication is predictive, then it's a valid model. That's how all of physics works. Acceleration under gravity is an approximation.
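The gravity analogy can be made concrete: the constant-g model is "wrong" next to Newton's inverse-square law, yet predictive enough to count as a valid model. A quick sketch (standard physical constants; the 10 km altitude is just an illustrative choice):

```python
# Constant-g vs Newton's inverse-square law near Earth's surface.
# Both are approximations; one is just finer-grained than the other.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
R = 6.371e6     # radius of Earth, m

def g_at(height_m):
    """Inverse-square gravitational acceleration at a given altitude."""
    return G * M / (R + height_m) ** 2

# Even 10 km up (airliner altitude), the constant model g = g_at(0)
# is off by only about 0.3% -- "wrong" in the same sense that color
# multiplication is "wrong".
error = 1 - g_at(10_000) / g_at(0)
print(error)
```

If a model's error is below what an observer can detect, calling it "meaningless" proves too much; it would rule out nearly all of physics.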
You haven't demonstrated that multiplying doesn't work, you've only stated an opinion. I'd like to see some examples of what you mean, because it appears to work very well from where I'm sitting. The colors of grass, bricks, and wood -- diffuse materials -- are very closely approximated by multiplication, enough that we can in fact measure how good the approximation is, and humans cannot tell the difference. Therein lies the problem with your argument: if I can't tell the difference, that is my definition of realistic. It doesn't matter what happened under the hood. You seem to be claiming that only reality is good enough to be realistic, because anything else is cutting corners.
I'm not sure I understand what you mean about real life defying BRDF measurement. One of the ways that CG is getting more realistic is precisely through various gonioreflectometers, some of which shine lasers and measure the output from all angles. Material catalogs are currently being constructed and sold to CG companies using higher and higher resolution measurements of exactly what you're claiming isn't possible and doesn't help. People buy them because they improve realism.
> My argument is that if you get a bunch of people together, show them simulated video and real video, and ask "Which of these are simulated?" they will correctly identify the simulated video as "not real" with significant accuracy -- given modern techniques, probably >95% accuracy.
Every year Autodesk runs the test "Fake or Foto". http://area.autodesk.com/fakeorfoto Less than 10% of people are getting them all right this year, and a considerable number of people are under the 50% line. This isn't scientific, of course, but see if you can score 100%. This is an indicator that CG is pretty good. Will you admit it if you don't score 100%?
Earlier you made an argument that stills are looking okay, but moving things aren't. The problem with that argument vs color multiplication is that color multiplication is used on stills, so if that's what's breaking down, stills should be obviously unrealistic.
> You haven't demonstrated that multiplying doesn't work, you've only stated an opinion. I'd like to see some examples of what you mean, because it appears to work very well from where I'm sitting.
It's not an opinion that multiplying colors has nothing to do with how light behaves in real life. I even said that it was an approximation that works fairly well, so if you're going to simply ignore the things that I did say, this discussion isn't going anywhere productive. The point is that it's an approximation, and it's partly why we subconsciously recognize simulated video as fake.
> Every year Autodesk runs the test "Fake or Foto"
Obviously, a still-photo test doesn't work; it isn't a valid realism test. This conversation is about video -- the human visual system processes video completely differently. It's not just a matter of taking still frames and stringing them together. The test is invalid. If you use video (of non-trivial length, with non-trivial scene complexity -- any nature video will do fine), you'll see the participants' accuracy skyrocket to nearly 100% correctly identifying simulated video.
If you're truly curious about the reasons why a simulated video looks fake, look into some books about the neuroscience of visual processing and color perception. One of the fundamental tenets is that colors affect colors around them. To make something that looks real, you need to get the colors exactly right. Even a small departure from reality will ruin the entire effect. That's partly why multiplying colors is problematic, since it results in a departure from real life behavior. The other half of this is to ignore any test involving still frames. We don't perceive still frames the same way as video -- it's why video compression is different, for example -- so we can't use stills in any test of realism.
Whenever someone points out that we really don't have a clue how to make simulated video indistinguishable from real life, someone comes out of the woodwork to point out all the reasons why it's right around the corner. That's been false for a decade, and it's not looking any better for the upcoming decade. It's easy to prove me wrong: Get a bunch of simulated videos together and show them to observers, mixed with real videos. They'll spot the real videos every time, if you don't use constrained or simplified scenes. Nature videos work well.
It seems like people just don't like the idea that graphics programming is a bag of tricks. They want it to be deeper. But you can throw in all the physically-based techniques you want, and the resulting video still won't look real.
I have to go to an appointment now, but maybe we can continue this in a few hours if you want.
Ah, I see the problem. You're right. I thought I was debating the idea that "nobody knows how to make anything look real.. no one knows." I just checked, and your first post didn't say anything about video. Your second one mentioned it in passing, but I didn't realize it was a constraint on what I could talk about. I see why I'm confused, and why I'm confusing you. I'm sorry! Honestly. I am indeed thinking of some other things besides 100% fully simulated video of nature that is unconstrained, when I try to make the claim that some people do know how to render some things realistically.
Here's a pretty good CG video, in my opinion. Which parts look fake to you at a glance? http://vimeo.com/15630517
> You appear to be offended by the idea that we can't create simulated video indistinguishable from real life. But we can't, and I've given you an experiment that will prove that we can't.
That's a negative. Personally, I don't think I can prove a negative with any experiment. Are you sure it's provable?
Here's the core of my argument, the part that I thought I was debating. I think realism (undetectable to people) has been achieved with: material samples, constrained physics simulations, still images of architectural scenes, limited still images of natural scenes, elements in video (mixing live and CG footage), fully CG video environments for short periods of time, and humans & faces, but only in fairly constrained situations for short periods of time. I don't think realistic humans have been achieved in general. I do think realistic simulated video -- video that meets your criteria -- will happen eventually, and I don't know when, nor do I claim anything about when.
> Obviously, photos don't work.
But you can demonstrate some color multiplication problems in fake photos, right? You're ruling out still images yet the only problem you've cited is one that affects every single pixel of all still CG images.
> Get a bunch of simulated videos together and show them to observers, mixed with real videos. They'll spot the real videos every time, if you don't use constrained or simplified scenes. Nature videos work well.
Okay, fair enough. I don't know what "constrained or simplified" means. Your goal posts could be anywhere, so I definitely can't win. I don't think this is easy though -- the best CG is very expensive still, making something that looks very realistic is difficult. I could agree here and now that no CG ever rendered yet passes the unconstrained environment and complexity test when it comes to realism, and I would agree that realism is easier to achieve the more constrained and simplified the scene is. My argument is that the threshold for where too complex triggers unrealism is moving in the direction of more complex over time.
> It seems like people just don't like the idea that graphics programming is a bag of tricks. They want it to be deeper. But you can throw in all the physically-based techniques you want, and the resulting video still won't look real.
Now I'm getting really confused. Graphics is a bag of tricks, I don't have a problem with saying that, so I don't know which people you're talking about. Those of us practicing graphics have been saying that all along.
But you're saying that it can never happen? Using all the physically based techniques now existing and ever to be invented, it will never happen? Even if I could simulate reality, I still won't get there, and no simulation ever will?
I can see that you've thought about this a lot, and I can see that you know a lot about graphics. I honestly thought you were saying we're not physically based enough yet, and I was trying to show how we're getting there, but now I'm not sure I understand what your claim is, or what we're talking about. I do suspect we're getting down to what my friends call the "dictionary problem" - agreement that is accidentally violent due to miscommunication over a few words.
Parts of the video you offered look very realistic, but the content breaks the sense of realism, e.g., vegetables shattering into tiny pieces, rocks tumbling upwards, etc.
i link to some examples in other posts below too.
i read this discussion as sillysaurus3 arguing the wrongness of ALL models, while not acknowledging dahart's examples of utility: constrained/simplified for some is good enough for others.
> Subsurface scattering, for example, is modeling absorption more phyically, and replacing the simple multiplication with a simulation process, because we now have the computational power to do so.
Surely everyone has to accept that whatever we do here is always going to be an approximation -- something that arrives at a result that looks similar to reality without following what's really going on at a low level? We're not going to be able to model individual photons, atoms, electrons, quantum effects, etc., so it's always going to be an approximation/abstraction.
The goal should be whether it looks indistinguishable, not whether it is indistinguishable.
Modeling individual photons is happening in research rendering code, just not on the same scale as reality. Path tracing renderers using spectral colors & modern shading models are getting there. We might use a few million photons at a time to render a room instead of the quadrillions that reality uses. People are thinking about modeling individual atomic interactions, electrons, and quantum effects. There might even be experimental renderers already that do this, I wouldn't be the least bit surprised. If not, it'll happen pretty soon.
BTW, there is no super distinct line when you're talking about the simulation of atomic interactions and electromagnetism, even for the 100% full simulation of all subatomic particles individually. I'm not sure that's possible yet, but I am pretty sure it's unnecessary for "realism" in the context of film making.
There are shading models that account for electromagnetic & atomic effects. Physically based shaders obey Helmholtz reciprocity and energy conservation, among other things. Some of them account for multiple atomic level reflections. This is a statistical way of accounting for atomic effects, just like color multiplication is a statistical way of accounting for spectral absorption. New shading models every year are accounting for smaller and smaller margins of error from the previous approximations. Check out this crazy shading model from 25 years ago that is based on electromagnetism theory and accounts for atomic level shadowing & masking: http://www.graphics.cornell.edu/pubs/1991/HTSG91.pdf I think this paper still wins the award for most equations in a Siggraph paper. It didn't win any realism awards though. :P
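Energy conservation, at least, is easy to check numerically for the simplest physically based BRDF. A sketch (a Lambertian BRDF with a made-up albedo, integrated by uniform hemisphere sampling):

```python
import math, random

def uniform_hemisphere():
    """Uniform random direction on the unit hemisphere (normal = +z)."""
    cos_theta = random.random()
    phi = 2 * math.pi * random.random()
    sin_theta = math.sqrt(max(0.0, 1 - cos_theta * cos_theta))
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def reflected_fraction(albedo, samples=200_000):
    """Monte Carlo estimate of the total energy a Lambertian BRDF
    (f = albedo / pi) reflects: the integral of f * cos(theta) over
    the hemisphere, sampled uniformly (pdf = 1 / (2 * pi))."""
    total = 0.0
    for _ in range(samples):
        cos_theta = uniform_hemisphere()[2]
        total += (albedo / math.pi) * cos_theta * (2 * math.pi)
    return total / samples

random.seed(0)
# A surface with albedo 0.7 should reflect ~70% of incoming energy,
# and never more than 100% (energy conservation):
print(reflected_fraction(0.7))
```

The 1/pi normalization is exactly what makes the integral come out to the albedo; a shader that reflects more energy than it receives is the kind of non-physical trick the newer models are designed to rule out.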
I wouldn't say that in a material model you're actually multiplying colors together. It just happens that when you record the amount of reflected low-, medium-, and high-frequency light under a spectrally flat light source, you can display these data as light and perceive a color.
With fancier equipment, you could record the full reflectance spectrum from every angle -- and polarization, too, why not -- and be able to predict what the (simple) material would look like under every light source. Maybe you're worrying about fluorescence, but then just record the data under varying monochromatic light sources.
If you're just talking about how RGB is spectrally anemic, sure, that's why lighting your house with only R, G, and B LEDs makes everything look so strange.
Spectral multiplication, as I understand it, is just a consequence of the law that if you increase the intensity of received radiation by a certain factor, then the intensity of the reflected radiation will increase by the same factor. Is there some important nonlinearity I'm missing? Where is the meaninglessness?
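One nonlinearity does hide in the usual pipeline, though not in the per-wavelength physics: it's in collapsing spectra to three numbers before multiplying. A toy numeric sketch with four made-up wavelength bands and made-up sensor curves:

```python
# Three made-up sensor response rows (R, G, B) over four wavelength bands.
SENSORS = [
    (0.0, 0.0, 0.5, 0.5),  # "R" responds to the two longest bands
    (0.0, 1.0, 0.0, 0.0),  # "G"
    (1.0, 0.0, 0.0, 0.0),  # "B"
]

def to_rgb(spectrum):
    """Collapse a spectrum to three channel responses."""
    return tuple(sum(s * w for s, w in zip(spectrum, row)) for row in SENSORS)

light       = (0.8, 0.6, 0.4, 0.2)  # spectral power of the source
reflectance = (0.5, 0.5, 1.0, 0.0)  # spectral reflectance of the surface

# Physically grounded: multiply per wavelength, then collapse to RGB.
spectral_first = to_rgb(tuple(l * r for l, r in zip(light, reflectance)))

# The rendering shortcut: collapse both to RGB, then multiply channels.
rgb_first = tuple(a * b for a, b in zip(to_rgb(light), to_rgb(reflectance)))

print([round(v, 3) for v in spectral_first])  # [0.2, 0.3, 0.4]
print([round(v, 3) for v in rgb_first])       # [0.15, 0.3, 0.4] -- R disagrees
```

Multiplication per wavelength is linear and exact, as you say; the error appears only when two different spectra get mapped to the same three channel values, which is why spectral renderers and the broadband-light corner cases come up in this discussion.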
You can find lots of articles like "10 Scenes You Didn't Know Were CGI" https://www.youtube.com/watch?v=61ETzC1UbM4 I doubt this meets the criteria or will satisfy @sillysaurus3, they're almost all a mix of real footage & CG.
http://area.autodesk.com/fakeorfoto is one way to see if you can identify stills with 100% accuracy. I didn't.
on the blu-ray of pixar's finding dory, closely examine the sand and water (ignore the living creatures) in the first few minutes of the short film piper. you know a-priori it's not real, but i'd like to hear people describe what they think looks off with the environments (not the creatures). try to normalize that with the intent of a director trying to craft an identical ambiance from a recorded video (color temperatures may be made warmer in post, increased saturation, more vignetting, etc).
in big hollywood vfx house showreels, there's a lot that's obviously cg (again, we're still off on creatures), but what about the environments within?
in particular, i don't think people watching that night-time sequence of deepwater horizon in theaters would identify it as entirely computer generated.
I've written about this before:
The only technique that we know produces realistic video is if you mix actual, real footage with simulated content. That's very effective, but it's unsatisfying for obvious reasons. I think it hints at a way toward fully simulated realistic video, though.
I did just read your previous posts, and it sounds like you have a pretty interesting history of working on all of this stuff.
There are so many interesting aspects of modern computer graphics that have nothing to do with rendering realism. My favorite is visualization, or using graphics to create a visualization of a process that is not visible normally like fluid dynamics. But there are lots of others.
I think that if you start from the most fully featured APIs, you end up jumping in with a hugely steep learning curve that really hinders learning. Starting from the basics -- very simple OpenGL or WebGL rendering, learning the algebra behind projections so you can think about what you want to see, and then the notions of texturing and shading -- gets you from not knowing anything to knowing something much more efficiently.
Unpopular? By whom? Computer graphics is a crafty bag of tricks sprinkled on top of very concrete linear algebra and computational geometry.
That said, it's actually hard to create appealing images just in plain old photography, so anyone thinking computer graphics would facilitate any artistic aspects is in for disappointment.
The math needs to work, but you need artistic talents to make anything look good/real.
Yeah. Computer graphics is all smoke and mirrors... except smoke and mirrors, because those are difficult :-)
The real world is high dynamic range, but human vision is much more limited -- yet also spatially variant and adaptive. So we get multiple optimized "snapshots" of a scene, rendered in real time and mapped into what our brain considers one scene. Look toward a high-key, high-brightness area of a scene, and the retinal gain function clamps down, the iris closes, and you get properly rendered details in those areas; look toward a rock under a tree-shaded bench, and the iris opens, the gain function increases, and again the shadow details are rendered. Cameras are just starting to do this.
But that's capture. It's still HDR. Now you have to re-render all of that if you're going to reproduce it on a lower-dynamic-range device like a computer display. There aren't that many HDR screens on the market yet, but those are also coming, and they mean we won't have to do that next-stage re-rendering -- which is far more subjective, and it's still an active research question why some renderings look real and others look cartoonish.
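That next-stage re-rendering is tone mapping, and the simplest global operator shows the idea. A sketch (Reinhard's classic curve; the luminance values are made up):

```python
def reinhard(luminance):
    """Compress an HDR luminance in [0, inf) into displayable [0, 1)."""
    return luminance / (1.0 + luminance)

# Shadow detail, a mid-tone, and a sky far brighter than the display:
for L in (0.05, 1.0, 50.0):
    print(L, "->", round(reinhard(L), 3))
# 0.05 -> 0.048, 1.0 -> 0.5, 50.0 -> 0.98
```

Real pipelines use far fancier (often local, spatially varying) operators, which is where the subjective "real vs. cartoonish" question lives; this global curve is just the minimal version of the mapping step.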
And the same would be true for computer generated graphics rather than captures.
It makes advanced techniques infinitely easier. There is stuff you just couldn't do on the fixed function pipeline without a vendor-specific extension. But it makes for a huge learning curve for simple stuff.
For that reason, I generally suggest people start with a game engine like Unity or PlayCanvas, or, if they are already experienced programmers, a scene graph like Three.js.
Why would you do this, though? You'll restrict yourself to a comparatively tiny market (10% PC market share, 20% smartphone market share), and your low-level knowledge will be somewhat nontransferable.
This is better than starting directly in Vulkan/DX12, quickly getting overwhelmed and giving up. Plus it makes the step from GL/DX11 easier.
I bought an iPad last month just so I can play with Metal on an actual iOS device at home :)
I don't think hobbyists care about market share that much; they want to get hands-on and learn, and Metal is a rather fun API to work with.
My copy of CGPnP is from 1993. Math doesn't get old, we just build more on top of it ;-)
Vulkan is pretty boring, to be honest. I'm very doubtful that you want to bother with it if you're just starting. Using these APIs is like filling out tax forms, but Vulkan is a lot more effort than OpenGL (then again, OpenGL is weird...). Learning shaders in OpenGL will be pretty transferable knowledge. Vulkan uses a lower-level, assembly-like language, SPIR-V, which doesn't pretend to be C the way GLSL does, but there are GLSL-to-SPIR-V compilers. I can't comment on DX (I haven't used it.)
Here's a tutorial on drawing your first triangle: https://software.intel.com/en-us/articles/api-without-secret... . Note that there is a lottttt of code to do this (and this is part 3...). I really think you could learn more about modern graphics with glBegin(GL_TRIANGLES) and writing GLSL shaders than by boring yourself with Vulkan.
If you do want to play with Vulkan there is this book (piggy-backing off the rep from the old "OpenGL Bible") https://www.amazon.com/Vulkan-Programming-Guide-Official-Lea... . I can't give it an honest review because I got bored; I plan to pick it up again later...
Taking for example OpenGL, in the old days you could just download a NeHe project to use as a framework to learn from. To render basics you just specified triangles by their vertices with e.g. glBegin(GL_TRIANGLES) as Jacob says. In a small number of days you could have been close to the front tier of graphics development.
But then it became far more complicated as graphics cards added tons of optimisations to improve speed. Instead of just specifying triangles you then had to create lists, indirect lists, bindings etc. Now as per his example, you have to add a ton of extra boilerplate to just render a simple triangle. And if it doesn't just work it's very hard to figure out why.
What you can also do is use an engine like Unity, then just set it up so that you add some direct OpenGL calls in to create and render objects (I use it in this manner for some dynamics things, when I don't want to create GameObjects for Unity to manage). But that's not a very "pure" way of learning graphics development. I guess it also comes down to what you want to do with the knowledge in the end.
It's true the modern OpenGL pipeline is a bit more boilerplatey.
But that small amount of extra code means it's vastly more flexible. Rather than programming stuff into a fixed pipeline that can only do a limited set of predetermined things, everything is centred around shaders you write, and they can do anything you want. The boilerplate is just needed to compile the shaders, and feed information to them.
Also, while it's slightly more code you're writing, the amount of code executed on the CPU is less, because mostly you just set up a draw call and let the GPU do the rest. Much more efficient.
And the boilerplate is small, if a little intimidating. This stars effect is 156 lines of JS code (of which only about half is actual WebGL stuff), and 24 lines of GLSL shader code: https://hoshi.noyu.me/
(I should add that I used to feel the same way and really didn't get modern OpenGL, but after a few abortive attempts I finally grokked the basic concepts, and once you have those, it's plain sailing, because most things are just small changes to shaders.)
1. What's all this matrix crap? I can take object and camera coords and calculate out a screen coordinate easily with some simple algebra and trig.
2. Oh, OK. Use matrices properly, and you can get rid of some repeated operations.
3. Classic GL mostly takes care of the matrix operations for me, and leaves me with a simple-ish interface for modifying the state machine (goodness, there's a lot of data to feed into it though).
4. Modern OpenGL: Erghh, boilerplate!
5. Build some utility functions and bootstrap yourself out of there, and the interface is so much more powerful. Sure, you have to explicitly specify some more things. Another way to put that is that you can change things that used to be implicitly defined. It's a wonderful bit of freedom, actually.
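The matrix progression above ends at something like this: build one projection matrix, and the per-vertex work collapses to a multiply and a divide. A minimal sketch (OpenGL-style conventions assumed; the field of view and clip planes are made-up parameters):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective matrix (row-major, applied as M * [x,y,z,1])."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, x, y, z):
    """Multiply by the matrix, then perspective-divide to get NDC."""
    clip = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)

m = perspective(90.0, 1.0, 0.1, 100.0)
# Camera looks down -z. A point 1 unit up and 2 units ahead lands
# halfway up the screen with a 90-degree vertical field of view:
print(project(m, 0.0, 1.0, -2.0))  # y component ~0.5
```

This is exactly the "simple algebra and trig" from step 1, packed into one matrix you can compose with model and view transforms.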
Rendering a triangle to the screen is easy; it takes understanding the unique way these tools work, but at the end of the day it's not complicated stuff.
This is just from fiddling around in my free time and reading a lot of tutorials, but here's a WebGL renderer that will render a 40x40x40 block of semi-randomized cubes. You can clone that and open index.html in a browser. WASD work about as you expect and spacebar locks the mouse.
regl is cool.
OpenGL works pretty much the same in every language, and much of the legwork ends up being matrix math. Once that starts to click, you've got the heart of a lot of graphics tech, even if it's not state of the art.
The playlist for WebGL is https://www.youtube.com/playlist?list=PLPqKsyEGhUnaOdIFLKvdk...
I'm also covering 3D Math fundamentals now.
My channel https://www.youtube.com/user/iamdavidwparker
- The Siggraph courses on shading. The 2017 course will be on the 30th of July. Current techniques from AAA engines.
- Papers from JCGT.
- Plus the paper collection from Ke-Sen Huang, linking all graphics-related conferences.
have fun reading
The basics node has the best resources for learning the subject.
If there were a genuine interest in helping the transition to the newer APIs, the parties writing and implementing today's APIs would publicly release the code for emulation layers for the older APIs.
If you're interested in computer graphics in general and are googling "modern computer graphics", then the Vulkan/DX12 APIs aren't super important; the fundamentals of CG have not changed at all. The paradigm shift with those APIs is centered on performance, not on new concepts. You can learn vast amounts of computer graphics by writing a ray tracer or an animation program, or by using off-the-shelf renderers, and never touch Vulkan or DX12.
There are definitely lots of great suggestions here, but it's a wide variety, because it all depends on what you envision or hope to do. It doesn't have to be fully formed or thought out, but if you had some inkling like 'hey I saw this awesome procedural animation on Vimeo and I want to learn how to do that' or 'I'd love to work for Valve someday... what steps do I have to start taking?' or 'I was thinking I should add some 3d to my website' or 'I want to be a film animator', if we had a little more insight on what you're hoping, we can definitely get answers that will be more focused and helpful.
I also have a secondary goal, which is to move to another industry, as I'm starting to get a bit disillusioned with front-end web development. That's in the future though, and CG would be only one of many different fields I'd be considering.
For non-real-time there is: http://www.pbrt.org/
For real-time there is this: http://www.realtimerendering.com/
Neither book gets into the details of a specific API; they are theoretical.
Real-time techniques change almost completely about every 10 years: GPUs get faster, making more techniques possible and obsoleting the older, worse-looking ones.
You need to join the org first, https://github.com/EpicGames/Signup
1) To make games (commercial games) or other realtime graphics application?
2) To make 3D rendering software or other semi-non-realtime software?
3) To have fun / learn?
If you just want to learn/have fun, my personal recommendation is to write your own software, from the ground up. Ignore DirectX and OpenGL (unless you need them for a context to get pixels on the screen).
Use a fast, native language like C/C++.
Teaching yourself and exploring is 100x more fun than reading a book, IMO (at the same time, you may learn faster using a book).
Try working with a naive projection algorithm to get 3d points on the screen. Like y' = y +/- z (Zelda-esque bird's eye view). When you are comfortable with matrices and vectors, learn "real" projection algorithms.
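That naive projection, and the "real" one it grows into, each fit in a couple of lines. A sketch (coordinate conventions assumed: y is up, z goes into the screen, with z > 0 farther away):

```python
def naive_project(x, y, z):
    """Zelda-esque bird's-eye view: depth just shifts points up the screen."""
    return (x, y + z)

def pinhole_project(x, y, z, d=1.0):
    """Real perspective: divide by depth, so distant things shrink."""
    return (d * x / z, d * y / z)

print(naive_project(3, 0, 5))          # (3, 5): no foreshortening
print(pinhole_project(3.0, 0.0, 5.0))  # (0.6, 0.0): scaled down by depth
```

The jump from the first function to the second (and then to homogeneous matrices that fold in rotation and translation) is basically the whole projection story.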
It's a great overview of the subject. I /really/ like that each chapter is fairly short, you have a working example at the end of each one, and he explains both the concept and why he approaches it a certain way in code.
I should also mention, learning raytracing won't directly help if you're trying to learn realtime computer graphics.
Do any of these new Vulkan (or Metal or DX12) techniques actually exist yet?
Edit: Maybe covered WebGL too.