Filament: Physically-based rendering engine
599 points by corysama 6 months ago | 125 comments

 Kudos to Romain Guy on the Android Graphics team. Stunning WebGL demo here: http://www.curious-creature.com/2017/08/14/physically-based-... A good example of the visual state of the art on Android is probably the ARK dinosaur game. The Vulkan API is now included in the latest Android, and with the compact glTF 2.0 format we can expect great experiences on mobile ;)
 The WebGL demo was the initial prototype of Filament :)
 Quick question: Is WebGL support dropped in the final release?
 The engine was never intended to support WebGL. We only used WebGL to quickly prototype the lighting system.
 glTF is pretty great, to be honest. At work I am writing a tool to convert Creo View models to glTF 2.0 models, and I am experiencing actual joy while doing it. I looked at the COLLADA and glTF formats (I wanted something open) and very quickly chose glTF 2. I don't have any 3D format implementation experience, and I implemented glTF 2 in Golang in a day, or less. Anyway, it's a great format (AFAIK) and I wish more 3D modeling tools supported it. I imagine more will come on board as time passes.
 It's important to separate the idea of editor formats vs. delivery formats. COLLADA/FBX vs. glTF is very analogous to Photoshop PSD vs. PNG. Even if you could, you wouldn't fill a website with PSD-format images, right? Right? But there have been misguided WebGL apps that tried delivering COLLADA/OBJ/STL files to web pages. It's understandable: before glTF, there really wasn't a good non-secret, non-proprietary format with which to deliver 3D content. But now there is! Use it!
 I don't need 3ds Max to support glTF as an editor format; I just need it to support the format at least as well as it supports minimal OBJ, which is the lowest common denominator of 3D file formats. glTF is a runtime transfer format, and that's mostly how I'm going to use it. Others at my employer need to do VR work, and sometimes that requires the geometry to make a trip through something like 3ds Max. It would be nice if Autodesk acknowledged the existence and utility of glTF. People want to move away from FBX, so Autodesk are unlikely to provide glTF support until it is demanded of them by a large enough portion of their audience. Once I am done with this Creo View stuff I will look at writing a 3ds Max import plugin myself. It would definitely get used by my employer.
 I am a 3ds Max user, excited about glTF, and would be interested in contributing to an importer. Is your Creo View project by any chance open-source? Would your Max importer be open-source?
 No, but I bet someone could create a Creo View converter if they have access to PTC partner downloads. Specifically, the Creo View Files Tools that includes command-line tools to convert Creo View files to their ASCII representations.Writing an importer for those ASCII files is easy, and when you have the binary and ASCII versions of the same files side-by-side, working out the binary file format is made much easier.I haven't even begun work on a 3ds Max importer.
 Any thoughts on these criticisms from this Wikipedia article? https://en.wikipedia.org/wiki/GlTF#Criticism (I'm not agreeing or disagreeing with them, I'm genuinely curious to hear from someone working with the format at the implementation level.)
 I'm a few years removed from 3D work, so take this with a grain of salt, especially since today was the first time I've seen GITF. I've implemented several 3ds Max game asset importer/exporter tools for modding over the past decade.

While I like the look of GITF 2.0 quite a bit, it does seem to lack a number of extension points that I would probably need for a generic importer/exporter for handling game asset round trips (import, edit, export). I would love to handle import/export in a generic way, and have tried with FBX and COLLADA to poor effect in the past. I really want to take an asset, convert it to GITF, import it into Max, edit it, export from Max back to GITF, and finally bake back to the game format. The less software you need between the game format and the GITF file the better, but it should allow for automation/customization. You can get 80-90% there but miss quite a bit of fine detail needed in an interchange format (which is why this is a transmission format, I guess).

Extensions seem to be basic, and I question how well they work in 3D editors, but I'm not really sure how that works; it's probably editor specific. I've had a lot of issues trying to have a single file like COLLADA be properly supported in multiple editors unless it was a fairly basic model. That leads to a lot of wasted time for modelers trying to get the edited model back into the game and working correctly.

Back to GITF: collision seems to be missing. They admit that they don't have vertex compression implemented, or animation for things like material colors, or animation metadata (timelines, looping, ...). Is there a way to describe a partial surface, sort of like skinning, for other reasons (for example, Fallout 4 needs to tag certain surfaces/vertices as part of the head or arm for decapitation in game)? Anyway, it looks like a nice start, as it's all JSON (or binary), unlike GEX, OBJ, 3DS, ..., which each seem to need their own parsers, and they did only claim to be a transmission format. Again, I only did a cursory read of the format, so I might have missed things.
 It's glTF not GITF.
 I've always assumed that was an "L". Whoever named this thing and made the decision about capitalization needs to possibly think a bit harder next time.
 It is an "L", glTF means "GL Transmission Format".
 In that case it's still the capitalisation that is remarkably ill considered.
 glTF supports skinning and morph targets. It also supports vertex compression via Draco.
 I wonder if some of those are criticisms of glTF 1, rather than 2. glTF 2 is a lot different and much more fully featured than v1 was.

Scene structure in v2 offers everything Creo View offers and pretty closely matches what is available in the Unity3D hierarchy, from what I've seen. You assign one or more meshes to a node element (rinse and repeat) and you arrange them in parent and child configurations. Transform matrices of parents pass down to children, and so on. If a child node has its own matrix specified, it is multiplied with the computed parent matrix, resulting in the final transform applied to whatever mesh(es) are defined in that node. Exactly the same as what Unity or Unreal do. So the scene structure criticism doesn't hold water for me.

Same for materials. glTF v2 supports PBR materials with or without texture maps, normal maps, occlusion maps, and emissive maps. All of those are optional, and you can set things like color, normals, opacity, and emissivity without textures if you want. Most glTF tools out there, including a lot of open source things like Blender importers and exporters, assume that there will always be textures to define those material properties, and completely ignore non-texture material parameters like baseColorFactor, which is incredibly annoying because NONE of the models at my work use textures for anything. Things like lettering on placards are modeled, not textured, and no UV maps exist for anything. This will be an interesting/infuriating problem when it comes time to view these models in a game engine in a production setting. Optimization in game engines relies on things like textures and UV maps.

To that end I have a to-do item on my list to simply generate small map images, give each material one pixel on the maps, and make all UV coords for a mesh with a given material point to that pixel. That will probably work, and it will probably appease the half-implemented open source stuff I've come across. Runtime lightmaps will not work this way, however.

I can't speak to the animation features of glTF, since I'm dealing with CAD data. Our models are individual parts assembled into whole assemblies or vehicles at runtime, and not stored as whole models.
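The parent-to-child matrix composition described above is easy to sketch. Here's a minimal illustration of how a glTF-style scene graph resolves world transforms; the node dictionary is a hypothetical structure for clarity, not the actual glTF JSON layout:

```python
import numpy as np

def world_matrices(node, nodes, parent=np.eye(4)):
    """Walk a node hierarchy, composing each node's local matrix with
    its parent's world matrix, the way glTF scene graphs are resolved."""
    local = node.get("matrix")
    world = parent @ local if local is not None else parent
    out = {node["name"]: world}
    for child in node.get("children", []):
        out.update(world_matrices(nodes[child], nodes, world))
    return out

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# A parent translated by (1, 0, 0) with a child translated by (0, 2, 0):
# the child's final (world) translation comes out as (1, 2, 0).
nodes = {
    "root":  {"name": "root",  "matrix": translate(1, 0, 0), "children": ["child"]},
    "child": {"name": "child", "matrix": translate(0, 2, 0)},
}
worlds = world_matrices(nodes["root"], nodes)
```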
 > Optimization in game engines relies on things like textures and UV maps.

I think for lettering and similar things (high-complexity but essentially flat) you're right. You could probably use quads here to give you easy UV mapping and script the creation of textures. What else have you modelled which would ideally be done via textures in a real-time engine?
 Embossed letters that are geometry in CAD would be much better in a realtime engine as a normal map, lots of things like that.A full fidelity CAD model for something like a bolt can easily be two million polygons if the draftsman isn't careful.
 Any mirror for the WebGL demo? It's down.
 Vulkan might be included on Android, but good luck getting a version that actually works across all devices.
 AFAIK Vulkan is still optional in latest Android.
 Wow. It's so hard to find expert-level documentation like this all in one page these days. I'm usually stuck piecing together information from a couple dozen (often poorly presented) websites. The topic is fantastic, but the presentation deserves some applause too!
 Glad you like it. We found it hard to piece together all the information we needed and after reading hundreds of papers and presentations we figured that writing such a document would be helpful to beginners like us (in this field that is).
 I think my render in Chrome is pretty broken. Here's a screenshot of the top of the page: https://drive.google.com/open?id=1YVtcV-J0JrDyW2pJ6vwgEDsgjO...
 Looks like the page wasn't able to load all of its resources. Try to refresh the page until it works. I should make an offline version that doesn't require online processing.
 It took multiple reloads for it to work for me on Android Chrome, and it was almost unreadable for a couple of minutes (MathJax processing? Tried KaTeX?) due to an enormous right margin. The margin and font(?) changed multiple times. Is there a PDF version?
 Sort of! The document uses a JS tool called MarkDeep to convert extended Markdown into styled HTML. Just look at the source; it's 99% plaintext Markdown. If you read the MarkDeep docs, you'll find that it has a feature to convert the source into PDF instead. I admit I haven't tried this, though. If it were up to me, I'd do the transform offline and just serve the static HTML instead. And sure, a link to the PDF just to be nice :)
 We just serve straight from the source tree so it's not out of date :)
 I made an offline version by opening it in Firefox and printing to PDF. Seems to look fine.
 Maybe add onerror handlers to the JavaScript and CSS resources and show a warning if something fails to load? Why is it still so hard to design a page that loads reliably?
 The PBR tutorial series on LearnOpenGL.com covers a lot of the concepts implemented here like physically based BRDFs and HDR lighting, if anyone is looking for more resources like this.
 If like me you actually enjoy tech talk videos, "SIGGRAPH University - Introduction to Physically Based Shading in Theory and Practice" is great. PDF version is linked at the bottom of your learnopengl page. https://www.youtube.com/watch?v=j-A0mwsJRmk
 Ok this is way fun and drags me back to the days I was diligently trying to build a 3D rendering engine from first principles (sort of, I had Glide to put stuff on the screen).

The renderer is perceptually better than the one that is included in my CAD package (TurboCAD) for pretty much all materials. So I'm guessing they will snarf it and dump the proprietary renderer and replace it with this stuff if they can.

But the really interesting idea that popped into my head was this: could Google offer 'rendering as a service'? Specifically, they have a zillion machines, many of which are doing nothing important, they have this rendering package, and they have a scheduler that can put things on any machine. Imagine a service where you sent them a suitably detailed model description and a set of lights; could they send you back a rendered image? Could you parameterize changes to the model description over time so that they could send you images in time based on your models? Could they do, say, 480 renders 'free' per month and then maybe $0.19/render over 480 in a single month?

Could you create a studio of modellers who would design models, animators who would animate those models over time, and a director who would compose those animations into scenes? This is basically Pixar without the expensive renderfarm. Does that enable new studios to bring their own vision to life? Does it offer a cost-effective service to places like Pixar which allows Google to make money on otherwise idle resources? Curious minds want to know :-)
 Okay that is pretty close, and acquired by Google in 2014. So presumably it is possible to be a small CGI shop and use this as your back end. Now I'm wondering if the economics pencil out. Clearly there was something that motivated Google to buy them.
 They work just fine; the product is one of many that's used by shops now. GCP even recently launched an LA region and had an entire day dedicated to the local VFX shops. Nothing stops you from just running a bunch of VMs, which many do, and they have rolled out Filestore NFS to make shared disks easier too.
 Is that a recent shift? I remember talking to someone from a VFX shop a few years ago, and they made it sound like back then anything cloud was a no-go due to their customers requiring material to stay in-house for fear of leaks.
 Yes, smaller studios were early, but now the majors like Sony Imageworks are all in. The cloud is a great fit for most of their rendering jobs, and security is no different (if not better handled by the cloud). Here's a session from GCP Next 2018 on cloud render farms: https://www.youtube.com/watch?v=ODOJ3UbnV6Y
 Recent enough - at a Google event earlier in 2018 it was mentioned that GCP was now compliant with most (if not all) of the major studios’ security requirements.
 OTOY has been doing this for years, and now working on a blockchain approach to utilize spare GPU cycles from third parties who have cycles to spare.
 Of course. With Fusion 360 you can have your stress calculations done in the cloud.
 Lagoa [0] used to do cloud based rendering as a service. But Autodesk bought them out and shut them down.
 I don't think a large rendering farm is as expensive as the huge number of animators and artists talented enough to saturate the render farm. However, I have no knowledge of the industry, just a gut feeling.
 I don't know how useful it really is at this point, but Golem* currently offer rendering as a service.
 I've used Golem and it works well and is cheap. Only issue is buying GNT which is a pain.
 I'm still waiting for someone to do a completely physically based renderer and simulate the whole EM spectrum, not just the 3 colors we can see. Then you could model chromatic aberration, the difference between fluorescent and incandescent lighting, and prisms... You could call it a really, really physically based rendering engine.
 Spectral renderers are pretty common. A naive implementation just assigns a random wavelength to the ray when tracing, which results in surprisingly little extra code to achieve all spectral effects such as aberration.

The downside is that this is slow. An RGB ray traces the whole spectrum for one ray (approximated by an RGB triplet); the single-frequency ray does not. If you try to send a binned spectrum in the ray, what will happen is that as soon as you hit something refractive, the wavelengths will go in different directions, so you no longer have a full spectrum. All the complexity in a spectral renderer over an RGB one lies in handling performance: how to avoid the performance problems when you lose coherence. Apart from that, it's actually often simpler than an RGB renderer, despite being able to handle all those fancy spectral effects.

For a nice "reference" or "tutorial" one that isn't plagued by the performance hacks needed to make it viable for large scenes, look at this one: https://github.com/TomCrypto/Lambda
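A toy version of that naive scheme, assuming a simple Cauchy dispersion model (the coefficients below are illustrative, not measured glass values): each ray carries one sampled wavelength, and Snell's law with a wavelength-dependent index gives chromatic aberration essentially for free.

```python
import math
import random

def cauchy_index(wavelength_nm, A=1.5046, B=4200.0):
    # Cauchy's equation n(lambda) = A + B/lambda^2; shorter wavelengths
    # see a higher index, which is what spreads white light into a spectrum.
    return A + B / (wavelength_nm ** 2)

def refract_angle(theta_in, wavelength_nm):
    # Snell's law (air -> glass): sin(theta_out) = sin(theta_in) / n(lambda).
    n = cauchy_index(wavelength_nm)
    return math.asin(math.sin(theta_in) / n)

# The "naive spectral renderer" step: one random wavelength per ray.
wavelength = random.uniform(380.0, 780.0)  # visible range, in nm
angle = refract_angle(0.5, wavelength)
# Blue (400 nm) bends more than red (700 nm), i.e. a smaller exit angle.
```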
 Wow those diamonds and sky. I think I might be fooled by the diamonds
 If I didn't know, I'd think the diamonds are a photo.
 And if I didn't know better, I would still believe all the images in IKEA catalogues and all the shiny cars in car commercials were real. They are not. IKEA started rendering product images about a decade ago, and the portion of rendered images is ever increasing. I don't know of any recent car commercial in which the product was filmed instead of rendered. The interesting question is: who even suspects?
 The Mill Blackbird is one of the most profoundly weird objects in the modern world.
 There are several spectral renderers out there, such as Weta Digital's Manuka. I don't know if they bother with parts of the EM spectrum that are outside of the visible range though. I imagine UVs can be important to model in some situations.
 To model fluorescence accurately, it's necessary to handle the non-visible spectrum in some way.
 Handling the non-visible spectrum isn't much of an issue; after all, the wavelength used when path tracing can be whatever (some have used path tracing for sound). Though getting realistic data for non-visible parts may prove tricky depending on the material.

IIRC the issue is that if you can ignore fluorescence, then reflection is simply an element-wise multiplication of the incoming light at the wavelengths under consideration[1] with the reflection coefficient of the material at those wavelengths. With fluorescence, that turns into a matrix multiplication, with obvious speed implications. If only a single wavelength is considered at a time, then the wavelength must change upon reflection, otherwise there's no way for the fluorescence to occur. That can also have performance issues; for example, conversion coefficients to/from regular color spaces need to be recalculated.

At least that's my understanding, having worked on a physically-based renderer which did do spectral rendering but not fluorescence.

[1]: using, for example, binned wavelengths or stratified wavelength clustering.
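A small numeric sketch of that element-wise vs. matrix distinction, using three made-up spectral bins [UV, green, red] (the numbers are purely illustrative):

```python
import numpy as np

# Incoming light energy in three illustrative bins: [UV, green, red].
incoming = np.array([1.0, 0.5, 0.5])

# No fluorescence: reflection is an element-wise multiply with the
# material's reflection coefficients at those wavelengths.
reflectance = np.array([0.1, 0.8, 0.8])
out_plain = incoming * reflectance

# With fluorescence: a re-radiation matrix. Entry (i, j) is the fraction
# of energy arriving in bin j that leaves in bin i; the off-diagonal
# term below re-emits absorbed UV energy as green.
reradiation = np.array([
    [0.1, 0.0, 0.0],
    [0.6, 0.8, 0.0],   # UV -> green re-emission
    [0.0, 0.0, 0.8],
])
out_fluor = reradiation @ incoming  # green bin now exceeds plain reflection
```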
 I think Mitsuba can be compiled to use (binned) spectra instead of RGB.
 When is Wenzel going to release Mitsuba 2? The current version of Mitsuba is in bug-fix-only mode. :( I loved Mitsuba's Python bindings; they made it super easy to programmatically do cool renders.
 I've long wondered how the world would appear if we had a different set of cone cell photoreceptors, and whether a full-spectrum engine could be used to simulate that by projecting a different set of spectrum ranges to our eyes. It would still be "seeing in RGB" of course, but entirely different aspects of the world would become visible. I can already do this when converting an image to grayscale, giving different luminance weight to the different channels. It would be really interesting to arrange weights on the entire EM spectrum. (I'm reminded of some company that developed a pair of glasses for color-blind people which enabled them to see "color". I think it was a similar principle.)
 Multiple easy-to-understand things would happen:

- Colors that were previously the same could now be perceived as different (a red car, a red flower, red light from an LCD screen, etc.)
- White / grey would still exist but be less common, as a lot of white / grey light would now be close to a new receptor
- Probably more distinguishable shades in the rainbow
 > White / grey would still exist but be less common, as a lot of white / grey light would now be close to a new receptor

Granted, our trichromatic vision is likely adapted to the scenery of this earth. We are most sensitive to hues of green, for instance. That being said, all things equal, wouldn't new grays appear with the new cone sensitivities, just as the old grays would now yield a color response?

> Probably more distinguishable shades in the rainbow

What I'm curious about is those pure hues we perceive that are in fact the result of composites of wavelengths. Those equalities would break up. Two things that were once, say, equally orange could suddenly have different hues in the new mapping.
 Oh yes, maybe the previous rainbow would not be found in the new rainbow; previous colors would maybe be changed. That's a good question... I'm even wondering if we all "see" / "perceive" the same colors / have the same response to the same hue (i.e. is my blue the same as yours?!).

> wouldn't new grays appear

Maybe. My wild guess is just that gray would be more rare, if we define gray as the neutral color (desaturated = a mix of every color, a kind of flat spectrum).
 > Oh yes maybe the previous rainbow would not be found in the new rainbow, previous colors would maybe be changed that's a good question...

Well, the prism/rainbow is the pure wavelength spectrum, so that's one of the things that probably would turn out the same in the new mapping.

> I'm even wondering if we all "see" / "perceive" the same colors / have the same response to the same hue (i.e. is my blue the same as yours ?!)

The sensitivity range does vary between individuals. Some women are in effect tetrachromats because they have two sets of, I think it is, red cones that are sufficiently wide apart. I find this very fascinating.
 Interesting didn't know that, thanks
 This looks very good.

Ironic aside: notice the "Processing math" message on the lower left when you first open the page? That's MathJax, a huge JS library, chugging away because Chrome doesn't support MathML. Try going to the page in Firefox and you'll see that it processes "the math" a hell of a lot faster than Chrome. Firefox supports MathML, and MathJax probably generates MathML for it.
 No it doesn't, it renders it as a bunch of spans. MathML is basically a dead standard for the web - browsers either don't support it or support is incomplete and with bugs which makes it useless.
 Oh, I actually didn't realize how much of the page I was missing until I opened this in Firefox. My page is very broken in Chrome 67.0
 Really beautiful. Funny how high tech 3D engines get better at making things that look low tech and run down.
 It's kind of like Moravec's paradox but for graphics. :)
 Also, there should be a warning in case you're running tab heavy on a lower end machine, or just out of courtesy. The web page is 37 MB (saved as an MHTML in Chrome -- see your About:Chrome flags). It's yuge. MathJax alone is usually >1 MB of JavaScript.
 1. What's the likelihood of seeing a Unity wrapper for this in my lifetime? I imagine uptake would be massively increased if it plugged into Unity or Unreal.

2. How does it compare with what OTOY are working on for realtime? I imagine they are focused on quality over performance and aren't even considering mobile.
 Unity and Unreal both support Android and PBR so I don't see an obvious benefit to being able to use this alternative renderer. Because of the license and the fact that it's a newish codebase written in simple C++ I can see it being popular as a starting point for in-house engines.
 PBR is already enabled in most game engines, it's even supported out of the box by Three.js
 Why would Unity use Filament? They have a PBR solution that works for them and is in active development. OTOY's OctaneRenderer is an unbiased path tracer. It's not comparable to Filament, two different use cases.
 Does anyone know how this document relates to the rendering done in the 2016 Ratchet & Clank game? That game has always astounded me and I've never been able to find a deep and comprehensive explanation of how its rendering works.
 They certainly use some of these techniques. Here's a bit of convo about the code and art of R&C:

https://youtube.com/watch?v=Y65h1aO-xps

https://youtube.com/watch?v=zpvXB4yWvyA
 It's similar techniques, just better because they can afford more effects on a PS4 :)
 It should provide adequate information, as R&C 2016 uses cartoonish, stylized PBR, and many concepts/methods are common among different engines. Though most PBR engines seem to target photorealism, physically based materials and lighting make even non-realistic models and environments look better.
 Had to do this:

    CC=clang CXX=clang++ ./build.sh -j release

to get it to compile (it seems cc pointed to gcc, which did not understand some apparently clang-specific flags).
 So will this replace Escher? Or do they serve different purposes (UI vs 3D)?
 Such beautiful documentation!
 There is no such thing as physically based rendering, and the sooner the world learns this the better. It’s a marketing term, nothing more.

Here’s food for thought: what does it mean to multiply two colors? Nothing. It’s meaningless. It’s an approximation that happens to look good. But if you’re going to claim your engine is physically based, you can’t use it, because multiplying two colors is not based in physics.
 You should look more into how physically based rendering works.You do actually approximate the energy that reaches the different surfaces (in Watt / (m² sr)) for all wavelengths of light using Monte-Carlo integration. You can send probes for different wavelengths separately if you care about the differences in the refraction index of your material by wavelength.Then you integrate the results by folding it with the corresponding color response of the primaries of sRGB or whatever color space you want to use. The color representation can be accurate, though screens etc are obviously limited.
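That final folding step can be sketched roughly like this. The Gaussian bumps below are crude stand-ins for the CIE color-matching functions (the real curves are tabulated; these shapes and numbers are only illustrative):

```python
import numpy as np

wavelengths = np.linspace(380.0, 780.0, 81)  # nm, 5 nm bins

def bump(mu, sigma):
    # Gaussian stand-in for a tabulated response curve.
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Illustrative substitutes for the CIE x-bar, y-bar, z-bar curves.
x_bar, y_bar, z_bar = bump(600, 40), bump(550, 40), bump(450, 30)

def spectrum_to_xyz(power):
    # Discrete version of X = integral of P(lambda) * x_bar(lambda) d(lambda),
    # and likewise for Y and Z: fold the spectral power with each response.
    dl = wavelengths[1] - wavelengths[0]
    return np.array([np.sum(power * b) * dl for b in (x_bar, y_bar, z_bar)])
```

A spectrum peaked in the blue ends up with a large Z component, one peaked in the red with a large X component, which is the whole point of the folding.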
 This is unrelated to the claim that multiplying two colors is meaningless.
 "Multiply colors" in pure physics is ill-defined, but we're talking about a rendering engine. Engineers are allowed to make approximations whenever they want, and approximations aren't restricted from introducing new concepts/terminology.

The closest thing to "multiply two colors" in physics would be taking a product state:

|ψ⟩ = |ψ1⟩ ⊗ |ψ2⟩

Here, we are expanding a Hilbert space to include both unentangled particles (color == photon).
 > Engineers are allowed to make approximations whenever they want, and approximations aren't restricted from introducing new concepts/terminology.

Well, yes. But in this case, the approximations are completely made up. They're not based on physics.
 The RGB vectors are more or less based on biology, they're just a toy model[0]. Plus, physics is nothing more than a series of better and better approximations to things we observe -- it's all made up at some point. I'm guessing you're upset that the "made up" stuff in this case leads to no new physical insight and deviates from what we know to be more consistent with observation. That doesn't mean that "made up" stuff is not useful. Toy models are everywhere, even in fundamental research.There's a very good reason why people don't solve the kinematics of a tennis ball as if it were a 10^23 quantum-many-body problem. If you're constrained on time and money, the smartest thing to do is to get the closest result you can in the smallest time with the least amount of money.
 Where do you multiply colors in a PBR engine?
 "Physically based" in context of rendering means just that the light calculations preserve energy of light. So there is such thing actually.
 There is no clear definition, further adding to the confusion.https://en.wikipedia.org/wiki/Physically_based_rendering
 That is a description of unbiased rendering.
 This doesn't seem to be a substantive criticism. Abstractions, even those used for physics, are adopted only as far as they are convenient; the ones physics itself uses are no different. Similarly, what does it mean to add colors? That's kind of a non sequitur; it depends entirely on what abstractions you are working with.
 If it isn't a substantive criticism, then it should be easy to address.The claim is precise: It's meaningless to multiply two colors, yet this is the core abstraction that we use in our rendering engines.New ideas are needed to advance the state of the art.
 Arguably you never actually multiply a color by a color. Rather, you multiply a set of light intensities at different wavelengths by a set of coefficients of reflection for those wavelengths. It's just that the reflection coefficients are often visualized by the color of white light after reflection.
 That is as relevant as saying it is meaningless to multiply two apples. Well, yes. But so what? If the definition of physically based rendering is "multiplying colors", please establish that. My understanding is that it has a legitimate meaning, but yes, you can do some nonsensical things with it.
 "Color" in 3D engine means just a 3-component vector. It's not meaningless to multiply those component-wise at all. Nobody is multiplying the color of an apple by the color of a dog if it's what you meant.
 If it has meaning, then it should be easy to define what it means.In fact, multiplying two RGB colors is about as meaningful as multiplying the color of an apple by the color of a dog. It's certainly not rooted in physics.
 It is, and it's defined in my previous comment - it's per-component product i.e. {a,b,c} * {x,y,z} = {ax,by,cz}. It's used to compute attenuation of light separately for different bands. And attenuation has a clear and obvious meaning - it's a portion of light energy that passes through an interaction of light and medium.
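In code, that per-component attenuation is just (a toy sketch):

```python
def attenuate(light, reflectance):
    """Component-wise product: each channel of `reflectance` is the
    fraction of that band's energy surviving the surface interaction."""
    return tuple(l * r for l, r in zip(light, reflectance))

# A surface that passes half the energy in every band halves each channel:
out = attenuate((1.0, 0.5, 0.25), (0.5, 0.5, 0.5))  # -> (0.5, 0.25, 0.125)
```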
 And yet, it has no basis in reality. Point out the physics equivalent of multiplying two colors. Those bands are arbitrary, defined by human color receptors. If we were dogs, would we be arguing that lighting should be defined as {a,b} * {x,y} = {ax,by}? Dogs can only see blue and yellow. So what's special about r, g, b, and why do we multiply them together? There's nothing special about them, and it's arbitrary and misleading.
 Are you arguing that the renderer should model a continuous spectrum instead of RGB? Some can, but it's obviously much more computationally intensive. Outside of some niches, it's also not particularly useful, because the final product will need to be in RGB anyway if it's to be displayed on a monitor or TV. Even if you were to directly inject the renderer's output into the retina, there are lots of wavelength distributions that will produce identical percepts, so it's not a clear win even there.
 It's actually not that much more computationally intensive, even doing texture RGB -> spectral wavelength conversion for each BSDF / light evaluation.But you do get a fair bit more spectral noise in the image, but this can be handled by handling multiple wavelengths at once with SIMD and importance-sampling the wavelengths to use (Hero wavelength sampling).Where it does actually make a lot of sense over RGB triplets is for volume scattering / absorption (so more accurate hair and skin rendering for example) - calculating the mean free path with wavelengths is much more accurate than using an RGB triplet which is an approximation - however, this means to enjoy this benefit, you need to store the volume coefficients in wavelength bins (instead of RGB values), which uses quite a bit more memory.
 Again, I just pointed it out in the previous message: it's attenuation of light. If you have a surface which absorbs half of the light, then you multiply whatever light energy came in by 0.5 to get the light energy that escapes the surface.

> Those bands are arbitrary, defined by human color receptors.

Sure. And if we had monochrome vision then we could have used just scalars for light. If you are arguing against the 3-component color model, then you should not have been baselessly attacking such an uncontroversial thing as attenuation. There are engines which use more points to represent spectra, yet they still use multiplication.
 This is mistaken. A correct renderer would look correct to whatever creature was looking at it. If we had monochrome vision, we couldn't simply use scalars because it wouldn't look right for those who could see more.And how could it be otherwise? Real life looks correct to any creature that looks at it. And our renderers supposedly model real life.This is a contradiction, and rather than trying to disprove what I'm saying, it's worth chasing down the logical fallacy.
 > A correct renderer would look correct to whatever creature was looking at it.

Impossible. RGB is just a 3-bin spectrum. It's the minimum number of bins needed to represent any color humans can perceive, because we have 3 color sensors that are at least partly independent. You could always imagine a hypothetical creature with yet more sensors, until you need a bin for every possible wavelength of light. Eventually you'd effectively be simulating individual photons, and the world has too many atoms for us to simulate it at that level.

We sometimes use larger spectra than RGB because, while humans may not always be able to perceive the difference between two spectra of equivalent color, those spectra sometimes interact differently with other things. For example, white light formed by the full spectrum of visible light will separate into a rainbow when passed through a prism. White light formed by red, green and blue lasers may be indistinguishable when viewed directly, but will only separate into red, green and blue when passed through a prism.

The number of bins we use for that sort of simulation depends on how many we need to get a result that is a satisfactory approximation of reality. All models are wrong; the question is always whether they are good enough for our purposes.
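The metamerism point can be made concrete with a toy 3-bin "sensor" (the bin edges and values here are made up, not real CIE data, but the effect is the same in principle):

```python
# Two very different spectra projecting onto the same coarse 3-bin sensor.

# Wavelength samples (nm):
wavelengths = list(range(400, 701, 10))

# (a) flat "full spectrum" white:
flat = [1.0 for _ in wavelengths]

# (b) three laser lines at 450, 550 and 650 nm, zero elsewhere
#     (amplitude chosen so the bin sums roughly match the flat spectrum):
lasers = [10.333 if w in (450, 550, 650) else 0.0 for w in wavelengths]

def to_rgb(spectrum):
    """Sum energy into three coarse bins: B 400-499, G 500-599, R 600-700."""
    r = sum(e for w, e in zip(wavelengths, spectrum) if w >= 600)
    g = sum(e for w, e in zip(wavelengths, spectrum) if 500 <= w < 600)
    b = sum(e for w, e in zip(wavelengths, spectrum) if w < 500)
    return (r, g, b)

print(to_rgb(flat))    # roughly equal energy in each bin
print(to_rgb(lasers))  # nearly the same triplet, from a totally different spectrum
```

Both triplets look "white" to the 3-bin sensor, yet a prism (or any wavelength-dependent interaction) would treat the two spectra completely differently.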
 > This is mistaken. A correct renderer would look correct to whatever creature was looking at it.

I am not discussing "correctness", whatever that means; I am arguing against your assertion that "multiplying colors makes no sense". Are we done with that?
 You can come up with whatever mathematical definitions you want. When I say "it makes no sense," I'm speaking very precisely: there is nothing in nature which corresponds to the idea of multiplying two RGB colors together. In that context, it makes no sense to multiply two colors together.
 There is: energy attenuation at different frequencies. Look at an object of any color under artificial light, then take it outside. Now the light hitting the object's surface has orders of magnitude more energy, yet you still see it as the same color, because its surface attenuates the incoming light in the same way and your brain detects that attenuation as "color". Speaking of the brain: "color" does not exist in nature at all; it's just an effect of the human brain analyzing the image projected on the retina. So let's agree that there is nothing in nature which corresponds to the human concept of color.
 > what's special about r,g,b

Simply that our monitors reproduce images using red, green and blue light.

A PBR engine models the entire process of light from a scene falling into a lens, measuring the light that hits the imaginary rectangles that are the pixels of the output image. A PBR engine can model the light internally as a continuous spectrum using Monte Carlo integration, tracing paths of weighted-random wavelengths at each pixel. You could stop here if you just wanted physical accuracy; you've got the spectral data. But if you want to see it, you'll have to get it onto a computer monitor somehow. So you model a photo-sensitive medium using an exposure function, and apply colour-space transforms to turn the spectrum into just the three RGB wavelengths, in a way that has been measured empirically to be perceptually close to the original spectrum (look up how we got the CIE standards). This is all science so far.

If you don't care about the full spectrum but only the RGB (or why not CMYK) result, there are optimisations and approximations you can make, because in most cases you don't need to simulate the full spectrum. Did you ever stop to think that RGB screens might look weird to dogs as well? If you somehow built a dog-monitor with only blue and yellow, you could optimise even further. It would be a lot harder to get accurate colour-space data like CIE from dogs, though. If you wanted to simulate a black-and-white photograph, it would be easier again.

> and why do we multiply them together?

I actually don't think we do, in modern PBR engines. Multiplying two colours together is done in simpler 3D engines because it models reflection: one RGB triplet represents an approximation of the light spectrum, and the other represents the reflective properties of the material. In everyday language we use the word "colour" for both, but they are actually of a different "type", if you will: the additive type (light) and the subtractive type (material). What we model is the transformation of our photon path (additive) by the material of the surface it hits (subtractive). In the simpler model this happens to equate to an element-wise multiplication (not "just" a multiplication). However, in modern PBR engines the reflection operator is a lot more complex than that: the material is no longer simply represented by an RGB triplet, and the operation between the photon packet and the material isn't a multiplication.
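The spectrum-to-RGB step can be sketched like this. The Gaussian curves below are crude, made-up stand-ins for the tabulated CIE 1931 colour-matching functions (only the XYZ-to-linear-sRGB matrix is the real, standard one):

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Rough Gaussian approximations of the CIE x-bar, y-bar, z-bar curves
# (centres, widths and amplitudes are illustrative, not official data):
def xbar(w): return 1.06 * gauss(w, 599, 38) + 0.37 * gauss(w, 442, 16)
def ybar(w): return gauss(w, 557, 47)
def zbar(w): return 1.78 * gauss(w, 446, 20)

def spectrum_to_xyz(spd, wavelengths):
    """Integrate a spectral power distribution against the matching curves."""
    X = sum(p * xbar(w) for p, w in zip(spd, wavelengths))
    Y = sum(p * ybar(w) for p, w in zip(spd, wavelengths))
    Z = sum(p * zbar(w) for p, w in zip(spd, wavelengths))
    return X, Y, Z

def xyz_to_linear_srgb(X, Y, Z):
    """Standard XYZ -> linear sRGB matrix (this part is the real thing)."""
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

wavelengths = list(range(380, 781, 5))
flat_white = [1.0] * len(wavelengths)  # equal-energy spectrum
print(xyz_to_linear_srgb(*spectrum_to_xyz(flat_white, wavelengths)))
```

A real renderer would use the tabulated curves and normalise the integrals, but the shape of the pipeline (spectrum -> XYZ via matching functions -> RGB via a matrix) is exactly this.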
 The correct physical interpretation can be found by setting up the model and computing the integrals; it's probably not "neutral" light absorption but a variant of it.
 I agree that we should study the "physically-based" adjective in the name of the rendering more in depth, but what exactly is the argument against multiplying colors? Isn't it equivalent to light absorption?
 Ahhhhhhhhhhh - I will be DVd to death for this but:I have a screen (it happens to be about 18" wide in this case). Why on earth is the text in a stupidly thin column in a small font? It looks like TeX's daemonic alter thingie got into the render process somewhere or perhaps someone forgot the other two columns or wanted to torture a webby reader with multiple columns (mmmm ArXiv scrolly uppy n downy pdf) and lost interest.Anyway ... it looks crap in my browser unless I hit CTRL(num)+ a few times - 200% works.
 That is done on purpose. Ideally, one's eyes' horizontal sweep should span less than 30 degrees when reading; WPM drops if the eyes or neck have to do too much back-and-forth work. The brain doesn't require the eyes to hit every single word to extract the full text, thanks to peripheral vision, but this optimization is not as effective when the text lines are too wide.

I am surprised you didn't mention the FOUC, though. For a while you see raw, minified, un-styled text on this page.
 Would you mind sharing a screenshot somewhere? I'm just using Markdeep's default CSS and the font size seems reasonable on all my screens. I'd be happy to try and tweak it, though.
 I'm not sure what has gone wrong (if anything) - I'm a sysadmin not devops/webby minded.This is what I see:https://nextcloud.roseandjon.gerdes.co.uk/nextcloud/index.ph... - that should yield two images.Fonts/typefaces - lovely. Layout - a bit limited.
 There is a more advanced way to share screenshots: just embed them in an Excel file and share it on SourceForge.
 I did actually chuckle when I read your comment but what is wrong with using the usual tools at your disposal? All I did was slap a couple of piccies on the end of a WebDAV share.
 :)) Nothing, other than the format and the experience of downloading a document to be opened with a desktop application. There are a lot of one-click image-sharing services out there. I use Windows key+Shift+S to take a screenshot and then go to imgur.com, which supports reading images from the clipboard; bam, instant URL for my pic.
 Thank you! This is exactly the same render that I get. I could try and widen the column. I've grown to like the narrower width when reading graphics papers like this but I understand it's not for everybody.
 I am not 100% sure why people are discussing their individual styling preference. Keep what you have. Thank you for sharing.
 Keep it narrow please. It's great.
 It looks good and I had no trouble reading it on my phone. It seems to follow the rule I learned of 7-10 words per line.
 The text seems perfectly sized for reading on my screen. At 200% it looks like one of those clownish Bootstrap homepages and I'd have to scroll every 5 seconds to read it.
 Agreed. Looks fine here at 27" 2160p (no scaling) and on my 13" 900p macbook air
