Introducing the Distance Field Generator (qt.io)
49 points by jcelerier on Oct 19, 2018 | 14 comments



My personal feeling is that the world is moving on from distance fields, and the state of the art is doing vector rasterization on the GPU. There are a few approaches to this, and currently one of the most promising is pcwalton's Pathfinder. Distance fields require lots of memory, have an expensive preprocessing step (and doing that ahead of time, as in this work, limits flexibility), and don't render very thin fonts well. GPU vector rasterization doesn't suffer from any of these limitations, but it does of course require more sophisticated GPU programming.
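For anyone who hasn't touched them, the runtime half of a distance-field glyph is just a texture fetch plus a threshold, which is also where the thin-font problem comes from. A minimal CPU-side sketch of that step (my own illustration with made-up names, not Qt's code):

    #include <algorithm>

    // One pixel of distance-field text, boiled down: the texture stores the
    // distance to the glyph outline remapped so that 0.5 sits on the edge, and
    // the shader turns the filtered sample into coverage over ~one screen pixel.
    float glyphCoverage(float sampledDistance,      // bilinear fetch from the SDF texture, in [0, 1]
                        float screenPixelsPerUnit)  // how many screen pixels one field unit spans
    {
        float d = (sampledDistance - 0.5f) * screenPixelsPerUnit;  // signed distance in screen pixels
        return std::clamp(d + 0.5f, 0.0f, 1.0f);                   // ~1px anti-aliased ramp
    }

    int main() {
        // At a hairline stroke the stored field barely clears the 0.5 edge value, so
        // filtering at small sizes can push samples back under the threshold -- which
        // is why very thin fonts tend to break up with SDFs.
        float faint = glyphCoverage(0.52f, 16.0f);  // ~0.82, and neighbouring texels may drop to 0
        float solid = glyphCoverage(0.80f, 16.0f);  // clamps to 1.0
        (void)faint; (void)solid;
    }

The stored field also has to resolve the thinnest stroke you care about, which is exactly where the memory and preprocessing cost comes from.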


Indeed, the most recent example I've seen of this is the Slug library: http://sluglibrary.com/

Presentation slides: http://terathon.com/i3d2018_lengyel.pdf

A paper: http://jcgt.org/published/0006/02/02/


Impressive! IMO banding + symmetric band optimization is the coolest idea there.


Pathfinder 2 is now mostly a CPU approach with very little done on the GPU, because you really want to avoid late-Z. I don't know of anybody else doing state-of-the-art GPU vector rasterization. Slug has the same pitfalls as Pathfinder 1 (late-Z & overdraw) and also requires an expensive preprocessing step.

I don't think distance fields are the correct approach either, though the work that Adam Simmons is doing to analytically solve beziers on the GPU is interesting: https://twitter.com/adamjsimmons/status/730911372532322304


I actually moved all of that to the CPU (parallel, SIMD) and am using it in Ultralight [1] to generate high-precision SDFs for font glyphs and small paths. I'll release the GPU-based implementation when I have more time.

If you're interested in the details of the GPU-based implementation, I pre-sort the path data into bins on CPU, encode it to a dynamic texture, traverse the structure in shader using a modified BVH, and calculate the exact distance to individual Beziers using root finding, which I open-sourced here [2]. It's very fast and avoids overdraw through the use of a low-res stencil mask also encoded into texture. There's of course a lot more nuance than that (fill calculation, level-of-detail, etc.) but that's for a long blog post.

- [1] https://ultralig.ht

- [2] https://www.shadertoy.com/view/ltXSDB
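Since the root-finding part is the bit people usually ask about, here's roughly what it looks like on the CPU. This is a sketch of the idea with my own names, not the shadertoy code: that one solves the cubic analytically in GLSL, whereas this finds the same root numerically to stay short.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec2 { float x, y; };
    static Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
    static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
    static Vec2 operator*(float s, Vec2 a) { return {s * a.x, s * a.y}; }
    static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

    // B(t) = (1-t)^2 p0 + 2t(1-t) p1 + t^2 p2
    static Vec2 bezier(Vec2 p0, Vec2 p1, Vec2 p2, float t) {
        float u = 1.0f - t;
        return (u * u) * p0 + (2.0f * u * t) * p1 + (t * t) * p2;
    }

    // Unsigned distance from p to one quadratic bezier segment.  The nearest
    // parameter satisfies (B(t) - p) . B'(t) = 0, which is a cubic in t: scan
    // for a bracket, then polish the root with Newton and clamp t to the segment.
    static float distanceToBezier(Vec2 p, Vec2 p0, Vec2 p1, Vec2 p2) {
        float bestT = 0.0f, bestD2 = dot(p0 - p, p0 - p);
        for (int i = 1; i <= 16; ++i) {
            float t = i / 16.0f;
            Vec2 d = bezier(p0, p1, p2, t) - p;
            float d2 = dot(d, d);
            if (d2 < bestD2) { bestD2 = d2; bestT = t; }
        }
        for (int i = 0; i < 4; ++i) {
            float t = bestT, u = 1.0f - t;
            Vec2 d   = bezier(p0, p1, p2, t) - p;
            Vec2 dB  = 2.0f * ((u * (p1 - p0)) + (t * (p2 - p1)));  // B'(t)
            Vec2 ddB = 2.0f * ((p2 - p1) - (p1 - p0));              // B''(t)
            float g  = dot(d, dB);                                  // zero at the nearest t
            float gp = dot(dB, dB) + dot(d, ddB);
            if (gp != 0.0f) bestT = std::clamp(t - g / gp, 0.0f, 1.0f);
        }
        Vec2 d = bezier(p0, p1, p2, bestT) - p;
        return std::sqrt(dot(d, d));
    }

    int main() {
        std::printf("%f\n", distanceToBezier({1.0f, 0.5f}, {0, 0}, {1, 2}, {2, 0}));
    }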


Ooh! Thanks for the shadertoy link. I'd been struggling with how to do my curved shapes for my evolver.

http://blag.fingswotidun.com/2016/05/evolved-images-using-sh...


Looks neat! Glad to help.


I'm most curious how you're generating a sharp SDF in real time. Are you just using the pseudo-SDF as recommended by Qin et al.?


Bit hard to explain without examples, but while traversing the beziers in the shader, I test whether we are equidistant to two or more beziers and do special handling to merge the two unclamped fields (normally we clamp 't' to [0, 1] when sampling the bezier distance, but for merging two fields we can project it infinitely and then select the closest).
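If I'm reading that right, the clamped-vs-unclamped difference is the same one you can show with straight segments. A tiny sketch of my own (lines instead of beziers just to keep it short, and the names are made up):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct V { float x, y; };
    static V sub(V a, V b) { return {a.x - b.x, a.y - b.y}; }
    static float dot(V a, V b) { return a.x * b.x + a.y * b.y; }
    static float len(V a) { return std::sqrt(dot(a, a)); }

    // Distance from p to segment a-b; clampT = false projects onto the infinite
    // line, i.e. the "unclamped field" of that primitive.
    static float segDist(V p, V a, V b, bool clampT) {
        V ab = sub(b, a), ap = sub(p, a);
        float t = dot(ap, ab) / dot(ab, ab);
        if (clampT) t = std::clamp(t, 0.0f, 1.0f);
        return len(sub(ap, V{t * ab.x, t * ab.y}));
    }

    int main() {
        // Two edges meeting in a right angle at the origin; sample a point out past the corner.
        V a{-1, 0}, corner{0, 0}, b{0, -1}, p{0.3f, 0.3f};
        float clamped   = std::min(segDist(p, a, corner, true),  segDist(p, b, corner, true));
        float unclamped = std::min(segDist(p, a, corner, false), segDist(p, b, corner, false));
        // Clamping t collapses both distances to the shared vertex (~0.424), which rounds
        // the corner off once thresholded; the unclamped min (~0.3) keeps the corner shape.
        std::printf("clamped %.3f  unclamped %.3f\n", clamped, unclamped);
    }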


> Slug has the same pitfalls as Pathfinder 1 (late-Z & overdraw)

Are you sure about that? I’ve looked at their demo with renderdoc. They’re rendering a small polygon for each character, i.e. they’re rejecting invisible characters, or parts of characters, early before the PS. And because they cut the corners of each character, the overdraw is minimized as well. The symmetric band optimization further reduces time spent computing the PS.


I think there's more to this than just the vector story. Since we're entering a high-DPI world now, the pressure for anti-aliasing and high-quality scaling has dropped significantly, a substantial phase shift from the old rules of screen typography where AA was the key feature and all formats needed to optimize for it. As a result, vector glyphs are ultimately competing not just against SDF but also against crudely downsampled 1-bit textures, and this latter approach, while very "brute force", has a relative simplicity of design and potential for high compression rates.


>> Since we're entering a high-DPI world now...

High DPI has never impressed me beyond a certain point. I use a 55-inch 4K TV as my monitor today. I find the huge desktop space far more valuable than having tiny little pixels. So even though it's 4K, the text is rendered at a similar DPI to a 24" 1080p monitor. Besides, what's good at low DPI should only be better at high DPI, right?


IMHO it depends on the use case:

- If glyph sizes are known ahead of time: use a character atlas, generated either on the CPU or the GPU. This can be optimized for power consumption.

- If glyph sizes are not known, or text is animated (rotation, etc.): distance fields are a good trade-off, as their shader is fairly trivial, so battery consumption should be quite good. On the other hand, runtime vectorization is only viable if you have a good GPU and are connected to a power outlet.


This post is about making a time-memory tradeoff by precomputing some intermediate steps in font rendering.

Whenever considering a time-memory tradeoff, one needs to consider both time and memory.

This post makes absolutely no mention of the additional memory use.
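For a rough sense of scale (my own back-of-envelope numbers, not from the post): an 8-bit distance field at 64x64 texels per glyph is 4 KB, so a 100-glyph Latin subset is around 400 KB, and a few thousand cached CJK glyphs already run into the tens of megabytes.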



