Many years ago (2011?) when I was working on PDF at Apple, and when jailbreaking iPhones was a thing, someone posted a PDF on a website. Just by reading the PDF on your device, it was jailbroken.
Apparently the attack was done this way: someone modified an open source font library by removing bounds checking from one of its functions. They then waited 12 months to see if anyone had noticed or fixed the change. They then created a PDF with the font in question, including the embedded jailbreak code. The PDF was then released.
That attack had the potential to be incredibly dangerous.
The fact that they released the updated font to the public means everyone using that font became vulnerable if their PDF reader had a similar lack of bounds checking (which is incredibly likely).
Someone else could make a different PDF with a different payload, so even if that specific person just wanted to jailbreak their own phone, others could do bad things with it.
It's so funny that this article features a timeline graphic that's a PNG with a lot of text on it. On Firefox + an M1 MacBook, the first image isn't scaled for high-resolution screens, so every pixel is blurry and the text is hardly readable. Then the article continues with some SVG code... oops?
With SVG support at near-full coverage (and 99.99% for Canva's user base, https://caniuse.com/svg) you do wonder why we see so little SVG for graphics like this. It's literally easier to export, and no harder to import than PNGs.
From personal experience and having been burned by odd SVG bugs too many times over the years, a dumb image is simple, it works and you know that it works. I only reach for SVG when I need to do something that other formats cannot.
At one point, SVG couldn't do that. It was a long time ago, but I recall not being able to use SVG to render sideways text because the text tag wasn't supported by browsers.
For IE, we had a PNG font that we displayed slices of, and then we migrated to rotation css as various browsers supported it. (I think IE was the last holdout, but it's been a while.)
Depending on the implementation and on how the SVG was encoded, text in an SVG often doesn't look the same from one viewer to the next. This can seriously mess up the layout of diagrams, etc.
If you're rendering a PNG, you're already doing layout, so you might as well still do it but at least keep the individual glyphs as, well, glyphs (so that they scale to high DPI etc).
My experience as well. Using an SVG as the source of an <img> element seems to have fewer issues, but that essentially reduces the benefit of SVG to an image with a small file size and no scaling issues.
To use an SVG as an actual element, it gets way more complicated than your average WordPress user is going to want to deal with. It's one of those very interesting formats that can do some very impressive things... that very few people want/need/use.
So many bugs. The background will break in dark-mode, or the foreground. The bounding box will cut off the image contents. You export in SVG to get nice text rendering and everything carefully aligned, and it looks in Chromium the way it did in Inkscape... but then in Firefox everything is misaligned, defeating the point compared to if you had just used a PNG. It doesn't work as a social media thumbnail. And so on.
So for a contrasting anecdote, we've used SVGs for years in production on a high-traffic site, using an automated Inkscape pipeline to turn text into plain paths, and it's worked perfectly.
For diagrams, PNGs just are not a good choice. We did this in response to the mainstreaming of high-DPI displays back when Retina hit the market.
It's been a big success. The resulting SVGs are significantly smaller in filesize than PNG, and are crisp and readable no matter how the user zooms in.
The path conversion is a single inkscape command line invocation. It's worth doing.
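In an automated pipeline that step can be wrapped in something like this (a sketch, assuming Inkscape 1.x command-line flags; the file names are just placeholders):

import subprocess

def text_to_paths(src_svg, out_svg):
    # Sketch of the text-to-paths step: convert <text> elements to <path> outlines
    # and write out a plain SVG without Inkscape-specific metadata.
    subprocess.run(
        ["inkscape", src_svg,
         "--export-text-to-path",
         "--export-plain-svg",
         "--export-filename=" + out_svg],
        check=True)

text_to_paths("diagram.svg", "diagram-paths.svg")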
If it comes out of a vector application, you likely don't want to rasterize it for distribution.
On the surface, this doesn't seem quite right. Vector graphics were used ubiquitously in legacy applications where storage, memory, or bandwidth were constrained. Most early graphical computer games were only viable due to vector rendering techniques (Sierra's adventures, for example), and lots of early graphical frontends to network resources were entirely vector-based (NAPLPS, RIPScrip, etc.); Flash was primarily vector-based.
It's the widespread use of raster graphics for everything that's relatively more recent, and this has been enabled by the use of complex compression to achieve reasonable file sizes for large images. Those compression algorithms make a similar tradeoff to vector formats in that they save space at the expense of computation time.
I'm not sure if anyone has done any organized experiments, but it would be interesting to see some stats on render time of SVGs in comparison to decompression time for PNGs or JPEGs. My suspicion is that relative performance is highly context-specific, and that there's no general factor that makes either consistently faster than the other.
However, vector graphics do have the advantage of being displayable at different resolutions -- the 'S' in 'SVG' stands for 'scalable' -- whereas raster images actually need to be larger to maintain clarity at higher resolutions, increasing transfer and decompression time.
So as display resolution increases, the render time for PNGs or JPEGs would be expected to increase much more steeply than for SVGs, meaning that for many practical use cases, SVG is likely the more efficient option.
i understand your confusion, because there are many different axes of performance in play here
vector graphics will almost always be smaller than raster graphics as long as what you're displaying can be represented in a vector format at all. the only exception is when your vector graphic contains more detail than your display resolution can render. consequently, virtually every graphics application of electronics was done first with vector graphics and only later with raster graphics. the only exception i can think of is television. but radar, cad, guis, fonts, video games, first-person shooters, computerized typesetting, pretty much everything was done first with vector graphics and only later with raster graphics
however, when you're rendering to a raster display, raster graphics are always faster unless a bandwidth constraint bites you. the last step in rendering vector graphics is to copy the rendered image into the framebuffer, which is the only step in rendering raster graphics. (of course, arbitrary decompression can be arbitrarily slow; png is as fast as gif, while jpeg is much slower, and jpeg 2000 is slower still. it's easy for a vector format to be faster than jpeg 2000 and in many cases even jpeg.)
your historical accounts are a bit wrong. naplps and ripscrip never achieved significant adoption because gif was already widespread, and people used plenty of raster graphics in flash
the bigger issue is that when you don't control the renderer, vector graphics are not only slower (which is often no obstacle to interactive use now that our personal computers run ten billion instructions per second instead of one million like when ripscrip was launched) but also more unpredictable in speed. an svg that one renderer handles fine may bog down another one. generally speaking that isn't a problem with raster formats, not for any deep theoretical reason but just because they're simpler
i do agree that vector rendering is the more efficient option for many practical use cases, though
> the last step in rendering vector graphics is to copy the rendered image into the framebuffer, which is the only step in rendering raster graphics
Today it is, but historically that was not the case. Even as late as the early 00s, most OSes rendered vector primitives directly into the framebuffer, without a compositing stage. That's how e.g. Windows could be so fast on hardware that was slower than today's Raspberry Pi.
Even then, it was the OS rendering the image directly into the framebuffer, while applications that included raster graphics still had to output them via an API exposed by the OS. The only thing that's going directly into the framebuffer is the fully rendered screen, which in a GUI environment usually contains much more than the contents of a single PNG or SVG file.
In the context we're discussing, where the image is an inline PNG on a web page, the browser has to download the image, decompress it, apply relevant transformations defined by CSS or element attributes, render the HTML including the image, then pass the rendered window output to a display API exposed by the OS.
Far from just dumping an uncompressed raster image directly into a framebuffer -- although that sort of thing was definitely common on single-tasking non-GUI platforms in the past.
yes, i agree that the graphics pipeline of current browsers is a lot heavier weight, so file format efficiency is a smaller part of the picture
when rendering directly into the framebuffer was a win depended a lot on the relevant memory bandwidths. vram access being slow is not a new thing this millennium
both microsoft windows and x-windows did support hardware acceleration of drawing operations, and that was crucial for getting good gui responsivity on platforms below about 64 mips. hardware acceleration of drawing operations is kind of like vector graphics file formats but not the same thing unless the file format is wmf, so you had to do things like ripscrip on the cpu. in the 64-512 mips range that is faster than an 80486 but slower than a raspberry pi 1, you can draw a totally responsive megapixel gui entirely in software. the crucial equation is that a 256-color megapixel frame is a megabyte and you need at least 20fps to be usable (50 without double buffering), so you need 20 megabytes a second of bandwidth to the vram
the isa bus gave you 8, so you were stuck with either partial screen updates (only drawing changed regions) or hardware acceleration. other platforms were somewhat better but mostly only a little. hardware acceleration could typically do bitblt but not polygon fill, so vectors lost again. vlb came around in the 80486 timeframe and changed things a lot
the first versions of both microsoft windows and x-windows supported graphics file formats but no compressed formats
> however, when you're rendering to a raster display, raster graphics are always faster unless a bandwidth constraint bites you.
I don't agree with this at all, unless you're hyperfocusing exclusively on the path from VRAM to the display hardware, in which case, you're always outputting to a raster display device no matter what -- physical display hardware that draws vectors natively has been rare, limited to industrial equipment (old-school oscilloscopes, radar monitors, etc.) niche CRT-based arcade games, and the occasional novelty laser display.
> the last step in rendering vector graphics is to copy the rendered image into the framebuffer, which is the only step in rendering raster graphics.
This is where I feel like you're being overly particular in your analysis, because there are almost no mainstream use cases in which uncompressed raster graphics are being stored or transmitted. Certainly in the case that sparked this particular subthread -- displaying a line-graphics timeline on a website, and debating whether SVG would have been a better solution than the PNG that was used -- the relevant performance metric is overall time-to-display for images of equivalent visual quality.
The comparison here is between the time it takes to transmit, decompress, and display a PNG vs. the time it takes to transmit, render, and display an SVG. The 'display' phase factors out, because once the final image is rendered/decoded, the process to send it to the display hardware is the same. So what matters is the transmission time needed to send the file plus the computational time necessary to decompress (for raster) or render (for vector).
And my point is that both of these scale much more rapidly for raster images than for vector ones. File sizes are larger for higher resolution images, so they take longer to transfer, whereas the same vector file can be rendered at any resolution, so transfer time is constant. Decompression time also scales much more significantly for larger raster images than computation time does for rendering vector images. So at higher resolutions, all things being equal, I expect vector graphics to perform faster more often than raster graphics do.
Obviously, there are lots of other granular variables, so this isn't a deterministic rule. A massively complex SVG, e.g. with tens of thousands of polygons, curves, and fills, will likely be slower to render than a high-resolution PNG of the same image will be to decompress.
> an svg that one renderer handles fine may bog down another one. generally speaking that isn't a problem with raster formats, not for any deep theoretical reason but just because they're simpler
In theory, that's true. In practice, there are a small number of renderers in widespread use, all of which have testable performance. In this case, we are talking about web browsers, nearly all of which use one of two renderers.
> generally speaking that isn't a problem with raster formats, not for any deep theoretical reason but just because they're simple
They're simple in the sense that they are ultimately always encoding a grid of pixel values, but they're not necessarily computationally simpler due to the amount of processing necessary to compress/decompress them.
> your historical accounts are a bit wrong. naplps and ripscrip never achieved significant adoption because gif was already widespread, and people used plenty of raster graphics in flash
NAPLPS defined the entirety of the user interface to one of the major pre-internet online services starting in the 1980s (Prodigy), and itself predates GIF by nearly a decade. RIPscrip achieved near universal adoption in the BBS world for a few years prior to the internet taking off. These solutions were the only effective way to create full-screen graphical environments for bandwidth-constrained remote applications at the time.
GIF was first developed in 1987, and was initially used primarily for file uploads of images that were inherently raster (e.g. scanned photos, complex artwork, etc.) or for small icons to be uploaded once and cached locally for use in graphical interfaces. And GIF was only viable for these uses because of its compression.
Flash was primarily a vector format (into which compressed raster images could be embedded), and had huge adoption as a vector animation tool on the early web precisely because there was no other way to do vector graphics on the web, and bandwidth was neither fast enough nor codecs efficient enough to be viable for these use cases until very recently, relatively speaking.
i certainly agree that now uncompressed images are little used, but at the time i was talking about, when cpu load of still image encodings was a major concern, compressed image file formats basically did not exist
naplps is 01983, gif 01987, prodigy half a million users more or less, many fewer than fidonet, usenet, or university internet accounts at the same time
anyway, so, a lot of the disconnect in the conversation is that i was talking about the performance characteristics of 40 years ago, while you were talking about the performance characteristics of now. and the cost functions have changed significantly. it's still the case that you can display an uncompressed raster image on a raster device faster than a vector image, at least if it's already in your vram, and the extra cost of rendering vectors on a raster display is why vector images were comparatively little used in the 01980s, when all mainstream use cases of raster images used uncompressed images. but i agree that that's only minimally relevant to whether an svg or a png would be faster for a line-graphics timeline on a website!
with respect to current performance, i still disagree with this:
> Decompression time also scales much more significantly for larger raster images than computation time does for rendering vector images.
for all the compressed raster image formats i'm familiar with, decompression time is fairly precisely linear in the image size, either input or output. vector graphics rendering attempts to reach this ideal, but often fails, because in most vector formats there are interactions between objects that usually have to be taken into account in drawing. so they have to use all kinds of clever algorithms to approach the linear-time ideal which raster compression formats reach almost without effort, and those clever algorithms tend to have high constant factors
consider the 640×480 http://canonical.org/~kragen/sw/dev3/rc.png, which is produced by http://canonical.org/~kragen/sw/dev3/plotrc.py, which can also generate the same plot in eps, pdf, or svg. it ought to be close to a best case for vector rendering. imagemagick on my system takes 21 milliseconds to convert the png to uncompressed binary netpbm format (ppm p6, best of three tries); pngtopnm takes 29ms. generating encapsulated postscript instead and converting it with imagemagick takes 465ms; with pdf, 199ms. with svg, imagemagick takes 635ms, but that's obviously because it's badly implemented. (i just don't have a convenient way to benchmark the svg engines used in my browsers.)
apache batik's 'rasterizer' command takes 1294ms, and i thought maybe that was a question of jvm startup overhead, but actually, if i run it with a nonexistent filename as input, it takes only 205ms, so about 1100ms of that is actually processing the svg. benchmarking programs in hotspot is riddled with reproducibility problems, though
so in this tiny, badly done benchmark, different vector formats came out as 22 times slower than png (eps), 9.5 times slower (pdf), 30 times slower (svg), and 52 times slower (svg in batik). i suspect that in my browser svg would be only about 4 times slower, which would optimistically mean that for images the size of my entire screen it would actually be faster if they were this simple; but i don't have a good way to prove it
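if you want to redo this kind of comparison yourself, the timing loop is roughly this (just a sketch; it assumes imagemagick's convert is on the path and that rc.png and rc.svg from plotrc.py already exist in the current directory):

import subprocess, time

def best_of_three(cmd):
    # run the conversion three times and keep the fastest wall-clock time
    best = float("inf")
    for _ in range(3):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True)
        best = min(best, time.perf_counter() - t0)
    return best

print("png -> ppm: %.0f ms" % (1000 * best_of_three(["convert", "rc.png", "rc.ppm"])))
print("svg -> ppm: %.0f ms" % (1000 * best_of_three(["convert", "rc.svg", "rc.ppm"])))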
i think what this shows is mostly that vector formats are not simple, not that they're inherently slow. but i don't mean 'simple' in the sense that 'they are ultimately always encoding a grid of pixel values', as you said; i mean 'simple' in the sense that the code required to display vector formats on a raster display takes a lot more effort to write. as a crude measure of this, we can compare the amount of code in the svg and png implementations i have installed here. png is about 340k, if we include zlib, which we probably should:
$ ls -l /lib/x86_64-linux-gnu/libpng16.so.16.39.0 /lib/x86_64-linux-gnu/libz.so.1.2.13
-rw-r--r-- 1 root root 219056 Nov 27 2022 /lib/x86_64-linux-gnu/libpng16.so.16.39.0
-rw-r--r-- 1 root root 121280 Nov 5 2022 /lib/x86_64-linux-gnu/libz.so.1.2.13
qt 5's svg implementation is 360k, but it is linked with, among other things, libharfbuzz (for text layout), libfreetype (for text rendering), libicu72 (i assume for text rendering), libpng, zlib, the zstandard library, and the brotli library
but all of that is just the file format — it doesn't even include the vector rasterization code! that's done by the graphics engine in qt core, which is vaguely similar to cairo or libart in that it implements things like path items, rect items, ellipse items, line items, text items, group items, rotation, shearing, scaling, translation, and a bsp tree index to make the aforementioned interactions between drawn items efficient
the other svg implementation i have installed is librsvg2, which is the implementation used by things like vlc, the gimp, gnome, r, netsurf, links2, and cairo. it's uh
11 megabytes by itself, and it also links in libpng and zlib (and brotli, and liblzma, and harfbuzz, and pango, and freetype), plus cairo to do the actual rasterization, which is another 1.2 megabytes:
$ ls -l /lib/x86_64-linux-gnu/libcairo.so.2.11600.0
-rw-r--r-- 1 root root 1187432 Dec 9 2022 /lib/x86_64-linux-gnu/libcairo.so.2.11600.0
so maybe a good estimate is that png is 30 times simpler than svg and 4 times simpler than basic vector rendering
> NAPLPS defined the entirety of the user interface to one of the major pre-internet online services starting in the 1980s (Prodigy), and itself predates GIF by nearly a decade. RIPscrip achieved near universal adoption in the BBS world for a few years prior to the internet taking off. These solutions were the only effective way to create full-screen graphical environments for bandwith-constrained remote applications at the time.
it's not correct to describe prodigy as a 'pre-internet online service'. when prodigy launched in 01984 and got its first user, the internet had been operating for about 7 years and consisted of about 1000 hosts with somewhere on the order of a hundred thousand users. i don't think prodigy ever, at any point, had more users than the internet; it was under half a million users in 01990 (when the internet reached three hundred thousand hosts, most with many users), and i think it was under a million users even at its peak, when its scumbag employees were claiming prodigy had invented the internet and deleting any user messages that criticized a prodigy advertiser or mentioned another user by name
it's also not correct to say that naplps predated gif by nearly a decade. naplps was defined in 01983, gif in 01987. four years is not 'nearly a decade'
finally, although i'm less certain about this part, i don't think it's correct to say that 'ripscrip achieved near universal adoption in the bbs world', ever. ripscrip didn't even exist until 01992, at which point lots of us even in the usa were still running bbses on things like a commodore 64 (which was still being sold until 01994) or a 286. i used a dozen or so bbses in albuquerque at that time, and none of them supported ripscrip that i can recall at all. my non-biological sister met her boyfriend and later husband on one of the big commercial bbses in town, a chat-oriented thing. ansi art was a huge deal, but ripscrip was very little used. and then the internet went mainstream in the usa in 01994 due to the lifting of the nsfnet aup and the launch of netscape; even windows supported it late in the next year, at which point netscape had already gone public. see https://www.zakon.org/robert/internet/timeline/
try searching online for archives of ripscrip art and ansi art. the amount of ansi art even just from 01995 is orders of magnitude bigger than all the rip art that has ever existed
> GIF was first developed in 1987, and was initially used primarily for file uploads of images that were inherently raster (e.g. scanned photos, complex artwork, etc.) or for small icons to be uploaded once and cached locally for use in graphical interfaces. And GIF was only viable for these uses because of its compression.
this also contains some significant mistakes
as i recall it, gif was initially used primarily for line art, which could indeed be quite complex. myself, i mostly used it for line art and fractals. scanned photos were fairly limited because in 01987 ram was expensive, so most people's framebuffers were pretty small; a cga in graphics mode could only display 4 colors at once, an ega (or cga in text mode) only 16, and a macintosh or hercules or sun bwtwo only 2. you really want at least 256 colors for decent scanned photos, and that's the maximum that gif supported or supports even today. people who were scanning photos up to about 01990 were mostly using high-end graphical workstations and not using gif
gif's compression also doesn't help very much with scanned photos, and it doesn't help at all with 256-color scanned photos. even png is less bad here because, even though the paeth predictor was designed for low-color-depth images, the paeth predictor residuals for color gradients are much lower entropy than the raw pixel data
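for reference, the paeth predictor itself is tiny; this is roughly the per-pixel function from the png spec, with a, b, c being the left, above, and upper-left samples:

def paeth(a, b, c):
    # a = left sample, b = above sample, c = upper-left sample
    p = a + b - c                        # initial estimate
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:            # ties broken in the order a, b, c
        return a
    if pb <= pc:
        return b
    return c

# the encoder stores (pixel - paeth(left, above, upper_left)) mod 256;
# on smooth color gradients those residuals are small and low-entropy,
# which is why the filtered data compresses so much better than raw pixels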
icons for use in graphical interfaces were generally not stored as gifs, but as uncompressed raster data (.xbm, .ico, macintosh images in the resource fork) and generally were not uploaded and cached locally but rather part of the software that used them. most icons were 16×16, and a 16×16 uncompressed icon in two colors is only 32 bytes, which is smaller than literally any image in gif format
so, i think one of my errors was to not start by acknowledging that this is absolutely correct:
> It's the widespread use of raster graphics for everything that's relatively more recent, and this has been enabled by the use of complex compression to achieve reasonable file sizes for large images. Those compression algorithms make a similar tradeoff to vector formats in that they save space at the expense of computation time.
> I'm not sure if anyone has done any organized experiments, but it would be interesting to see some stats on render time of SVGs in comparison to decompression time for PNGs or JPEGs. My suspicion is that relative performance is highly context-specific, and that there's no general factor that makes either consistently faster than the other.
this is in fact one of the major points coueignoux makes in his 01975 doctoral dissertation, which knuth credited (apparently erroneously) as being the origin of the idea of outline fonts. the letterforms of coueignoux's 'france' system weren't the first vector graphics file format (that would probably be g-code) but they were important pioneers
there are other things i take more issue with. you said:
> On the surface, this doesn't seem quite right. Vector graphics were used ubiquitously in legacy applications where storage, memory, or bandwidth were constrained. Most early graphical computer games were only viable due to vector rendering techniques (Sierra's adventures, for example), and lots of early graphical frontends to network resources were entirely vector-based (NAPLPS, RIPScrip, etc.); Flash was primarily vector-based.
and this is somewhat true. what i was taking issue with was mostly something you didn't actually say and probably don't believe, so it was pretty unfair of me to project it onto you; i thought you were saying that displaying vector graphics on raster displays in the 01970s and 01980s was faster than displaying raster graphics on raster displays. but you said where storage, memory, or bandwidth were constrained, not cpu time. and of course it is absolutely correct that in the period of time when naplps (01983), king's quest ii (01985), and ripscrip (01992) came out, people did often use vector graphics to save storage, memory, or bandwidth. that wasn't the only reason they did it (for example, i used autocad on an ibm pc xt, which used vector graphics because the entire computer with its crude cga screen, second text-only screen, keyboard, and mouse was only a way to coax the pen plotter to plot out a high-quality drawing) but it was a common one
if you had an actual vector output device like the pdp-1's scope, the computer-output-on-microfilm device for which hershey designed his now-ubiquitous fonts, the imlac, the tektronix 4014 serial terminal, a pen plotter, or the evans & sutherland lds-1, all of which are from long before leisure suit larry, using vector graphics could save cpu time too. these definitely were not 'limited to industrial equipment (old-school oscilloscopes, radar monitors, etc.) niche CRT-based arcade games, and the occasional novelty laser display,' but they were mostly expensive and specialized. however, i had a cheap letter-sized hp pen plotter at home when i was a kid, and bought another used one to play around with in the late 01990s
i have a hard time describing those lying scumbags prodigy (01984) or the naplps they used (again, 01983) as 'early graphical frontends to network resources'. people had been providing graphical frontends to computer network resources since nls (01969) if not sage (01958) — generally using vector displays up to the 01970s, at which point there was a major shift toward raster-display systems like the knight tv (early, about 01974) https://gunkies.org/wiki/Knight_TV_system
flash, of course, is from 01996, so it's not an early graphical frontend to networked resources by any stretch of the imagination; it didn't exist until six years after the world-wide web. i agree that being the only way to do vector graphics on the web was a major reason people used it. (the only way except vrml, which nobody supported, and starting in 01998, vml, but only in msie.)
but people used flash for many other reasons as well. text layout in flash is much simpler and more predictable than in html, though, by the same token, supports graceful degradation and responsivity very poorly. if flash was supported at all, you could count on your fonts being supported, because you could embed them in the flash file, which html couldn't do. animation in flash doesn't require programming, so it was much more accessible to nontechnical users. when you were programming, until chrome launched (in 02008), actionscript was much faster than browser javascript. flash could play sounds; browser javascript couldn't (well, there was <bgsound> and <embed>, but those weren't really suitable for game sound effects or synchronizing audio with animation). flash could play video clips; browser javascript couldn't
given these vast differences between flash and the browser environment outside of flash, i don't think it makes sense to reduce them to vector vs. raster. the reasons allyourbase or strong bad email or thousands of terrible brochureware ecommerce sites had to be done in flash instead of html had little or nothing to do with vector vs. raster
Not quite. A type renderer will cache the bitmaps for each individual letter, so it only needs to do the vector-to-bitmap conversion once per glyph. This will also be the case if your SVG renders text as text (potentially with embedded fonts), but if it instead has that type as outlines then all the outlines need to be rendered and there’s no cache saving for typesetting, e.g., XXX
> A type renderer will cache the bitmaps for each individual letter so it only needs to calculate the vector to bitmap activity once for each individual glyph
you mean subpixel antialiasing. but yes, as it turns out, type renderers do cache subpixel-antialiased pixmaps. the part where this gets tricky is with subpixel positioning of the antialiased letterforms, but you can cache them in that case too if you quantize the positioning, even if you don't quantize it to entire pixels
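a toy sketch of that kind of cache (the names are made up; rasterize stands in for the real outline renderer):

class GlyphCache:
    def __init__(self, rasterize, steps=4):     # e.g. quantize x offsets to quarter pixels
        self.rasterize = rasterize               # (glyph_id, size_px, frac_x) -> pixmap
        self.steps = steps
        self.pixmaps = {}

    def get(self, glyph_id, size_px, pen_x):
        # quantize the fractional pen position so nearby offsets share a pixmap
        frac = round((pen_x % 1.0) * self.steps) % self.steps
        key = (glyph_id, size_px, frac)
        if key not in self.pixmaps:              # rasterize each (glyph, size, offset) once
            self.pixmaps[key] = self.rasterize(glyph_id, size_px, frac / self.steps)
        return self.pixmaps[key]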
Perhaps but on my phone I can zoom in and out real fast on a page of text without a single glitch or hiccup and there’s no way a >512px bitmap for every character displayed is used.
If you were to do a slow motion video of what’s happening, what you would see is that it’s the bitmap that’s zoomed and then the zoomed bitmap gets replaced with a rendering of the outlines. It is not re-rendering outlines at every zoom level.
Vector graphics were avoided on the Web because they made pages perform poorly. The advice was widespread, and it was easy to see why if you did encounter an SVG on the Web.
We couldn’t draw with CSS yet, but in the earliest days of that being possible, it was slow, too.
Moore’s law outran those problems, but I suspect our collectively dropping the old “raster = considerate to your user” attitude (including and especially in what we do with CSS these days) is an under-appreciated factor in the astonishingly terrible performance of the modern Web—Javascript gets most of the blame, but I think a lot of it’s giant CSS engines and doing so very much more runtime rendering on the client.
Kinda because it's harder to make a good SVG editor, and also a clean SVG is substantially harder to export and thus SVG is only seen as an authoring format.
i was appalled to discover yesterday that when i did a simple plot in matplotlib the resulting svg was mostly letterforms for deja vu sans expressed as svg paths
Try encoding it the obvious way as text instead and view your SVG file on different renderers, and you will often find that they messed up the fonts/text layout.
This is a defensive practice to ensure that your plot will look the way you intended.
yes, i understand why it's done, and i agree that svg doesn't offer a better alternative. that's because the only fonts svg guarantees the existence of are Serif, Sans-Serif, Monospace, Cursive, and Fantasy, none of which have guaranteed metrics. the reason i was surprised and appalled is that both of adobe's previous standards in the line from which svg descends (postscript and pdf) avoid that problem by guaranteeing the availability of certain core fonts with (in practice, though not in theory) well-defined standard metrics, and it makes svg a much worse standard for small images that include text
(with optipng the png shrinks to 16k but you could definitely optimize the other formats too; matplotlib fills all of them with ridiculous amounts of bloat. but the embedded font is most of it, and it's only necessary in svg)
like, you really have to fuck your vector file format up for a simple line plot to come out larger in it than as an antialiased png
or for it to come out three times larger than a fairly shitty eps or pdf
i like svg a lot (it's replaced postscript for me as the language i use for easy 2-d vector graphics) but it has some really serious deficiencies. this one is news to me; the other one i've run into is that there's no defined real-world scale, so i can't send an svg to a laser-cutting shop and ask them to cut it out at 1× or 2× scale. i have to use pdf or eps or dxf for that
I took the time many years ago to learn how to effectively draw with bezier curves. It was not time well spent. A good tool in my tool belt for the moment, but it was very much a use it or lose it skill. I tried to do something recently and it was like starting from square one, so I just used raster. Unless someone is doing this stuff professionally everyday, or they are really into it as a hobby, it’s not worth it.
I assume this is simply a software problem that could be solved with a better UI, but who knows if that will ever be solved.
On the other hand, the people doing this professionally every day are producing mostly vector graphics, just in Illustrator rather than Inkscape. They still export as png or jpeg; the vector versions only go to print (in formats like eps or pdf, not svg).
I work with Illustrator on a daily basis. Drawing everything with the pen tool is about as effective as opening up a 300dpi canvas in Photoshop and placing every pixel by hand with the pencil tool. There’s a bunch of tools for creating and editing paths at a higher level than thinking about every control point, just as Photoshop has a bunch of tools for creating and editing huge numbers of pixels in very complex ways.
It is useful for me to know how to think about individual control points, but it is rare for me to need to do this.
Things have probably improved, and/or dedicated vector apps have better options I haven’t invested time in. Learning curves abound…
I did this in Photoshop so I could improve my selections and make them smooth. This was also back around 2006 if I had to guess. At the time, when I looked up what to do, all I found was learning how to use the control points, so that’s what I did.
What would be a keyword to look for to find the more modern way, if I’m not looking to go down a rabbit hole or learning all the possible options to find those bits?
I fell down deep into the rabbit hole of Beziers in the Rhinoceros modelling app. There is something special about clean higher order curvature continuity across shapes. It's simply beautiful.
That said, I have big issues with learning any other tool. So much learning of the app interface was required before I could freely draw... I really do see why bitmap simplicity wins out in "I just need to have it done; needs to be published by x" cases.
You need to export all the text as curves to get predictable font and rendering (the fonts used in the image obviously aren’t your regular web-safe fonts), and that’s not gonna be small with a lot of text.
Even when "export to SVG" is available, you often have problems if you can't force it to export (amusingly enough) fonts as vectors instead of referring to fonts by name, etc.
Makes the file larger, but makes it render (mostly) correctly on any system. There can still be discrepancies, however.
Or find a markup-based way to display that info. Text-based images can also often have accessibility issues. It's best, when possible, to sidestep such images and use markup. Even an SVG, AFAIK, might run into accessibility issues if not marked up correctly.
Thank you for this clarification, I was scratching my head at this, wondering if I had misremembered my entire childhood desktop publishing experience. IIRC the relative openness of TrueType encouraged/forced Adobe to open up their previously proprietary Type 1 fonts.
It looks a little like a redesign of fontforge to call libraries instead of all these external commands would solve a class of problems. Who knows how many are still lurking. I am lucky I only use(d) fontforge on my local system with fonts I trust.
Maybe we need some kind of "Helvetica Confederation" to sort out these issues. A neutral body that can set standards and act as a safe repository for fonts.
Turing-completeness has nothing to do with safety. XML parsing isn't Turing complete, AFAIK, yet external entity resolution is a problem. Postscript is Turing complete, but I think it can be properly sandboxed, it's a VM with no IO other than "pages" that it can draw on.
Langsec https://langsec.org/ would like to have a word with you about your view that only Turing-completeness is a problem and you can solve all security issues with overpowered config formats by just chucking them in a sandbox.
That's not what GP is saying? The point, as I understand it, is that pretty much all formats that can't be processed in a single trivial pass have to at least be sandboxed w.r.t. their time and memory usage. So just because a format has more surface-level power doesn't necessarily have to do with how prone a processor is to security issues.
Indeed, my takeaway from your LangSec link is that formats shouldn't have complex grammars that leave holes open in parsers, not that formats can't represent powerful semantics. If you reach an exploitable hole in the parser, then you've likely already lost, short of the parser itself being sandboxed. Meanwhile, a TM bounded in time and space is just a finite state machine, not unlike all the other state machines in a typical processor.
Perhaps it would be useful to describe a program type that only does one pass through its source. No loops or function calls. It may be useful to describe a single block of reusable sections, which cannot refer to itself in whole or part, to reduce program redundancy. Or rely on compression algorithms to remove the need for even that. The one pass part would be something like a shader language.
It's True! Type has become easier with a Computer - Modern systems can do amazing things compared to the Typewriter of the past, but we need to get with the Times and work for a Bold Futura.
These vulnerabilities don't seem to be related to Helvetica. I was expecting a critique related to fonts themselves and not bugs with general font rendering.
No, but for designers, designing with an unknown/potentially changing font is more work and produces worse-looking results. So unless there is a set of reliable cross-platform system fonts, there is zero incentive for designers to use system fonts for commercial projects.
If you have ever worked in design, the absolute horror is when you designed something well and then the boss of the org you did it for opens it on their IE6 and it looks like shit. This is a similar problem.
Is it truly designed well if you didn't find out the target audience and the tools they would be using to view that design?
I love good design. I really do. And it truly is a critical piece of good software. Yet the most painful experiences I've had in my career have come from working with designers who push their idealized design philosophies onto a reality where they aren't the right answer. Designers need to be as focused as anyone else on understanding the customers and solving the customer's problems.
I agree with you. There is a lot of bad design and there are a lot of bad designers. Privately I tend to call design that doesn't consider function "styling", and the people who make those decisions "stylists".
That being said designing for screens is already hard because the sizes and proportions of the screens and windows can vary wildly and even change during use. Adding potentially different fonts with wildly different widths and readability into the mix is certainly something that doesn't make things easier.
Again, that doesn't mean using a system font is something I'd never do, it just means that whether that is a smart or a bad choice depends on all kind of factors.
commonly the web browser passes the font data it downloads to the operating system for rendering, and there have definitely been exploitable holes in downloadable font handling in the past, in large part because the font rendering engines were not written with malicious data in mind (because you were buying your fonts from bitstream or apple or mergenthaler or whatever)
in addition to the omission of intellifont which mnw21cam's comment points out, this is missing most of the historical development of computerized fonts
it's also missing hershey fonts, which are public-domain vector stroke fonts from 01967, and knuth's computer modern fonts, which are bézier outline fonts from 01979 with the final version published in i think 01983. one origin of the concept of outline fonts seems to be from p. j. m. coueignoux's 01973 master's thesis at mit, which he elaborated on in his doctoral dissertation in 01975 (including many examples of fonts he'd represented with bézier splines). even earlier, urw founder karow's outline font design system 'ikarus' was in use starting in 01972, and a very substantial fraction of the fonts people use every day were originally outlined in it. from 01974 the fred system on the alto at xerox parc was also being actively used to design outline fonts, though knuth doesn't seem to have known about the parc work or karow's work. a couple of people who worked on the parc project went on to found adobe
coueignoux's dissertation cited mergler and vargo's 01968 program for making parametric outline fonts, called 'itsylf', but it was not able to produce a complete font (just 24 letters), and its objective was just plotting out the characters on a plotter for later use with more traditional typefounding approaches, not computerized typesetting. even coueignoux doesn't seem to have attempted computerized typesetting; the letters in the specimen sheets in his dissertation are not even aligned on a common baseline. hershey did write a full typesetting system, although unlike his fonts it has fallen entirely out of use, and of course the typesetting system knuth wrote at the time remains in wide use to this day, often using the outline fonts he designed at the time
so 'adobe introduces the concept of outline fonts' in 01984 is a bald-faced lie, and the authors should be ashamed of themselves for publishing such ignorant tripe, which is contradicted by the very page they link to on the history of computerized typography
One of the reasons I use xterm is its beautiful default bitmap fonts: every character is distinct and easy to understand, no matter if it's 1 i l | or I, no matter if it's 0 O o or Ø.
The extra flexibility of vector fonts is not worth the compromise of clarity to me.
I use the different sizes too:
"unreadable" when I need to copy-paste huge amounts of text
"default" for most stuff
and "huge" when I need a bit more focus.
This is also the reason I prefer a 1920x1200 px display rather than higher resolutions at the same size. Higher DPI makes the characters too small for my old eyes, and switching to a vector font only to try and reproduce the shapes I already know seems idiotic.
Fonts are actually really complicated. To a degree that is hard to understand unless you have had to deal with the intricacies of font files, or rendering text yourself, not using a library that abstracts away all the gritty details.
It doesn't help that TTF and OTF are decades old formats that predate Unicode code points having more than 16 bits, or needing color for emoji glyphs.
The only reason that they can't keep it as simple as font-family: system-ui
is probably because Canva is a tool for the larger crowd of designers. (I mean, I am not gonna start digging into what type of designer, that's a diff story)
and designers LOVE goofin' around with fonts, right?
Why can't programmers keep it as simple as dependencies: use system libraries?
The reason is control. If you want your stuff to align neatly and look good in the details you need to define a font. And there is no single system font you can truly rely on, so you just bring your own.
I'd be very happy if OS creators could agree on one set of universally supported system fonts, preferably open source, that work on all systems — until then your two options are:
- not caring about good design, giving up control and using whatever (sometimes this is a feasible option)
- bringing your own font and keeping control over the details
I used to think fonts didn't matter. This changed when I had a takehome with a simple UI, some rows of text with alignment and some borders. The entire thing looked off until I setup the font that was in the spec. The difference with the "correct" font was so big it was hard to believe.
I still think most of the time sticking to system is good enough. But I can see the custom font point of view if you strive for something more.
I’m a big fan of typography, but I also strongly feel that applications (both native and web) should use system fonts and obey any user preferences for weight and size.
It’s a question of both accessibility and UX productivity. Switching to a different application shouldn’t be a jarring context switch. The user should be in control of their desktop environment, not the branding department at company X who insists that everything they release should be using font Y this year and then font Z the next. These changes bring no user benefits.
It’s different for software and content that actually tells a story. Games and web sites have a lot more freedom for custom design, and typeface choices are an elemental part of that.
I just wish more designers understood the distinction between telling a story and enabling people to be productive.
I create a lot of webapps. The last couple of years I started using only system-ui as font @ 100% size.
Nobody ever commented about this because it just looks good.
When you are building a web experience I can understand the need for other fonts (although I block downloading external fonts). But I totally agree with you when it comes to (web) apps.
> The last couple of years I started using only system-ui as font @ 100% size.
Question because I'm a frontend noob. How do you achieve "100% size"? I've just been using font-size: 16px on the body element (which seems wrong), but I don't know how to word a search query for this.
Very easy: don't set a size and use em units in your CSS.
font-size: 1em;
Imho it's also much better to use em units for margins and paddings because this will make everything look like it belongs together, even for users that use a very large font size.
> Vignelli designed the iconic signage for the New York City Subway system during this period, and the 1970s–80s map of the system.
If you're Vignelli, you can tell the MTA which font to pick. If you're not him (or someone of equally high standing), you will most likely have to work with a style guide someone else thought out, and use whatever font(s) the style guide says you should use...
Plus, that was back in the seventies. Nowadays, every self-respecting company wants to have their own font (even if it's indistinguishable from Hel(l)vetica/Arial/whatever for >99% of people) - https://www.hvdfonts.com/custom-cases
For the uninformed like me, https://en.wikipedia.org/wiki/Massimo_Vignelli says "Vignelli's designs were famous for following a minimal aesthetic and a narrow range of typefaces that Vignelli considered to be perfect in their genre, including Akzidenz-Grotesk, Bodoni, Helvetica, Garamond No. 3 and Century Expanded. ... "Out of thousands of typefaces, all we need are a few basic ones, and trash the rest.""
I'm not a font expert, but I do appreciate good typeface. I view typefaces like shoes - different types for different uses. I have my running shoes, my hiking boots, my beach shoes, my dress shoes, etc...
I have my mono-spaced typefaces, my variable-width typefaces, my serif, and my sans serif.
If not just one, a small handful of maybe 3-5 fonts. The idea that someone needs to scroll through a list of hundreds of fonts every time they have a new project, and pick multiple fonts for every project, seems to mostly be a form of procrastination… or leaning on a font to cover up a weak design.
This strikes me as similar to the Web Colors that used to be important. They had a selection of a couple dozen colors that were safe to use and would display correctly on basically the entire gamut of systems, browsers and color depths of the time in the 90s, and developers were... highly encouraged... to pick from them.