It would be great if we could also go ahead and fix subpixel anti-aliasing for OLED screens. People have been trying for years to get Microsoft's attention about this issue. [1]
The subpixel layout of OLED screens is different from the traditional layout, so text ends up looking pretty bad. Patching ClearType would be the first step to fixing this issue. I'm surprised that none of the display manufacturers have tried twisting Microsoft's arm to fix it. At the present moment OLED screens are the superior display technology, but they cannot be used for productivity because of this issue.
> The subpixel layout of OLED screens is different from the traditional layout, so text ends up looking pretty bad. Patching ClearType would be the first step to fixing this issue.
Patching ClearType is unfortunately not as straightforward as it should have been. In an ideal world, you just change the sampling kernel your rasterizer uses to match the subpixel layout (with perceptual corrections) and you’re done. In our world, it takes hackery of Lovecraftian levels of horror to display crisp text using a vector font on a monitor with a resolution so pitiful a typographer from centuries ago would have been embarrassed to touch it. Unfortunately, that (< 100 dpi when 300 dpi is considered barely acceptable for a print magazine) is the only thing that was available on personal computers for decades. And if you try to avoid hacks, you get more or less Adobe Reader’s famously “blurry” text.
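To make the “ideal world” part concrete, here is a toy 1-D sketch of kernel-based subpixel rasterization for an RGB-stripe panel. The function and the filter taps are made up for illustration (FreeType’s LCD filtering works on the same principle but is tuned much more carefully); the point is only that the panel geometry enters through where you sample and how you weight the samples:

```python
import numpy as np

def rasterize_rgb_stripe(coverage_3x, taps=(1, 2, 3, 2, 1)):
    """Toy 1-D subpixel rasterizer for an RGB-stripe panel.

    coverage_3x: ideal glyph coverage sampled at 3x the horizontal pixel
    resolution (one sample per subpixel), values in [0, 1].
    taps: small FIR kernel spread across neighbouring subpixels, trading
    sharpness for less colour fringing (these taps are illustrative only).
    Returns an (n_pixels, 3) array of per-subpixel alpha values.
    """
    k = np.asarray(taps, dtype=float)
    k /= k.sum()
    # Filter the oversampled coverage; each consecutive triple of filtered
    # samples becomes the R, G and B alpha values of one pixel.
    filtered = np.convolve(coverage_3x, k, mode="same")
    n_pixels = len(coverage_3x) // 3
    return filtered[: n_pixels * 3].reshape(n_pixels, 3)

# A hard-edged stem, oversampled 3x across 8 pixels.
stem = np.zeros(24)
stem[9:15] = 1.0
print(rasterize_rgb_stripe(stem).round(2))
```

For a different subpixel layout you would, in principle, only swap out the sampling positions and the kernel; the hinting described next is what makes that swap so painful in practice.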
One of the parts of that hackery is distorting outlines via hinting. That distortion is conventionally hand-tuned by font designers on the kind of display they envision their users having, so in a homogeneous landscape it ends up tied to the specifics of both ClearType’s subpixel grid (that has been fixed since 2001) and Microsoft’s rasterizer (which is even older). Your sampling kernel is now part of your compatibility promise.
The Raster Tragedy website[1] goes into much more detail with much more authority than I ever could lay claim to, except it primarily views the aforementioned hackery as a heroic technical achievement whereas I am more concerned with how it has propagated the misery of 96 dpi and sustained inadequate displays for so long we’re still struggling to be rid of said displays and still dealing with the sequelae of said misery.
> Unfortunately, that ( < 100 dpi when 300 dpi is considered barely acceptable for a print magazine)
I find this fascinating, because I recall school textbooks having visible dots, but I'm yet to experience what people refer to as "oh my god I'm seeing the pixel!".
It further doesn't help that when seated at a typical distance (30° hfov) from a ~23" 16:9 FHD display (96 ppi), you get a match (60 ppd) for the visual acuity you're measured for when an optometrist tells you that you have 20/20 eyesight.
It's been of course demonstrated that eyesight better than 20/20 is most certainly real, that the density of the cones in one's eye also indicates a much finer top resolution, etc., but characterizing 96 ppi as so utterly inadequate will never not strike me as quite the overstatement.
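A quick back-of-the-envelope check of those numbers (the 30° horizontal field of view and the 60 ppd acuity figure are the assumptions stated above):

```python
import math

# ~23" 16:9 FHD panel viewed so that it spans ~30 degrees horizontally.
diag_in, aspect = 23.0, 16 / 9
width_in = diag_in * aspect / math.hypot(aspect, 1)        # ~20.0 inches wide
distance_in = (width_in / 2) / math.tan(math.radians(15))  # ~37 inches away
ppd = 1920 / 30                 # ~64 pixels per degree (small-angle approximation)
print(f"width {width_in:.1f} in, distance {distance_in:.1f} in, {ppd:.0f} ppd")
# 20/20 acuity is conventionally 1 arcminute of resolution, i.e. 60 ppd,
# so ~96 ppi at this viewing distance does sit right around that threshold.
```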
As far as I know, visible dots in color printing are usually due to limitations on the set of available colors and the limited precision with which the separate stages that deposit those colors on the sheet can be aligned with each other, not due to inherent limitations on the precision of each of those color layers. You get dots in photos, but not jagged lines in your letters. And 300 dpi is what your local DTP / page layout person will grumpily demand from you as the bare minimum acceptable input for that dithering, not what the reader will ultimately see. One way or another, 96 dpi pixelated (or blurry) text in e.g. an illustration in a printed manual is really noticeable, and miserable to read in large stretches.
A more precise statement is perhaps that 96dpi is much too low to use plain sampling theory on the original outlines to rasterize fonts. It does work. The results are readable. But the users will complain that the text is blurry, because while there’s enough pixels to convey the information on which letter they are looking at, there are not enough of them to make focusing on them comfortable and to reproduce the sharp edges that people are accustomed to. (IIRC the human visual system has literal edge detection machinery alongside everything else.)
And thus we have people demanding crisp text when the requisite crispness is literally beyond the (Nyquist) limit of the display system as far as reproducing arbitrary outlines. Sampling theory is still not wrong—we can’t do better than what it gives us. So instead we sacrifice the outlines and turn the font into something that is just barely enough like the original to pass surface muster. And in the process we acquire a heavy dependency on every detail of the font display pipeline, including which specific grid of samples it uses. That’s manual hinting.
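Here is a toy illustration of that sacrifice (not any real hinting algorithm, just coverage sampling of a single vertical stem): unhinted, a 1.4-pixel stem whose edges fall mid-pixel comes out as two grey columns; snap its edges to the pixel grid and it is crisp, but it is no longer the stem the designer drew, and the snap only makes sense for this one sample grid.

```python
import numpy as np

def coverage(left, right, n_px, oversample=64):
    """Per-pixel coverage of a vertical stem spanning [left, right), in pixel units."""
    xs = (np.arange(n_px * oversample) + 0.5) / oversample
    inside = (xs >= left) & (xs < right)
    return inside.reshape(n_px, oversample).mean(axis=1)

# Unhinted: a 1.4 px stem with edges mid-pixel -> two grey columns ("blurry").
print(coverage(1.3, 2.7, 5).round(2))     # roughly [0, 0.7, 0.7, 0, 0]

# Toy "hint": snap both edges to pixel boundaries before sampling -> crisp,
# but the stem is now 2 px wide and shifted relative to the designed outline.
left, right = round(1.3), max(round(2.7), round(1.3) + 1)
print(coverage(left, right, 5).round(2))  # [0, 1, 1, 0, 0]
```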
Seriously, leaf through the Raster Tragedy (linked in GP) to see what the outlines look like by the time they reach the rasterizer. Or for a shorter read, check out the AGG documentation[1] to see what lengths (and what ugliness) Microsoft Word circa 2007 had to resort to to recover WYSIWYG from the lies the Windows font renderer fed it about font metrics.
As for seeing pixels—I don’t actually see pixels, but what I do start seeing on lower-res displays after working on a good high-res one for a week or so is... lines between the pixels, I guess? I start having the impression of looking at the image through a kind of gray grille. And getting acclimatized to (only!) seeing high-res displays (Apple’s original definition of Retina is a good reference) for several days really is necessary if you want to experience the difference for yourself.
> Patching ClearType is unfortunately not as straightforward as it should have been. In an ideal world, you just change the sampling kernel your rasterizer uses to match the subpixel layout (with perceptual corrections) and you’re done. In our world, it takes hackery of Lovecraftian levels of horrifying to display crisp text using a vector font
Can you explain why that is? Is it a bed Microsoft made or something more intrinsic to font rendering generally?
> whereas I am more concerned with how it has propagated the misery of 96 dpi and sustained inadequate displays for so long we’re still struggling to be rid of said displays and still dealing with the sequelae of said misery.
Well, Apple found a solution that works with web and print - and the font( file)s are the same. What's the secret stopping Microsoft from going the Apple route, other than maybe backwards compatibility?
> What's the secret stopping Microsoft from going the Apple route, other than maybe backwards compatibility?
Better monitors (what they call Retina). Older pre-Retina screens render the outlines faithfully, which means that the text is (a bit) blurry. This is the poison that needs to be chosen (there is no way around it): Microsoft prioritized legibility while Apple prioritized faithfulness.
Raster Tragedy should have been called the Raster Disaster. Mostly because I keep calling it that and looking for the wrong thing every single time I want to link to it.
They should just ditch ClearType and use grayscale AA like Acrobat used to have. PPI is high enough on modern displays that the reduction in resolution won't matter.
> PPI is high enough on modern displays that the reduction in resolution won't matter.
Have you looked at the desktop monitor market recently? There are still a lot of models that are not substantially higher PPI than what was normal 20 years ago. PCPartPicker currently shows 1333 monitors in-stock across the stores it tracks. Of those, only 216 have a vertical resolution of at least 2160 pixels (the height of a 4k display). Zero of those 4k monitors are smaller than 27", so none of them are getting close to 200 PPI.
On the low-PPI side of things, there are 255 models with a resolution of 2560x1440 and a diagonal size of at least 27". One standard size and resolution combination that was common over a decade ago still outnumbers the entirety of the high-PPI market segment.
If you look at the Steam Hardware Survey results, their statistics indicate an even worse situation, with over half of gaming users still stuck at 1920x1080.
If subpixel antialiasing made sense during the first decade after LCDs replaced CRT, then it still matters today.
Aside from Steam, consider one of the biggest markets for displaying text on the Windows OS - low-end office PCs. My entire company runs on Dell’s cheapest 24” 1080p monitors. I don’t expect that will change until Dell stops selling 1080p monitors.
> If you look at the Steam Hardware Survey results, their statistics indicate an even worse situation, with over half of gaming users still stuck at 1920x1080.
Are these the native resolution of the monitor or just what people play games at? I suspect the latter because the most popular cards are more mid-level / entry-level cards. The 1650 is still at #4.
The Steam Hardware Survey samples the system when Steam is launched, not while a game is playing. For most users, Steam starts when they log in to the computer. I think the unfortunate reality is that a very large number of gamers are still using 1920x1080 as their everyday ordinary screen resolution for their primary display, though a few percent at least are probably on laptops small enough that 1920x1080 is somewhat reasonable.
Not all gamers have a computer entirely dedicated to that purpose. Even among those that do, it's not uncommon to also play games or run Steam on another machine.
I still have Steam installed on the laptop that was long ago replaced as my gaming computer but which is occasionally used for other purposes, because I have no particular reason to remove it.
It's actually probably reporting the software-configured resolution, not the hardware capability. The important distinction is whether it's a system-wide resolution setting or a game-specific setting that may not apply to browser contexts (except for the ones used by Steam itself).
What makes you think that it’s more likely reporting a software-configured resolution?
It is after all a hardware survey, and focused on what user hardware supports.
It's vastly simpler, and more useful, for Steam to detect the current resolution. Trying to detect the maximum supported resolution is non-trivial, especially when there are devices that will accept a 4k signal despite having fewer pixels.
Plenty of gaming monitors are native 1080p. Compared to a higher-res normal monitor at the same price, you usually get a higher refresh rate and better pixel response times. Or you used to, anyway—looks like that part of the spec sheet has been effectively exhausted over the last couple of years, and manufacturers looking to sell something as “gaming gear” are slowly moving on to other parts of it. As long as they’re raising the baseline for all of us, I’ve no beef with them.
They existed for a while, and as recently as a year or two ago there was a cheap LG 24" 4k that was only about $300. But I think the monitor market in general moved on to focus more on larger sizes, and "4k" became the new "HD" buzzword that meant most products weren't even going to try to go beyond that. So basically only Apple cared enough to go all the way to 5k for their 27" displays, and once everyone else was doing 4k 27" displays a 4k 24" display looked to the uninformed consumer like a strictly worse display.
The linked issue points out that grayscale AA has color fringing on some of the subpixel layouts. It's not obvious to me how one would fix it, though; it seems like a deficiency built into panels with weird subpixel layouts, and those layouts are a compromise chosen to achieve (fake?) higher PPI.
I have a 1440p 27" monitor. On my monitor, ClearType vs greyscale AA is the difference between acceptable text and stabbing your eyes out with a rusty spoon.
From what I can gather, 4k at 32", which is the typical size you get 4k panels at, is just 30% more pixel-dense.
I have strong doubts just 30% more density will somehow magically make grayscale AA acceptable.
If you know any good 27" 4k mixed-use (ie >= 144Hz, HDR) monitors I'm all ears.
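For reference, the density arithmetic behind the "30% more pixel-dense" figure (straight Pythagoras, no assumptions beyond the sizes mentioned in this thread):

```python
import math

def ppi(h, v, diag_inches):
    return math.hypot(h, v) / diag_inches

for name, h, v, d in [('27" 1440p', 2560, 1440, 27),
                      ('32" 4k',    3840, 2160, 32),
                      ('27" 4k',    3840, 2160, 27)]:
    print(f"{name}: {ppi(h, v, d):.0f} ppi")
# 27" 1440p ~ 109 ppi, 32" 4k ~ 138 ppi (about 26% denser), 27" 4k ~ 163 ppi.
```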
The built-in ClearType Text Tuner [^1] fixed most of my issues with OLED rendering on Windows. However, some applications do their own subpixel AA. Even on macOS, where they supposedly removed all subpixel AA, I still see the occasional shimmering edge.
> OLED screens are the superior display technology
With all the hassle to apparently keep my OLED from burning in [^2], I'd disagree. Apple's displays achieve the same contrast levels with backlight dimming. The only cost is a slight halo around tiny bright spots. It's only really noticeable when your white cursor is on a pitch-black screen.
[^2]: The edges of the picture are cut off because of pixel orbiting. I have to take a 5-min break every 4 hours for pixel refresh. I have to hide the menubar and other permanently visible UI-elements.
It's not that easy. With the stripe layouts, all you have to do is increase the horizontal or vertical resolution when rasterizing, then map that to subpixels. There's no current methodology or algorithms to deal with triangular layouts, etc. And OLED's subpixel layouts have been moving around yearly with both LG and Samsung. Those two even have RGB stripe layouts forecast for the future.
Also, this isn't true? The Blur Busters founder (Mark Rejhon) has worked a lot on this exact issue and has already defined shaders and approaches for rendering text on arbitrary subpixel geometries in the PowerToys repos (no thanks to Microsoft).
His approach is based on the FreeType Harmony LCD subpixel rendering approach, which has supported non-striped layouts for over 6 years.
We're currently blocked by Microsoft, who continue to ignore everyone on this issue despite Mark's best efforts. Core Windows shaders need to be modified and he can't really proceed without cooperation, without injecting a security risk for anyone who uses his solution.
LG was RWBG, but newer panels use RGWB, which works better with subpixel rendering.
I wasn't aware of the FreeType Harmony approach, but it looks like there are some problems, like no FIR filtering. The rapidly changing subpixel arrangements would also be difficult to accommodate. They'd have to have a new DDC command or something to poll the panel's subpixel matrix. I imagine by the time they got that through the standards bodies, RGB OLED would be ready.
Once upon a time I owned an LCD monitor with diagonal subpixels[1], and subpixel antialiasing absolutely didn’t work on that either. It’s just that it was very niche and I’m not sure if there were even any follow-up products that used the same arrangement.
> There's no current methodology or algorithms to deal with triangular layouts, etc.
I believe there are rasterization algorithms that can sample the ideal infinite-resolution picture according to any sampling kernel (i.e. shape or distribution of light) you desire. They may not be cheap, but then computer graphics is to a great extent the discipline of finding acceptable levels of cheating in situations like this. So this is definitely solvable. Incompatibility with manual hinting tuned to one specific sampling grid and rasterization algorithm is the greater problem.
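A sketch of what such brute-force sampling could look like: treat the glyph as an analytic coverage function and average it around each subpixel's actual 2-D centre. The shape, the kernel and the triangular offsets below are all made up for illustration; a real renderer would need the panel's real geometry (and a lot more performance engineering):

```python
import numpy as np

def glyph_coverage(x, y):
    """Stand-in for the 'ideal infinite-resolution picture':
    1 inside the shape, 0 outside. Here, a solid disc of radius 3 at (4, 4)."""
    return ((x - 4.0) ** 2 + (y - 4.0) ** 2 <= 9.0).astype(float)

def sample_subpixel(cx, cy, radius=0.5, n=16):
    """Average the ideal image over a small box around one subpixel centre.
    Any other kernel (Gaussian, panel-measured PSF, ...) would slot in here."""
    ox, oy = np.meshgrid(np.linspace(-radius, radius, n),
                         np.linspace(-radius, radius, n))
    return glyph_coverage(cx + ox, cy + oy).mean()

# Hypothetical subpixel centres for one pixel of a triangular layout.
triangular_offsets = {"R": (0.25, 0.25), "G": (0.75, 0.25), "B": (0.5, 0.75)}
px, py = 6.0, 4.0   # a pixel near the right edge of the disc
for channel, (dx, dy) in triangular_offsets.items():
    print(channel, round(sample_subpixel(px + dx, py + dy), 2))
```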
I don't think it's a big issue on Microsoft's radar because a lot of OLED screens have a high enough pixel density that the problem isn't really noticeable. The subpixel arrangements themselves have also improved in recent years, further mitigating the issue.
I have a 27" 1440p 3rd gen QD-OLED panel and while I can make out some fringing if I pay real close attention to black-on-white text, it's not noticeable in general usage. The 4k panels have such a high DPI that I can't see the fringing at all without a magnifying glass.
If you know that your monitor is a 3rd-generation QD-OLED panel, then you probably know that text rendering was the main complaint about earlier generations of QD-OLED, and there are probably still more of those in the wild than ones as recent as yours.
This is also a very valid point. The improvement in text clarity on this panel was one of the key reasons why I decided to pull the trigger on this $1,000 monitor, while I had passed on previous models.
On OLEDs, high levels of ambient light hitting the monitor tend to wash out blacks, making them appear dark gray, thereby subverting one of the most clear-cut advantages of OLED.
I don't see this on any of my phones or wearables. I'm aware QD-OLED in particular has this weakness, but haven't heard of or experienced any other OLEDs having this issue.
I wonder if it's actually due to the quantum dots, or if it's more broadly a thing for large OLED panels. I haven't spent any significant time using an OLED TV, and I think most of the OLED computer monitors I've seen in person were QD-OLED.
The story I'm aware of is that in the QD-OLED display stack it was not possible to put a polarizer layer in, which is what causes its telltale weakness in ambient light rejection.
So you'd also not see this on other types of displays with a quantum enhancement film (i.e. FALD MiniLED + quantum dots), it's specifically QD-OLED that has this weakness.
This is to the extent that if the self-emissive quantum dot demo from this year's CES was real, even that won't have this issue (although it will likely still have the stupid triangular subpixel geometry, like QD-OLED, as the demo unit also had that).
I don't know the mechanism behind it, but I've done side by side comparisons and it's clear to the untrained eye that the impressive contrast ratio of OLEDs is easily weakened by ambient lighting conditions.
My overriding feeling after reading this is: displaying text is a key job of a web browser. A browser that showed no text would be useless. And yet, after learning that Chromium's text looked wrong on Windows (by far the most important platform for Edge and Chrome), it took 4 years for the Chromium and/or Edge teams to fix it.
4 years of user research?
3 years to respect the user's ClearType Tuner values?
Being a regression from pre-Chromium Edge, this should have been a release blocker on Chromium-based Edge. Instead, text looked bad for 4 years.
> Being a regression from pre-Chromium Edge, this should have been a release blocker on Chromium-based Edge. Instead, text looked bad for 4 years.
Text didn't look bad. It just didn't look identical to the rest of the OS.
It's not obvious why that should be a blocker at all, rather than a low-priority inconsistency.
And for people who switch between devices all day long but use the same browser, you could even argue that it's more important for text rendering to be consistent in a browser across devices, rather than consistent within a device. I don't personally think that, but I can see why there might be discussion over whether this is even an issue at all, much less a blocking one.
Furthermore it's not just that it didn't look identical to the rest of the OS. There is in fact no culture for Windows apps to conform to the platform specific look and feel. A long time ago people were discussing inconsistent ClearType use in Microsoft's own apps like Office and Internet Explorer: it turned out that both Office and IE had their own settings for enabling or ignoring ClearType. In my opinion, that choice should belong to the OS and should never be given to app developers.
Yeah as a person who only uses Microsoft Windows when absolutely necessary, who consequently has not been conditioned to overlook its many quirks, I find it hilarious that someone would judge an app for not being consistent with the look and feel. To see how unrealistic this complaint is, all you have to do is launch the Control Panel, then launch the Device Manager, and compare them.
Not sure why, this is like fallacies 101. Just because many other surrounding applications are inconsistent in their presentation, that doesn't mean this situation should be worsened.
To an unfortunately large degree, browsers and browser engines are simply not concerned with respecting the host platform's defaults and conventions. The browsers are here to replace (and reinvent) the host platforms, not integrate with them. Developers writing apps to run in a browser engine are more likely to care about their app looking the same across all devices and operating systems, even if their users may care more about the app following the same conventions as other apps on their platform of choice.
Edge getting this wrong is embarrassing for Microsoft, but is not at all surprising when you take into account how notoriously fractious Microsoft is and how unlikely it is that anyone in Microsoft could enforce a cohesive vision for UI standards to the extent of being able to make this a release blocker for Edge.
That's actually pretty fast. It took 10 years for Java[1] and 5 years for JavaFX[2] to fix some trivial but rather severe text rendering bugs. Most of that time was getting enough courage to fix them myself. :-)
Reporting text rendering bugs is frustratingly difficult!
Maybe I just have low standards but to me text looked absolutely fine while it "looked bad". I have to try very hard to see any difference at all. I don't think it's shameful that it looked this way for 4 years, I think it's shameful that such a degree of wasted effort went into "fixing" it. What a ludicrous use of manpower.
This is the expected downside of them abandoning Edge's rendering engine and switching to a new one. A lot of things will look slightly different. They accepted that premise from the start.
It seems unlikely someone is going to block a release because an expected thing happened.
And that’s why Edge doesn’t publish their issue tracker. I’ve reported bugs in user profile data and Wallet that have been ignored. The Edge changelog just says “fixes”, but I haven’t observed anything noticeable until this rendering fix.
Chrome's font rendering issue on Windows has been known for more than a decade. Turns out all you need to do is read the proper gamma/contrast values from the registry.
But they didn't fix it in Skia, so most Skia-based projects still have shitty font rendering on Windows.
Yes, you're correct. My memory is a bit fuzzy, but the fact is in the article:
> While Skia uses DirectWrite on Windows for certain functionality such as font lookup, the final text rasterization is actually handled directly by Skia. And one major factor in the "washed out" feedback from users is the internal contrast and gamma settings for text rendering.
> Two main differences in text contrast and gamma values were uncovered between Edge's Chromium-based engine and its prior engine. First, Skia does not pick up text contrast and gamma values from the Windows ClearType Tuner. Secondly, it uses different default values for text contrast and gamma than those used by Edge's DirectWrite-based text stack.
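For what it's worth, Windows itself exposes the ClearType contrast setting (one of the knobs the ClearType Tuner adjusts, as I understand it). A minimal way to read it, as a sketch only and not how Chromium's integration is actually implemented, is via SystemParametersInfo:

```python
import ctypes

# SPI_GETFONTSMOOTHINGCONTRAST returns the ClearType contrast value,
# documented as ranging from 1000 to 2200 with a default of 1400.
SPI_GETFONTSMOOTHINGCONTRAST = 0x200C

def cleartype_contrast():
    value = ctypes.c_uint(0)
    ok = ctypes.windll.user32.SystemParametersInfoW(
        SPI_GETFONTSMOOTHINGCONTRAST, 0, ctypes.byref(value), 0)
    if not ok:
        raise OSError("SystemParametersInfoW failed")
    return value.value / 1000.0   # e.g. 1.4 on a default install

if __name__ == "__main__":   # Windows only
    print("ClearType contrast:", cleartype_contrast())
```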
I was debugging blurry text on Windows at a previous job where we used Electron to develop a softphone application, and could not understand why the lighter text was harder to read on Windows. That would settle the debate.
I hope they will come up with better integration with Windows so these differences will disappear.
It's visible on a phone, though the why isn't clearly seen, because that's not how you compare such poor-quality pics - you need to flip back and forth, preferably zoomed in (a common failing of most of these kinds of blogs is not enabling that). Then you can see the deficiencies on the left and the reason it looks so thin - in letters like 'i' the vertical bar is semi-transparent, not solid.
Embedding that image in a blog where it has to be resized down is silly since it makes the change impossible to see. At full size I can easily see the difference between the two though.
To me the one on the right looks slightly darker/bolder, but not qualitatively better. The stems on the letter "m" in "<meta" in lines 2 and 4 are badly hinted (smeared/mushy looking), and drawn differently even though they're in the same horizontal position in both lines.
Whoah, for me the left looks unreadably blurry... but it sounds like that's only me? If other people saw it I have a hard time believing they wouldn't mention such a significant difference and just talk about "I stems" instead.
It's a very subtle difference but noticeable in the anti-aliasing.
I don't have Windows right now, so I haven't tested whether the change is closer to Firefox - but Firefox always had some heavy antialiasing on Windows, which I wasn't a fan of.
Funny, yesterday I noticed this and thought "huh, it's like the text got a bit sharper" and then immediately dismissed it as something I must have made up...
It's worse. Windows aggressively alters the shape and overall character of fonts to perform anti-aliasing, making them appear sharper on low-DPI displays.
Linux is a bit goofy with this too, but still looks better than Windows. macOS handles it best on high-DPI displays, but because subpixel anti-aliasing is no longer used, fonts can be a little blurry on low-DPI displays.
It is a matter of personal preference. I personally love the look when there is no hinting at all, even though this means blurriness at the pixel level. Windows prioritizes crispness. I hate that and I think it looks worse.
On the other hand, back in Mac OS X Leopard, the OS let you adjust the level of font smoothing. I preferred the strongest level of smoothing. Many designers hate that because it is akin to faux bold. I didn't like that Apple removed it in Snow Leopard. These days it hardly matters though.
It's partly a matter of taste, I know people who prefer it. But historically Apple has paid the most attention to high-quality text rendering and Windows' approach has been... utilitarian, with stuff like ClearType mostly optimized for 'crispness' and detail at the expense of actually looking good consistently, IMO.
> One piece of feedback was significant—many Edge users shared that text appeared "washed out" and that it didn't look consistent with text in other parts of Windows.
> Text looked washed out on Chrome on Windows pre-132.
> The team took this feedback seriously and did some investigation.
Wait, does "the team" have no eyes/visual tests of their own? How can you seriously make a claim about them taking it seriously when they failed to notice this degradation in the first place? Or, for that matter, that it took so long to fix
> It was evident that the text contrast value needed to increase, but data was needed to determine how much to adjust it.
No, you adjust first until it's not "evident" anymore and then waste years on consumer research
One of the things that frustrates me is that there is no clear, unambiguous definition of what is correct in font rendering. Presumably typeface designers have some specific intent on how heavy strokes and stems are, but then that somehow becomes ambiguous when a font gets rendered on screen.
Asking for user feedback feels kinda pointless in that context; does your average user know what some random font should look like? It would be better to ask type designers for feedback on how their own typefaces are rendered.
There is actually a definition of what is correct, and I'm pretty sure Adobe Photoshop does it in one of its modes.
It's just pure grayscale antialiasing of the underlying letterform, with the correct gamma used by the monitor. Or subpixel rendering if you want, as long as it matches your screen's actual LCD layout.
The issue is that that's not necessarily what's most readable. So various forms of hinting and darkening can be introduced to improve readability, and all of that is of course entirely subjective.
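A minimal sketch of that first part (gamma-correct grayscale AA), assuming an sRGB display; the function names are just illustrative:

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_text(coverage, fg, bg):
    """Mix foreground and background in linear light, weighted by the glyph's
    coverage for this pixel, then convert back to the sRGB value the display expects."""
    lin = coverage * srgb_to_linear(fg) + (1 - coverage) * srgb_to_linear(bg)
    return linear_to_srgb(lin)

# A half-covered pixel, black text on a white background.
print(round(blend_text(0.5, fg=0.0, bg=1.0), 3))   # ~0.735
# Blending the sRGB values directly would give 0.5, which is visibly darker;
# that is one reason gamma-naive renderers make black-on-white text look heavier.
```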
Vastly oversimplified, that'd be because fonts are individual strokes of varying width and free layout that get rendered on an array of pixels with fixed sizes and fixed layout.
Resampling comes with some aliasing/information loss. Font rendering is about picking which losses you're willing to take.
And you can't just "ask designers" because the trade-offs are different based on resolution and pixel layout. And they change as you scale the font up and down. ("Font hinting" tries to fix issues with that scaling. It's got its own downsides)
And because it's fun, there's also a perception component, where different people can just look at the same font and process it differently.
Font rendering is the art of making compromises that offend the least amount of people. A good starting point on the subject: "Text rendering hates you" - https://faultlore.com/blah/text-hates-you/
(I could swear there was also a great post by Raph Levien, but I can't find it)
Why does Chromium need to manually set default contrast/gamma values instead of just reading Windows' default ClearType values? The article mentions they added support for reading ClearType Tuner values last year, so why aren't the Windows defaults available through the same API? Seems unnecessarily complex to maintain separate defaults.
Developer of this feature and author of the blog post here. Good questions, and I can elaborate. As you mentioned, Chromium does read ClearType Tuner values and passes them on to Skia if they're set (I implemented this behavior last year).
However, most users don't run the ClearType Tuner. This announcement is basically that we are now using values that match the default ClearType contrast and gamma. There's no extra complexity or separate defaults. This behavior shipped later because it has a much larger impact on the user base.
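For anyone wondering what a "text contrast value" even does mechanically: roughly, it reshapes the antialiasing coverage before blending so that partially covered pixels lean darker and thin strokes read heavier. A toy version of such a boost (illustrative only; I'm not claiming this is Skia's or DirectWrite's exact formula) looks like:

```python
def boost_coverage(alpha, contrast):
    """Push partially covered (grey) samples toward full coverage.
    contrast = 0 leaves the antialiasing untouched; larger values darken stems."""
    return alpha + (1.0 - alpha) * contrast * alpha

for contrast in (0.0, 0.5, 1.0):
    print(contrast, [round(boost_coverage(a, contrast), 2) for a in (0.25, 0.5, 0.75)])
```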
This wasn't happening automatically because Chromium doesn't use DirectWrite for text rasterization, so they were missing default Windows behaviors like the ClearType Tuner integration and Windows default values.
Windows font rendering, especially for CJK characters, remains problematic even with the transition from Skia to DirectWrite. Fine-tuning the ClearType tuner doesn't achieve the same text weight as MacType; the characters still appear too thin. Moreover, Microsoft's choice of UI fonts is also poor.
Would the same teams at Google be interested in implementing better line breaking? The Japanese text in the screenshot in the article breaks words in half. There's a library that does it already; it just needs adding to Chromium: https://github.com/google/budoux/
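If anyone wants to experiment, the Python flavour of BudouX is tiny; as I understand its API, segmenting at phrase boundaries looks roughly like this (the sample sentence is just an example):

```python
# pip install budoux
import budoux

# Split a Japanese sentence into phrase-level chunks that are safe to break
# between; a layout engine could then insert break opportunities (zero-width
# spaces, <wbr>, ...) only at these boundaries instead of mid-word.
parser = budoux.load_default_japanese_parser()
print(parser.parse("今日は天気です。"))   # e.g. ['今日は', '天気です。']
```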
Might be worth noting that Chromium seems to only use subpixel rendering on high-contrast text. I've also seen sites that don't use it for some other unknown reason, for example Microsoft Teams.
Chromium can't render fonts at all after they got rid of GDI rendering. Just compare Verdana 12pt in Word/LibreOffice and Chrome, they're not seriously trying to tell me that's the correct font? The letters don't even look the same.
Historically Firefox did have a GDI fallback for certain fonts specifically to address this user complaint, in order to make them look "right". I don't know when/if it was removed.
The pref name is 'gfx.font_rendering.cleartype_params.force_gdi_classic_for_families'. I believe it may have just enabled a GDI compatibility mode for DirectWrite, but I'm not sure if that uses the GDI rasterizer under the hood or not.
It most certainly uses GDI for Verdana. Firefox tried to disable that by overwriting the previous default setting for gfx.font_rendering.cleartype_params.force_gdi_classic_for_families and gfx.font_rendering.cleartype_params.force_gdi_classic_max_size, but restoring them fixed that.
Also this is not important. What's important is that DirectWrite does not render this font correctly, if you compare it with LibreOffice or Word.
What about all the designers that used to handle this issue by tweaking their font weights, colors...?
Changing long-standing behavior in a product with a lot of users will break things for some people (those using Edge, with an eye for these details...).
Web designers must simply accept that they are working in a medium that is not amenable to (sub)pixel-perfect reproducibility. It is not—and never has been—safe to assume that future versions of a given browser will render identically to the current one. It is not safe to assume that the browsers you are fine-tuning for will remain popular in the future. It is not safe to assume that other users with the same browser version (or a browser claiming to be the same version) as you are using will get an identical rendering as you get on your system.
As a web designer, you must accept that the browser is fundamentally not yours to control. It is an agent acting on behalf of the user, not on behalf of you.
Do you genuinely believe those 4 years were spent writing code?
This is why wet behind the ears tech boys can't be trusted any more. They really think that the hardest part of software, the thing that slows us down, is writing code. Really!?
Kid, I'll offer you some free advice. Writing the code is the least difficult part. Deciding what and how to write (and what not to bother with) is a critical step that has nothing to do with writing code. Designing the architecture, ensuring it's correct, leaving something well written and maintainable for the next grunt, documenting code so it's easy to understand and to review, ensuring your code supports all the desired use cases and interactions with users and other code/apps/etc, iterating on it until it's polished, and then actually maintaining it and fixing bugs that are inevitably going to be there if the code is sufficiently expansive. Those are just a few of the things that aren't "grinding code" as you want to make software.
Read a programming book for Pete's sake, and stop assuming you can just fake it till you make it because you are part of what's destroying software for the world and it's got to stop.
> After a lot of user research, members of both Edge and Chromium determined that a contrast value of 1.0 closely matched the text rendering of pre-Chromium Edge and looked consistent compared to other native Windows applications.
So they decided that not deliberately lowering contrast will fix the contrast problem. So this basically stems from that annoying designer trend of grey text on grey backgrounds that was in turn based on the false assumption that our displays have infinite contrast.
One has to only look at the... storied (to say the least) history of Microsoft's first-party 2D graphics/text APIs to see it took a lot of iteration to get to this point; and it leaves me wondering when the next inflection on the learning curve will be released...
[1]: https://github.com/microsoft/PowerToys/issues/25595