Designers obviously love their Macs, and while using them they produce designs that sit at the borderline of what's ok on their screens and are then unusable on lots of others.
I have a big, bright, high contrast monitor and good eyesight (in theory the best case), but it's connected to a Windows PC and various websites render as hard to read. At least I'm savvy and so can use StyleBot to fix the worst offenders!
In college, I helped design part of my student newspaper. Images that looked beautiful and had perfect contrast levels rarely survived the shitty printer that darkened everything.
It's weird to think about, but designers should never put all their trust in what's on their screen.
If you aren't empowered to trim the content, you can at least tone the type down in contrast & size to make it appear to be less. Every time I see subtle grey text on a marketing page, I can hear the designer thinking, "no one is really going to read this anyway."
Just like in speaking, my advice is to make your words bold but few.
To be clear, this is something I'm mulling over, not a hard conclusion, but here goes...
Creating a simple, server rendered, text based web site may be exactly what the users need, but from a developer's point of view, it can also be a career risk. A rapidly increasing percentage of development jobs out there now require experience with a front-end framework. As a result, developers need this experience, so they start adding front end frameworks in order to 1) gain that experience, and 2) document that experience in a real world project. Remember, a project isn't just a project, it's an audition for your next project. This is how tech recruiting works. So developers may be adding new bells and whistles that are not only unnecessary, but actively harmful to their current project, largely because they are necessary to their resume, and it's actively harmful not to have them there.
In short, one of the reasons the web is becoming unreadable is that web developers can't create the resume they'll need for their next gig by using the best tech for their current one.
I'm getting long in the tooth enough to have seen this a few times. Companies required "EJB"; it wasn't enough to have experience with Tomcat and standard Java. Later, Spring, Struts, Hibernate, and iBatis were required. These days, it's Ember, React, and other rapidly evolving JS frameworks.
I want to be clear that I think many of these frameworks are actually pretty excellent, for projects that need them. The problem is, if your current project doesn't need them, you still have a strong incentive to bring them in.
I think we increasingly need a stronger separation between functionality-oriented applications that operate in the browser and media-based content (documents/videos/images/articles).
The web has become an unnecessarily hostile platform for web applications: you are standing on a thin layer of glass the whole time, juggling so many combinations of environments, browsers, capabilities, devices, screen sizes, internet speeds, and everything in between.
For example, there's an insane amount of baggage involved today in deploying a decent production web application: bundling the assets, caching, vendor prefixing, feature detection, and so on...
So much of that could go away if we had a more application-focused platform on the web.
So someone could say: you want a web-based customer service desk application? Here, install this 30MB bundle once and let us focus on everything else. It's not a big ask.
It just adds unnecessary complexity and baggage.
As a long-time web dev, I'd like to see the web platform for applications evolve into a friendlier and more predictable environment for developers, more similar to the mobile app platforms.
That was basically Java applets, and then later JNLP and Flash.
We didn't like JS back in 2002 either - the Java ecosystem was much more mature and easier to program with. But we put up with it because users expressed a very clear preference for DHTML apps over Java applets, and money follows users, and devs follow the money.
All the tree-shaking, code-splitting, minifying, etc. is there to preserve this "loads quickly" property as JS apps have gotten more complex. A lot of sites do it poorly, but think of those sites as prey for an enterprising startup willing to put the extra effort into snappiness.
The method we have makes sense for frequent hopping between different documents on different sites, spending a few minutes on each - as was the case with the original "web of linked documents".
But now there's undue baggage and pressure and complexity imposed on application developers because on one hand they are developing sophisticated applications that people use daily for many hours.
On the other hand, they have to worry about the most trivial things, like whether some library or bundle file is 200KB or 1MB.
A trivial thing turns into something complicated because you need fallbacks and polyfills and a fallback for your fallback, and you know the rest. It hurts the developers, but also the users, because time and effort that could be spent on the product is spent on this kind of stuff.
That's why I think the separation of content and application is becoming more important.
If you want snappy media that is not bloated, and is responsive and readable and open and all that, fine.
But if you want a full-blown CRM or spreadsheet or customer support desk software and the like, something you spend your day on, let's give the developers a break: give them a predictable environment with a robust method of application delivery, without bean-counting the bytes, and empower them to focus on the real problem they are trying to solve for you.
Should desktops/laptops be the same? Is it only because desktops/laptops started long before the internet was fast enough to support such apps that they don't run like phones/tablets?
I don't want a locked down desktop/laptop. I'm only pointing out that more people than own desktops/laptops are constantly downloading and installing 30MB bundles.
I can think of at least two instances in the last week where I've had a desire for the functionality offered by a big-name product (Groupon and Zillow), went to their mobile website, was confronted with a nearly-nonfunctional mess and a prompt to download their app, and then said, "Eh, I don't care enough to bother and wait for it to download."
Insofar as your web app really is something that should be a full-scale app (Google Maps being the classic example), this is a great trick. But as an industry, we have an incentive to shove complex apps down people's throats when the use case requires plain old media with just a bit of dynamic stuff for navigation and forms.
I prefer web apps because when I go to a bar and want to order from my table, or want to see the schedule for a conference I'm attending, I don't want to go to the trouble of installing a new app for something I'm only going to use once. This used to be a lot worse when apps asked for a load of permissions upfront which it wouldn't be sensible to grant, but these apps still often take up around 100MB in storage, which can mean having to find something to uninstall if you have limited space.
That's a pretty horrid mischaracterization. Users may download and install apps that are tens of megabytes per bundle, but at least the downloaded blob is then entirely local. The apps don't redownload themselves on every startup.
Let's compare that to a bloated megalith webpage with framework machinery, a dozen trackers and $deity knows what else. "ETag has changed, please wait while we drip feed you this >20MB JS+CSS gob. Oh, and btw - you can't see the page text at all until the download has been processed."
We already have that platform, the desktop. Sure the distribution side of things could be improved, but there is nothing wrong with the platform.
Here's the thing - people don't like to bring in a more complex technology simply to do exactly the same thing as they could do with a simpler one. They need to justify its presence in there somehow.
That's the connection I'm getting at here (and once again, it's something I'm mulling over, not a hard conclusion I've reached).
It's rarely the case that you change design because you selected a new technology; rather you usually select a new technology to enable changes in design.
Take this block of text in the article:
"I thought my eyesight was beginning to go. It turns out, I’m suffering from design."
Instead of leaving it a legible black and letting the divider + italics convey the separation from the article, they went with rgba(0,0,0,0.5) on a font that is hard to read against a white background at that shade of gray. Try removing those sorts of CSS rules from the Wired article and you'll find it much more legible.
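For what it's worth, rgba(0,0,0,0.5) over a white background composites down to plain mid-grey. A tiny sketch of the standard source-over blend (the helper name is my own):

```javascript
// Composite an [r, g, b, alpha] foreground over an opaque white backdrop
// using standard source-over blending (hypothetical helper for illustration).
function compositeOverWhite([r, g, b, a]) {
  // Each channel: foreground * alpha + white (255) * (1 - alpha).
  const blend = c => Math.round(c * a + 255 * (1 - a));
  return '#' + [r, g, b].map(c => blend(c).toString(16).padStart(2, '0')).join('');
}

console.log(compositeOverWhite([0, 0, 0, 0.5])); // "#808080"
```

In other words, the "black" text actually renders as 50% grey, which is why stripping the rule restores legibility.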
They repeat this sin whenever they quote someone such as:
A color is a color isn’t a color…
…not to computers…and not to the human eye.
They also put a ton of whitespace around the quote, which is already enough to separate it. The absurdly light gray is annoying and not required.
One thing I notice in my work (in product management) is that people naturally gravitate towards shiny things. So we design things to be shiny in order to attract those people.
At various points, a specific aesthetic gains popularity because, for well-founded reasons, it increases usability and readability. Then everyone starts to imitate it, partly because it works, but also partly because people are drawn to it. Then frameworks are created to enable teams to rapidly replicate the style.
At a certain point, the frameworks are good enough to just replicate the style in basic settings with little input needed from someone who is actually thinking really hard about the design. In many ways that's a good thing because it lowers the barrier to entry. It's much easier to make things because the framework drastically reduces the amount of wheel reinventing you'd need to do. But you end up with designs where nobody really thought that hard about it - and so you end up proliferating potentially bad defaults encoded in the frameworks.
I'm personally much more willing to believe that the decline in overall design quality is due to negligence born of convenience.
The web as a publishing platform shouldn't need React or 2MB of js libraries
Web apps are a different story but have also different requirements
I've been mostly a web developer for the past 20 years and I can't understand why there isn't a good solution yet
Look at 4chan or Craigslist
They're highly successful and they serve pure static (ugly) html
I think the reason is simpler, more like fashion: most people who are in a position to play with fonts and colors have absolutely no idea what they are doing and make decisions by copying somebody else who they perceive as cool, and there is always pressure to make it look cool, so it spreads.
To be clear, the article is almost solely about contrast between text and background colors, not JS heavy frontends. But I think what you've stated is very true and is also my opinion about the current state of web frontend.
Like another user said here, the contrast is a result of designers working with a page with too much content, and trying to reduce the feeling of content overwhelm. I don't think heavy JS frontend is directly related to that. But what you've said should not be discounted.
"My plea to designers and software engineers: Ignore the fads and go back to the typographic principles of print — keep your type black, and vary weight and font instead of grayness. You’ll be making things better for people who read on smaller, dimmer screens, even if their eyes aren’t aging like mine. It may not be trendy, but it’s time to consider who is being left out by the web’s aesthetic."
Is that such a terrible thing? HN seems to me highly accessible and not terribly aesthetically tuned, and it's one of my favourite sites to visit, purely from a navigational viewpoint.
IMO ever since we were rescued from the old-school print designers & design-challenged developers of the Early Web (up to ~2005-07) by the new wave of digital-first designers, web design's done nothing but get worse.
They don't care about the usability of their creation, they only want something which looks good for their portfolio.
Have you ever noticed that in science fiction and other speculative genres, computer systems almost invariably have clean, consistent user interfaces? That's what consumers want: relatively simple UIs that work the same for all the content they deal with. That's why things like the Bloomberg Terminal are popular, and that's why people fall in love with the UIs in videogames.
What retailers and other businesses want is to be different and stand out from their competitors. Since the costs of entry to the web are much lower than those for print (billboards or magazine publication) there is correspondingly more visual clutter. But this isn't a new thing with the web, it's been around as long as there have been magazines. I suffer from severe ADHD; I tend to avoid large racks of magazines in stores because looking at all that imagery and typography at once is very similar to being screamed at by a crowd of people.
This does not mean I hate all graphic design and typography, far from it. But where we went wrong (imho) is that the web became about making every page distinctive instead of concentrating on hypertext and letting people customize it at the client end. Facebook beat MySpace because FB is consistent and graphics are subordinate to the white-and-blue color scheme. Wikipedia succeeds in larger part because it's visually consistent. GeoCities is a graveyard because it wasn't.
The basic problem is that absent other incentives money typically flows towards novelty. But competition alone rarely yields optimal results; rather it tends towards a lower common denominator.
To your analysis I'd add: companies, because of business pressures, mostly try to milk you instead of helping you. I sometimes call it "extracting value from users" instead of "delivering value to users". The consequence of that is, they want to control as much of the interaction with you as they possibly can. On the web, it means they want to control the rendering phase. As a user, this is precisely what I don't want them to do. I want their data, not their presentation decisions. This conflict is pretty visible when it comes to ad blockers and reader modes, and it's also one of the big reasons people resort to scraping websites. In a perfect world where companies cared about delivering value to users, scraping would not be necessary.
I'm not a UX person by any margin, but I thought about what you said and I... agree. I wonder if there are any UX studies done on whether customers prefer consistency in UX or not.
1. The belief that very high contrast black on white contributes to eye strain for some people. I think this one is reasonable - read the literature on how to help dyslexics and this tends to come up. I tend to use a very dark grey instead of black - about 90%.
2. A desire to create visual hierarchy without ending up with a horrific fruit salad... visual hierarchies are important in text. You can create them by altering font, size, style or colour. Ideally you should be able to create the hierarchy subtly - so the reader understands what is important, while keeping the site looking clean and sophisticated (see point 3). Changing the greyness of text is one more tool in the box - and where hierarchies are complex, designers can find themselves hunting for tools.
3. Minimalist fashion. No one wants their page to look old fashioned. No one wants to establish a visual hierarchy that looks like a Geocities page.
So the temptation is to keep colours muted, to keep style and font changes to the minimum. Using shades of grey to provide almost subliminal cues sounds cool.
The answer to the first question is simple: it doesn't do anything Chrome doesn't, so there's no particular reason to use it unless you care about maintaining competition among browsers.
The answer to the second question is also simple: start acting in the interest of the user! Get rid of the annoying sticky headers. Increase the contrast when it's too low. Increase the font size when it's too small. Word wrap when stuff is going to go off the edge of the page. Narrow the display width when text is too wide.
When I rebuilt my blog (http://flukus.github.io/) I tried to build it with reader mode in mind, but it's hard; there are some hidden heuristics that trigger its availability - you need a paragraph with at least 67 characters or something like that. So the articles work in reader mode, but the index doesn't. It doesn't work at all on file:// paths, so it's not great for proofreading.
As a developer I'd love to be able to put a meta tag in the page to always turn it on. There are a million other ways they could improve it too. I hope this is the future of the web.
Now, how to increase awareness of that feature?
Considering that reader view has its own dedicated button and all, I'm not sure how it could possibly be made more prominent without being annoying. The popup already was a bit on the edge of being annoying.
The problem is shortsighted business and design decisions that have prioritized squeezing every last fraction of a penny out of the page, borrowing against the user's perception of the site. In the long run, that debt is going to come due and the user is going to stop coming back to your site from lack of cost/benefit.
A couple of examples where IMO JS makes the page superior to one without
It doesn't seem to me that JS by itself is the issue here.
I'm glad I wasn't the only one bothered by that, especially after the author has just complained about reduced-contrast offset boxes.
There is less whitespace if you turn ad blocker off -- it turns out one of the things uBlock Origin is blocking is a social media widget on the left hand side. http://imgur.com/ip093TX
However, that ad in the middle of that screenshot pops in and out of what you are reading as was described by the OP. It's very annoying.
With adblocker on the whitespace is not optimal, but at least the user experience is not terrible.
The same goes for font size. Due to the range of device sizes and display dot densities, small fonts can easily become unreadable. Mobile versions of web sites help only so much. I find at least half of Apple's system menus to be unreadably small on my iMac 5K monitor and my Macbook retina. This illegibility needs to be user-fixable in a systematic way, and ASAP. Just telling the user, "Use an Accessibility service", is NOT the right response.
Then the author puts quoted highlights in a pallid gray through the article. I guess this could be intentionally ironic but I sorta doubt it.
(#7F7F8F on white = contrast of 3.9)
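That 3.9 figure can be reproduced with the WCAG 2.x relative-luminance and contrast-ratio formulas; a quick sketch (the helper names are my own):

```javascript
// WCAG 2.x relative luminance of a "#rrggbb" color (sketch).
function relativeLuminance(hex) {
  // Parse the hex channels into sRGB values in [0, 1], then linearize
  // each channel with the spec's piecewise transfer function.
  const [r, g, b] = [1, 3, 5]
    .map(i => parseInt(hex.slice(i, i + 2), 16) / 255)
    .map(c => (c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio against a white background: (L1 + 0.05) / (L2 + 0.05),
// where white's relative luminance is 1.0.
function contrastOnWhite(hex) {
  return (1.0 + 0.05) / (relativeLuminance(hex) + 0.05);
}

console.log(contrastOnWhite('#7F7F8F').toFixed(1)); // "3.9"
console.log(contrastOnWhite('#191919').toFixed(1)); // "17.6"
```

At 3.9:1, that gray falls below WCAG AA's 4.5:1 minimum for body text, while a near-black like #191919 clears even the AAA 7:1 bar with room to spare.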
I love that, nice strong contrast. Turn down the display brightness if that's too strong…
Just take a look at this screen capture (not mine, found in a 2-minute search):
It may not look 2017, or even 2005, but who effing cares! I could stand in front of it for hours, as I do with my dark desktop, while anything resembling a white page hurts my eyes.
I'm aware that printing could be a valid argument against (dark background == huge toner/ink waste), so when printing, change the theme on the fly so that colors too close in their component values won't be used as FG versus BG, or as highlighted FG versus FG.
If your company supported basic a11y then you would have an argument to use against designers that like low contrast text. It might also be a chance to educate them on WAI standards if they don't use them for their work yet.
Maybe a bit "extreme" - an "artistic site" has all the rights in the world (+1) to be "artistic", but maybe it has been a tad overdone.
Among other poor style choices, they abandoned text borders some years back - text blocks are within tiny fractions of character width from high-contrast (often just black) page features.
I guess it's what they call "edgy", literally. So hard to read!
But I may be the last remaining subscriber anyway.
I don't use an ad-blocker.
The quality of their journalism usually isn't much better. Was it this time?
#191919 is "real black."
And typography is a completely different subject to painting. People have aimed for darker blacks from the very beginning of ink technology. Even now, people put up with the hassle of dip pens instead of fountain pens because they allow for darker black inks. Extremely high contrast is considered a mark of quality in printing, so why should it be any different on screen?
The web has become unreadable indeed, Wired; I'm not sure contrast ratios and typography are the first things that come to most people's minds, however.