A div that looks different in every browser (twitter.com)



Here's an animated version with some transparency https://codepen.io/anon/pen/PaMVYX?editors=0100

The root issue is that the browsers are being asked to draw a line that's 100px in from the edge of a 100px box -- it's an impossible task, so there's no perfect solution.
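
For reference, the essentials boil down to something like this (a sketch; the exact markup on the codepen may differ):

  div {
    width: 100px;
    height: 100px;
    outline: inset 100px green;   /* an outline as wide as the whole box */
    outline-offset: -125px;       /* asked to start 125px inside that box */
  }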

This CSS test suite http://test.csswg.org/suites/css-ui-3_dev/nightly-unstable/x... suggests that browsers should treat it as though it were the largest offset that made sense at the current outline width, giving you https://codepen.io/pjwdev/pen/vrovPR?editors=0100

Safari achieves this if the outline is the same colour the whole way around, but doesn't get the individual sides in a sensible arrangement.

Edge doesn't match the test, but it does another reasonable thing -- it takes the offset as a given, and uses the biggest outline width that makes sense.

Chrome and Firefox...


The negative value case is:

> Negative values must cause the outline to shrink into the border box. Both the height and the width of outside of the shape drawn by the outline should not become smaller than twice the computed value of the outline-width property, to make sure that an outline can be rendered even with large negative values. User Agents should apply this constraint independently in each dimension. If the outline is drawn as multiple disconnected shapes, this constraint applies to each shape separately.

There's a lot of RFC 2119 SHOULD there, unfortunately.
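
Worked through with the demo's numbers (hedging, since these are SHOULDs): the outer edge of the outline shape sits at

  outer size = box + 2*(offset + width)
             = 100px + 2*(-125px + 100px) = 50px

which is below the 2 × outline-width = 200px minimum the quote requires, so the effective offset would be clamped to -50px, i.e. -min(width, height)/2 for this square box.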


Joke: A man walks into a picture framing shop and says: "I'd like a 10cm×10cm frame for my 10cm×10cm picture, but make it negative 12.5cm away from the picture." The shop staff is confused, but makes a wild guess: "So you want a frame with each border 2.5cm on the other side of the picture than it would normally be?" The man just takes a note and goes to another shop.

He collects the wild guesses from various picture framing shops and posts them on Twitter, getting a lot of attention on Hacker News as well.

edit: present tense.


This sort of badinage is what makes HN so endearing. Thanks!


I guess the post is supposed to demonstrate that there should be no "wild guessing", and instead each browser should be producing the same result.


And here's what it looks like in Servo: https://i.imgur.com/lv3xRhi.png


I was 50/50 on whether this would be a legitimate rendering or a cheeky screencap of a panic message. :P


What I found interesting was that I later noticed Matt Brubeck had already done the same thing a couple of days earlier [0], yet mine in the nightly I'd just downloaded was already different.

[0] https://twitter.com/mbrubeck/status/1015310959789740032


Also, Firefox nightly with WebRender: https://i.imgur.com/Ffg0bS7.png


Are those just compression artifacts or what?


No, that's just scrot being buggy.

Fixed: https://i.imgur.com/jAVrnBE.png


Yes, the actual render is crisp for me.


Here is how it looks in GNOME Web Browser: https://twitter.com/niu_tech/status/1016087312554450944


On one hand I think this is really neat. On the other hand, this is why I have never taken front-end development seriously enough to spend any real time learning it [0]. From the get-go it always seemed like the rules were arbitrary and involved lots of guessing to get things right. I know this isn't actually true, but it felt true enough early on to poison my mind to it.

[0] Not something I'm proud of and I know front end development is serious business so don't @ me.


" I know this isn't actually true "

It's true.

A confluence of vague definitions, poor implementations, bad documentation... and this is the web we have. This is the tip of the iceberg, and the painful daily life of anyone doing HTML5 work. It's 'not fun' for anyone who's been exposed to regular programming, because the problems you spend your day solving mostly should not be problems to begin with.


I agree. I recently started learning front end development. My reaction after grasping what modern frameworks do is “my god, what have we done?” It seems like we’ve expended a huge amount of effort to hack the web into something we can build SPAs in (and what an insightful TLA that is).

I wonder if this sort of situation was inevitable or whether, given sufficient coordination and standardisation, we could have built a better platform. There are some problems that would have to be overcome regardless (e.g. arbitrary display sizes [1]), but languages like CSS and JavaScript look like very bad decisions.

My first instinct is that a new platform should be based as far as possible on tried and tested languages and technologies, but one that also allows for fast-paced browser dev and a path for future evolution.

For now I’m just embracing it and learning. It’s a lot of fun to see how the web works nowadays - I’m already beginning to understand some of the types of bugs that sites exhibit, and why designs are as they are. One of the outcomes has been to develop serious respect for the developers of MS and Google Web apps - eg OneNote on the web - as what they’ve achieved is pretty miraculous when you consider the stack. And those poor web browser devs!

[1] The idea of allowing displays of arbitrary dimensions is a very strange one if you consider traditional graphic design; this may be an instance of us geeks making decisions that don't make sense to more experienced professionals from other backgrounds. How much did graphic designers, UX experts, psychologists etc. impact the evolution of the web standards? I don't know.


"How much did graphic designers, UX experts, psychologists etc impact the evolution of the web standards?"

They didn't. It's just a mess.

We want to believe there is genius in the commons, but really I don't think there is a lot of coherence. It's just some small group thought it would be good to do this or that, and it made sense at the time.


This is what us older people would like to believe. Meanwhile the young webdevs are having a blast ignoring us and building the apps of the future.


Nah, it's mostly true (I'm one of those young webdevs), but I'd attribute the pain points in front-end development to slightly different problems than the state of browser inconsistencies and spec problems.

IME 70% of the problems people run into are self-inflicted due to overengineered solutions. I spend a lot of time lurking front-end IRC channels, and most of the problems that get posted are stuff that I have never had trouble with, with libraries I'd never even consider using, that I don't have the first clue how to fix without going back to square one and re-engineering the solution from the ground up. The advice given by people who do attempt to help with these problems is often awful and creates more confusion.

For the most part, I don't think people realise that libraries add to the complexity of their software, rather than abstracting it away. Every front-end dev desperately needs to go and read the Law of Leaky Abstractions, because they're getting smoked by it on a daily basis and they don't realise.

The other 30% of problems come down to what GP mentioned: ambiguous specs, browser inconsistencies etc. Stuff like the relationship between x-overflow and y-overflow, little flexbox niggles like different behaviour depending on whether flex-direction is set to row or column, etc. This stuff can often be worked around with simple but unintuitive fixes (usually some fuckery with negative margins and such); the two difficulties lie in discovering the solution to begin with (there are boatloads of bad and outdated advice everywhere), and running into these solutions when they haven't been properly documented (usually the case if some junior programmer has just ripped the hack off SO without understanding the theory behind it).
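
(For anyone curious about the overflow relationship mentioned above, a minimal illustration -- per the CSS Overflow spec, as I understand it, visible on one axis computes to auto when the other axis is non-visible:)

  .clip-y-only {
    overflow-x: visible;  /* computes to auto here, so you can still get a scrollbar */
    overflow-y: hidden;
  }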

The secret to being good at front-end dev is to develop strategies to handle these problems, and to pick your resources very carefully (much as you would in any other ecosystem beset with noobs, like PHP back in the day).

I believe (based on my experience) that it's possible to build very successful software with very horrible engineering. Good and successful aren't necessarily synonymous in the world of big VC money. A great example is this: I need two hands to count the critical failure bugs I've run into with Uber's products in the last year. By critical failures I mean things that have flat out stopped me from being able to use it. I strongly suspect (a subset of) their engineering sucks (sorry to anyone here that works for them!). But hey, I don't own a squillion dollar company so what do I know? ;)


Yeah, and then they all jump around React as some amazing achievement, having rediscovered how '90s native UIs do event handling.


React isn't really anything like 90s UI libraries. It is, however, very much like the fairly recent native immediate-mode GUIs that have come out (https://github.com/ocornut/imgui, https://github.com/PistonDevelopers/conrod, etc).

The core feature of React is that your UI is a pure function of your application state. In (e.g.) the Win32 UI, you would receive events, and then need to manually transition your current UI state to the new UI state (hide this button, disable this input, etc).
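
A minimal sketch of that contrast (illustrative only; the component and prop names are made up):

  // React-ish: the markup below is re-derived from state on each update.
  function Panel({ saving }) {
    return saving ? <p>Saving...</p> : <button>Save</button>;
  }
  // Win32-ish: you'd instead handle an event and mutate the existing UI,
  // e.g. EnableWindow(hButton, FALSE) / ShowWindow(hLabel, SW_SHOW).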


Smalltalk UIs used a pattern called dependency propagation to make objects update themselves based on system wide notifications.

Similar idea was used on Oberon System 3 Gadgets toolkit.

Also java.util.Observable based on the same Smalltalk ideas was already part of Java 1.0.


That's still event handling. You can do basic binding between outputs and values, but more macroscopic changes (closing a dialog, changing tabs, etc.) require tracking the state your UI is in, and manually transitioning.

This is markedly different than React, where the framework diffs the desired UI state (which is a pure function of your application state), and the actual UI state. The minimal number of UI operations necessary is then applied to make the latter match the former.

ImGui above works the same way, which makes it very popular for devtools in game development.


ImGui is quite similar to MS-DOS game UIs, before we migrated to Windows.

The source code of quite a few ones is available around the Net.


Event handling isn't special in React. One-way data flow and UI-as-a-function-of-state are. Which 90s native UI toolkits do that?



and think vi is great


> having a blast ignoring us and building the apps of the future

Building technical debt for the future I tend to think, which I guess is good for keeping future devs employed.


My experience is that young webdevs don't even know the dystopian matrix they live in, and when they discover 'regular programming', in many ways their perspective changes. Both web and regular dev have their faults, but the king of Byzantium is HTML5.


I take it you don't think much of the various libraries/indirection layers/frameworks that are meant to save you from browser-specific tuning?


This isn't really a spec thing. It's more a manifestation of undefined behavior. Even C++ compilers have undefined behavior.


I worked in web standards bodies for many years (mostly TC39, but also the initial WebGL spec, and random clumps [mostly canvas] of the DOM specs).

One of the goals of all the web related committees for more than a decade has been to ensure that there is no undefined behaviour. Undefined behaviour is exactly the reason we used to have to have different versions of a webpage for every browser.

The only intentionally remaining ambiguities are in areas where there is significant existing content that depends on mutually incompatible behaviour, e.g. content that does one thing in browser A and another in browser B, but is using the same API in each path yet expecting different behaviour. I can't recall any specifics off the top of my head any more, but a classic has always been for(in) enumeration along the prototype chain when properties are being added and removed. @dbaron or @brendaneich (is that his account? I can't recall) may recall some of the others. Prefixes (which everyone on HN hates) solved this problem: they make it possible to keep the old non-spec API around without permanently screwing up the final specification. (Everyone points to those old -webkit-gradient properties, but forgets that that API was much clunkier to use than the final gradient spec. If the prefix hadn't existed, then the WebKit API would be the permanent one that everyone would have to use.)
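
(To make the -webkit-gradient point concrete, the old prefixed syntax next to the final spec -- from memory, so treat it as approximate:)

  background: -webkit-gradient(linear, left top, left bottom, from(#fff), to(#000));
  background: linear-gradient(to bottom, #fff, #000);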

The other big one where specified behaviour is allowed to vary is WebGL. I was kind of responsible for the hated "only support the minimum subset" rules of version 1, but I believe modern specifications allow an implementation to vary according to hardware. The only other place in WebGL where I made people (GPU engineers, mostly) unhappy was not allowing undefined behaviour when accessing beyond the end of an array in a shader -- there were voices that literally wanted to allow this to be undefined for perf reasons, even when shown that you could read data from other execution environments (this was mind-blowing). The compromise in the end was allowing an implementation to return either zero or some other constant defined value (I think this would fall under the C/C++ "unspecified behaviour" banner).

Anyway, in general undefined behaviour is bad on the web, and all web spec authors try to ensure that every edgecase is covered explicitly so that we never get stuck back in the terrible 2000s again.


CSS outlines are still only vaguely defined, and tables even less so (I mean, table-layout: auto, which is the initial value, is only now finally getting specified; it's probably the least defined but most relied-upon bit of the web platform now). There are definitely still places where undefined behaviour lives, especially around CSS.

Without actually digging into it (I'm on vacation, damnit), I'm pretty sure all renderings are valid per CSS 2.1 (though I don't think they are per CSS UI 3 or 4? But maybe that's still a SHOULD-level requirement, given people wanting to publish as REC).

The CSS WG has been pretty poor at actually tightening up specifications for old features (on the face of it, due to lack of editorial time, which really points at a separate problem).

(Also, given the for(in) example, as I'm sure you know but for the benefit of others, note that iteration order of each object on the prototype chain is still undefined, and not interoperable.)

Some other good examples of things that have been standardised over the past decade: HTML parsing(!), what specific DOM exceptions are thrown all over the place, when the various document load events fire.


Sorry, yes, for(in) over prototype chains is where mutation of the property list actually becomes visible -- mutating own properties on an object while iterating is specified IIRC; it's what happens when own properties alias properties in the prototype chain, or when you mutate properties in the prototype chain, that isn't. I blame Brendan :D


Pretty sure mutating own properties is undefined, but I'm years out of working on any JS VM and only vaguely pay attention (and looked into this a few months ago and was surprised to find iteration order still undefined!). I wonder how many unique behaviours we have with iteration order now… Is Gecko the only odd one out? Is there still any implementation that has different behaviour with dense/sparse arrays (by which I mean properties not the Array object)?


Own properties are well defined (and were in 3.1 IIRC). Deleted properties are not visited, added properties are not visited.

But behaviour is undefined when you delete an own property that shadowed a property on the proto chain, and when you add such a property back.

Also, the historic spec mandated that integer properties on regular objects be iterated in insertion order as well, but V8 chose not to implement that, as doing so broke their "indexed properties are just an array" semantics. Because of the huge perf impact, other browsers had to go down that same path.
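
A tiny sketch of the shadowing case (behaviour genuinely varies by engine, which is the point):

  var proto = { x: 1, y: 2 };
  var obj = Object.create(proto);
  obj.x = 3;                // own property shadowing proto.x
  for (var k in obj) {
    delete obj.x;           // proto.x is now exposed -- is it visited or not?
  }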


Nah, the spec has never defined it; it never even mandated insertion order. (I remember a bunch of discussions about this given we, in Carakan, decided to match V8.) https://tc39.github.io/ecma262/#sec-enumerate-object-propert... is the current definition of object enumeration.


But in C++, when there's undefined behavior, the spec specifically says there's undefined behavior. It doesn't sound like the spec is specifically saying there's undefined behavior in this case.


Is omitting a specification of undefined behavior the sane thing to do, though?

I was thinking that "undefined behavior" in a spec is a serious misnomer. It's paradoxical if anything is defined as undefined, and it lets you confuse it with actually undefined behavior.

If something is in the spec as undefined behavior, it's my understanding it could legitimately lead to, say, global thermonuclear war. No guarantees.

But if something is not in the spec at all, say the effect of having your "const" keywords bright blue, then the spec is not sanctioning the end of the world happening as a consequence. The entire world provides constraints of reasonableness and other relationships that must hold.

This seems like an important difference! The latter is what I would call literally undefined; the standard version of undefined is (a) inherently defined as undefined, and (b) falsifies itself, because nobody would actually use a compiler that could cause the end of the world, ever.

The discussion of undefined behavior, and the concomitant blaming of those who inadvertently invoke it, have gradually come to be associated in my mind with the blaming of people for self-driving car crashes. They may not be related exactly, but they're homomorphic. It seems like a popular and gradually spreading pathology in human thinking. Maybe it could be called "computism", after "corporatism".

I guess maybe the issue is that the world is mostly composed of "unknown unknowns" and it is incredibly attractive to a lot of programmers to declare them Somebody Else's Problem. But I'm tentatively suggesting you can never just disclaim those "unknown unknowns" and therefore you should never fall into the trap of trying to define the undefined. It's a vast ocean, not a few drops of water you can build a fence around. Unfortunately this sounds, even to me, like vague philosophical meanderings, but I strongly feel that there is something like an intellectual prion disease out there, and that impression has been building ever since I was browsing comp.lang.c.* in the 90s.


Undefined behavior exists in a spec for efficiency reasons. For example, accessing an out-of-bounds index in C remains undefined because having the runtime check bounds on every access is an inefficiency reserved for less efficient language specs like Python's.

Rust aims to solve these problems with something called zero-cost abstractions.


I'm not 100% on how explicit it is (IIRC, most of this is given by setting constraints on the rendering), but it's certainly deliberate that this is unspecified.


C++ compilers especially have undefined behavior.


Yes. But are you going to use 3 to 5 or more C++ compilers on a given project?

The worst part of these front-end issues is that, ultimately, the UX can become inconsistent, if not a mess. Given that nearly everyone interfaces via the visual UI, you'd think we'd have this sorted out by now.


If your project is a library, I'd expect that people are gonna use it with, like, maybe not 5 but probably 2 or 3 different compilers, and they're gonna be less diligent about upgrading their compiler versions than mainstream browser users.


Thanks. I wasn't thinking about it that way. Good point.

That said, the browser's impact on UX is essential. A lot more people use browsers than compilers. You'd think the UX would matter by now.


Does a single compiler work exactly the same on every architecture? :D


You're using the words wrong. The code may have undefined behaviour.

For example, a race condition is UB. The compiler won't invent locks that were not specified by the programmer in code using raw threads. As a result, CPUs will trash the memory.


Honestly, it's getting rarer and rarer to find things like this; most things that reasonable people want to do "just work."

You have to go to considerable effort to come up with a case like this.


What is the status quo on vertical alignment, if I may ask?



The code shown on that page actually doesn't work, though the code in the github repo does.

And this exposes the problem with the current state of the art: yes, you can probably get flexbox to do what you want [@] if you know how. But if you don't know how, trying to figure it out is still very hard. You should be able to write:

    <center>stuff</center>
or

    <vertical-center>stuff</vertical-center>
There is really no reason that should not Just Work. (The <center> tag has worked for 20 years, notwithstanding that it has been deprecated for most of that time.)

---

[@] What if I want to put something in the lower left or upper right hand corner?


https://codepen.io/anon/pen/NzQQJm

This is a gross abuse of html, but if you really want that syntax, you can have it.

I can't say I understand all the complaining here. You want to make $200K a year sitting at a desk, but learning flexbox or googling when you need it is too much effort?


Brainfuck has pretty simple syntax too, yet nobody wants to code in it because it's trashy and inelegant.

I'm tired of programmers who've become experts in building structures by ramming screws into wood with a hammer claiming that all is well with the world when you have powerful hammers and screws.

I think we have a whole generation of front end programmers who haven't seen a screw driver.


Where is this $200k wonderful place?


For 200k I'd consider working even on JS and front-end xD


Your code breaks the <center> tag: if you try to nest a <center> inside your <vertical-center> it doesn't work.

And my complaint is that I believe that computing should be accessible to everyone, and that simple things should not be made difficult just so that an elite guild can command above-market wages to fix problems that they themselves created. I don't believe in broken-windows economics.


Wait, front-end developers who use CSS created the problems of CSS? That's really funny. I mean, that's the most mind-blowing accusation I've heard in a long time. I would have to suggest you are commenting on a topic that you actually know very little about.

But at least you are entertaining.


Who do you think created CSS? CSS was not handed down on stone tablets by the gods, it was designed by humans, and I presume that at least some of the people who designed CSS actually used it (though to be honest I don't actually know who was on the committee). I don't know if it was their intent to make something Byzantine that served as an effective barrier to entry to doing front-end design, but that was in fact what they did.


Who created CSS as we know it? CERN. More or less.

Who first tested and implemented CSS? Browser programmers.

These days it's under the control of a committee. I think I would disagree with the notion that CSS devs are the ones to blame in this case. Sure, there's lots of blame to share for the problems. But to imply that CSS developers intentionally forced it to be complicated for purposes of job security is laughable.

After all, CSS is not C++.


I didn't say that was their intent, only that the current state of the art has that effect. And that I think this is a bad thing.



Cool, thanks!


You should not be centering things in HTML, that's the stylesheet's job.


Sure, what's the CSS to make a div be centered vertically?


display: flex; align-items: center;


Doesn't work unless you have an extra container div.


Sure, because "vertically centered" means relative to what?


I want the contents of my div to be centered inside my div.

Just like <center> works, but vertically. You don't need <foo><bar>content</bar></foo> for horizontal centering. I consider it a layering violation to have to do that nesting.


Maybe we are misunderstanding each other, but while you don't need an "extra" container, you do need a container with some known height to vertically center the content in: http://jsfiddle.net/c20gq4pz/
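
Something like this, for concreteness (the container carries the height; here the viewport height is used for illustration):

  <div style="display: flex; align-items: center; height: 100vh;">
    <p>vertically centered in the viewport</p>
  </div>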


This is true, but every 'modern' DOM that I've seen has loads of extra container divs precisely to allow post hoc layout.


To be fair though, this has been a problem for quite some time, and it mostly stems from many web devs not understanding block and inline correctly, and thus applying unneeded divs everywhere even when there are enough hooks for their styles already.


Or you can use the body: https://codepen.io/jodi/pen/oMvLaN


I have no objection to having a style sheet that defines what the <center> tag does. But it should at least be possible to have a center tag and a vertical-center tag. AFAICT it is not possible. At the very least you need both a center-capable container and centered elements.


I don't think it should:

You're always centering children with respect to something, and you specify that you want children centered on the thing you're centering them in, rather than on the elements themselves. This makes more sense, in my opinion, and as said in the sibling comment, it's two simple rules.

I do a tonne of flexbox and grid in my day-to-day, and these complaints are quite simply out of date. Almost any layout in CSS these days can be done easily and intuitively.


By out of date do you mean around 1 year old? I'm sorry but your statement will likely be out of date soon.


> trying to figure it out is still very hard

If you are learning CSS in 2018, then there is little chance that you didn't encounter flexbox and its capabilities. Nowadays there are many good, cross-browser solutions for common problems, but I have to agree: intuition is not a feature of CSS.


So should there also be a <rightalign> and a <leftalign> and a <topalign> and <bottomalign> and <spaceequally> and and and...?

If you want to lower left align you set align-items and justify-content to flex-end and flex-start. For upper right, flip those. (if you’ve changed flex-direction those will need to shuffle around.)

I seriously don't understand why having a different tag for every possible alignment would be somehow easier or more discoverable than just using css rules. Novices are going to have to be looking stuff up either way- I get the concern about discoverability but I strongly doubt anyone is just typing tags to see what works.
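
For the record, the corner cases mentioned above look something like this (a sketch, assuming the default flex-direction: row):

  .lower-left  { display: flex; justify-content: flex-start; align-items: flex-end; }
  .upper-right { display: flex; justify-content: flex-end; align-items: flex-start; }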


I would prefer halign=left/right/center/justify, valign=top/bottom/center. But yes.


Grid or flex.


Try writing C meant to compile in different compilers, on different processor architectures. Or ANSI SQL meant to run in a variety of RDBMSes. Web GUI is not the only bastion of arbitrary behavior.

I would wager, if your goal is to make as wide-spread, cross-platform GUI as possible, a Web GUI is the easiest way to do that, for both the developer and the users.


This is a great point and I appreciate you pointing it out. Cross platform development is hard regardless of which part of the stack you're developing for. The reason development for the browser stands out for me is that when I build a web app I can control to a certain extent what environment the server is deployed in and then I don't have to worry too much about different platforms. However I have no control over which browser my user chooses to use and that can have a serious impact on their experience. An impact that I have to account for.


It's pretty easy to control: just detect which browser and throw up a screen saying that browser isn't supported or "use at your own risk." I don't see any distinction between cross-platform support and cross-browser support.


That is not support, that is denial of service, and it isn't an option for most webpages.


C has much bigger baggage. I assume a big chunk of C in modern versions of compilers is fairly gotcha-free. But even the core basic part of the web is full of gotchas. Just the fact that you need to start with a normalize.css is reason enough. And as they fix compatibility in one thing, they seem to add ten more.

Maybe I'm wrong and clueless. But the web seems to be a bastion for adding overly complex solutions while failing at trivial things often enough (though not always). I agree it is probably still the cheapest way to get a cross-platform GUI, at least maintenance-wise. But it, or at least a big part of it, is an electricity-sucking pile of horrendous crap regardless. More frustratingly, I'm not sure that it has to be the case.


As someone who came from "native" gui development and has only really been doing web stuff seriously for a few years, it's pretty easy now, particularly if you don't need to support IE, and even IE11 isn't _too_ much of a pain. I've never used a normalize.css for example.


> From the get go it always seemed like the rules were arbitrary and involved lots of guessing to get things right. I know this isn't actually true but it felt true enough early on to poison my mind to it.

I guess you aren't into Deep Learning either ;)


Most programming/development environments/standards have arbitrary rules. Also most workplaces I've found.

It's arbitrary almost all the way down.


It is frustrating that here we are in 2019... er, 2018, and structuring a page is still kinda wonky.


Except for the fact that in 2018 it's actually not wonky at all.


Is it? That's disappointing. Who wins the world cup by the way? :-)


England


So you're saying other dev domains don't have intrinsic glitches and oddities? ;)


Forcing the DOM, JS and CSS was a mistake. The problem was that you couldn't have alternatives. wasm is going to disrupt front end by allowing you to use a strongly typed language to build your own framework, like the good ol' days.


Still gotta deal with the DOM though. They need a root pixel-level API made in conjunction with wasm, so people can restart from scratch.


Only once, for the containing canvas. Direct access to the canvas pixel data and WebGL offscreen rendering is all you need. With a few hacks, we're already 90% there. I expect to see an explosion of this by next year. There are already projects on GitHub that translate SDL and OpenGL to build web widgets.
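
A rough sketch of that pixel-level escape hatch with today's APIs (the 2D canvas shown here; WebGL would be the faster path):

  const ctx = document.querySelector('canvas').getContext('2d');
  const img = ctx.createImageData(100, 100);
  for (let i = 0; i < img.data.length; i += 4) {
    img.data[i] = 255;      // R
    img.data[i + 3] = 255;  // A -- fill with opaque red pixels
  }
  ctx.putImageData(img, 0, 0);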


And GoogleBot seems to follow Safari's rendering instead of Chrome: https://developers.google.com/speed/pagespeed/insights/?url=...


I wonder if that’s because they’re just running an older version of WebKit/blink.


"Chrome's" rendering in many cases is going to be whatever hardware accelerated stack (probably Skia + your platform's particular hardware API) is rasterizing the page. Headless rendering as found in a page scraper bot is more likely to be using software rasterization and might be using an entirely different API. Chrome has a few command line flags that can push everything through different rendering paths and you can get different results from it.


Blink (the HTML engine) is still going to push the same drawing commands to all the backends. Winding rule order stuff might be different if a backend is buggy, but things like this shouldn't be different.


I wouldn't be surprised at all if there's a drawing command that's having a negative number passed to it in a place where it doesn't expect negatives. I wouldn't be too surprised if different backends handled this "undefined behavior" differently.


Nah, this will all be handled within Blink. The drawing command will literally be told to draw polygons in the specific places in the coordinate space.


"shouldn't", but historically I've observed that different backends have different behavior for filtering (of bitmaps and layers) and for text rasterization. Wouldn't be surprised if geometry rasterization could be different too.


Maybe chrome/blink require too much ram so they used WebKit /s



I think one of the most optimistic things to notice about these two images is that Edge amazingly appears to be correct in both cases.


It's pretty clear that Microsoft really took a close look at the specs when developing Edge, which is really nice. It also has the benefit of being the newest of the major engines.


Actually, much of the time it comes down to what bugs people report. In a closely related area, IE and Edge both added spread distance rather than subtracting it on inner box-shadows until I reported it last year, then it was fixed in IE within a month: https://developer.microsoft.com/en-us/microsoft-edge/platfor....

In this particular case, they may have got the best of it because IE didn’t support outline-offset at all, and so when they implemented it they read the spec carefully.

Edge is a continuation of IE. Large swathes of the renderer have been rewritten, but when you do find bugs in Edge they’re commonly shared with IE.


(Correction: fixed in Edge within a month; IE doesn’t get such fixes. Also “IE and Edge both” should read “both IE and Edge” for increased clarity.)



Note that Florian is the editor of CSS UI, which defines the outline properties, where all this unspecified behaviour exists (mostly because browser vendors don't really care enough to make outline interoperable).


Is one of these correct?

Or is this combination of CSS properties ambiguously or un-defined by the spec in the first place?


The code in question can be mostly boiled down to:

  outline: inset 100px green;   /* an outline as wide as the whole 100px box */
  outline-offset: -125px;       /* asked to start 125px inside that box */
The issue is:

- spec is deliberately liberal about how outline should look (this is kind of a similar case to forms and scrollbars, the styling of which varies not just by browser, but by OS/toolkit)

- inset is a keyword rarely if ever applied to outlines (and I'm not sure why it's even valid or what its purpose would be)

- outline-offset is a useful property, with well defined behaviour in most cases: inset outlines are the exception where its behaviour is ill-defined

Edge's behaviour is probably technically "closest" to what I would expect, but I'm not sure if "correct" is the right term. It's an odd case, and what it should look like is debatable. IE's and Safari's could be "correct" too, and in fact IE's kind of looks truest to the intent of outlines as a thing.

Chrome is doing very bizarre things and I can't really fathom why. They look like rendering bugs.

Edited after a 2nd look, IE's and Safari's make a bit more sense after some consideration.


Fiddling with it:

Firefox's mistake is definitely that it isn't handling outline-offset values less than "-min(width, height)/2" in a sane way. (In the demo, width and height are both 100px; it starts to do funny things when outline-offset goes below -50px.) That said, I'm not entirely sure that there is a sane way.

IE is totally ignoring the outline-offset property; it isn't implemented in IE.


From the person that wrote the code: "The specs are pretty vague about how this case should be rendered, but I think Edge handles it correctly here"


>The specs are pretty vague about how this case should be rendered, but I think Edge handles it correctly here


Pretty. I could see various renders used as a logo for each respective browser.


It blows my mind that problems like this exist, and yet platforms like Unity and Xamarin are capable of write-once-build-everywhere apps on every platform. Why hasn't web development abstracted away these problems yet?


Because many webdevs think that our grey beards and knowledge of doing native applications are worthless, and who does native anyway.

I look forward to native still winning on mobile for the years to come, with WebAssembly eventually becoming Flash's revenge.


I would think undefined behavior would be helpful in browser fingerprinting.


Using actual rendering for fingerprinting is a serious perf hit. I'm not aware of any production setup using that.


Thanks for letting me know.


Except that it’s likely to change without warning.


It seems to me that this example is like undefined behavior in C.


At least Brave is consistent with Chrome, as they're both Chromium-based. Phew.


My Brave browser on mobile renders the same as the Safari one.


WebKit is the only renderer on iOS.


I trust edge more than any other browser.


What do CSS linters have to say about it?


“No”


Microsoft is the only one that gets it right.



