The root issue is that the browsers are being asked to draw a line that's 100px in from the edge of a 100px box -- it's an impossible task, so there's no perfect solution.
This CSS test suite http://test.csswg.org/suites/css-ui-3_dev/nightly-unstable/x... suggests that browsers should treat it as though it were the largest offset that made sense at the current outline width, giving you https://codepen.io/pjwdev/pen/vrovPR?editors=0100
Safari achieves this if the border is the same colour the whole way around, but doesn't get the individual sides in a sensible arrangement.
Edge doesn't match the test, but it does another reasonable thing -- it takes the offset as a given, and uses the biggest outline width that makes sense.
Chrome and Firefox...
> Negative values must cause the outline to shrink into the border box. Both the height and the width of outside of the shape drawn by the outline should not become smaller than twice the computed value of the outline-width property, to make sure that an outline can be rendered even with large negative values. User Agents should apply this constraint independently in each dimension. If the outline is drawn as multiple disconnected shapes, this constraint applies to each shape separately.
There's a lot of RFC 2119 SHOULD there, unfortunately.
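Concretely, the impossible task from the top of the thread boils down to something like the following (the outline width and colour here are placeholders, not values taken from the linked test):

```css
/* A 100px box asked to draw its outline 100px in from its own edge:
   the offset alone swallows the whole box, so no rendering can honour
   both the offset and the outline width at the same time. */
.box {
  width: 100px;
  height: 100px;
  outline: 10px solid green;
  outline-offset: -100px; /* negative values pull the outline inward */
}
```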
He collected the wild guesses from various picture framing shops and posted them on twitter, getting a lot of attention on Hacker News as well.
edit: present tense.
 Not something I'm proud of and I know front end development is serious business so don't @ me.
A confluence of vague definitions, poor implementations, bad documentation ... and this is the web we have. It's the tip of the iceberg, and it's the painful daily life of anyone doing HTML5 work. It's 'not fun' for anyone who's been exposed to regular programming, because the problems you spend your day solving mostly shouldn't be problems to begin with.
My first instinct is that a new platform should be developed, based as far as possible on tried and tested languages and technologies, but one that also allows for fast-paced browser dev and a path for future evolution.
For now I’m just embracing it and learning. It’s a lot of fun to see how the web works nowadays - I’m already beginning to understand some of the types of bugs that sites exhibit, and why designs are as they are. One of the outcomes has been to develop serious respect for the developers of MS and Google Web apps - eg OneNote on the web - as what they’ve achieved is pretty miraculous when you consider the stack. And those poor web browser devs!
 the idea of allowing displays of arbitrary dimensions is a very strange one if you consider traditional graphic design; this may be an instance of us geeks making decisions that don’t make sense to more experienced professionals from other backgrounds. How much did graphic designers, UX experts, psychologists etc. impact the evolution of the web standards? I don’t know.
They didn't. It's just a mess.
We want to believe there is genius in the commons, but really I don't think there is a lot of coherence. It's just some small group thought it would be good to do this or that, and it made sense at the time.
IME 70% of the problems people run into are self-inflicted, due to overengineered solutions. I spend a lot of time lurking front-end IRC channels, and most of the problems that get posted are stuff I have never had trouble with, involving libraries I'd never even consider using, that I don't have the first clue how to fix without going back to square one and re-engineering the solution from the ground up. The advice given by people who do attempt to help with these problems is often awful and creates more confusion.
For the most part, I don't think people realise that libraries add to the complexity of their software, rather than abstracting it away. Every front-end dev desperately needs to go and read the Law of Leaky Abstractions, because they're getting smoked by it on a daily basis and they don't realise.
The other 30% of problems come down to what GP mentioned: ambiguous specs, browser inconsistencies etc. Stuff like the relationship between overflow-x and overflow-y, little flexbox niggles like different behaviour depending on whether flex-direction is set to row or column, etc. This stuff can often be worked around with simple but unintuitive fixes (usually some fuckery with negative margins and such); the two difficulties lie in discovering the solution to begin with (there's boatloads of bad and outdated advice everywhere), and in running into these solutions when they haven't been properly documented (usually the case if some junior programmer has just ripped the hack off SO without understanding the theory behind it).
The secret to being good at front-end dev is to develop strategies to handle these problems, and to pick your resources very carefully (much as you would in any other ecosystem beset with noobs, like PHP back in the day).
I believe (based on my experience) that it's possible to build very successful software with very horrible engineering. Good and successful aren't necessarily synonymous in the world of big VC money. A great example is this: I need two hands to count the critical failure bugs I've run into with Uber's products in the last year. By critical failures I mean things that have flat out stopped me from being able to use it. I strongly suspect (a subset of) their engineering sucks (sorry to anyone here that works for them!). But hey, I don't own a squillion dollar company so what do I know? ;)
The core feature of React is that your UI is a pure function of your application state. In (e.g.) the Win32 UI, you would receive events, and then need to manually transition your current UI state to the new UI state (hide this button, disable this input, etc).
A similar idea was used in the Oberon System 3 Gadgets toolkit.
Also, java.util.Observable, based on the same Smalltalk ideas, was already part of Java 1.0.
This is markedly different from React, where the framework diffs the desired UI state (which is a pure function of your application state) against the actual UI state. The minimal number of UI operations necessary is then applied to make the latter match the former.
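A toy sketch of that diffing idea (all names here are hypothetical — real React also handles keys, components, and event wiring):

```javascript
// Declarative: the desired UI is a pure function of application state.
function render(state) {
  return state.items.map(item => ({ tag: "li", text: item }));
}

// The framework diffs desired vs. actual UI state and derives the
// minimal set of operations to make the latter match the former.
function diff(actual, desired) {
  const ops = [];
  const len = Math.max(actual.length, desired.length);
  for (let i = 0; i < len; i++) {
    if (i >= actual.length) ops.push({ op: "insert", node: desired[i] });
    else if (i >= desired.length) ops.push({ op: "remove", index: i });
    else if (actual[i].text !== desired[i].text)
      ops.push({ op: "update", index: i, node: desired[i] });
  }
  return ops;
}
```

Contrast with the Win32 style, where the programmer would compute those insert/remove/update steps by hand on every event.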
ImGui above works the same way, which makes it very popular for devtools in game development.
The source code of quite a few of them is available around the Net.
Building technical debt for the future I tend to think, which I guess is good for keeping future devs employed.
One of the goals of all the web related committees for more than a decade has been to ensure that there is no undefined behaviour. Undefined behaviour is exactly the reason we used to have to have different versions of a webpage for every browser.
The only intentionally remaining ambiguities are in areas where there is significant existing content that depends on mutually incompatible behaviour, eg: content that does one thing in browser A and another in browser B, but is using the same API in each path yet expecting different behaviour. I can’t recall any specifics off the top of my head any more, but a classic has always been for(in) enumeration along the prototype chain while properties are being added and removed. @dbaron or @brendaneich (is that his account? I can’t recall) may recall some of the others. Prefixes (which everyone on HN hates) solved this problem - they make it possible to keep an old non-spec API around without permanently screwing up the final specification (everyone points to those old -webkit-gradient properties, but forgets that that API was much clunkier to use than the final gradient spec. If the prefix hadn’t existed, then the webkit API would be the permanent one that everyone would have to use).
The other big one where specified behaviour is allowed to vary is WebGL. I was kind of responsible for the hated “only support the minimum subset” rules of version 1, but I believe modern specifications allow an implementation to vary according to hardware. The only other place in WebGL where I made people (GPU engineers, mostly) unhappy was not allowing undefined behaviour when accessing beyond the end of an array in a shader - there were voices that literally wanted this to be undefined for perf reasons, even when shown that you could read data from other execution environments (this was mind blowing). The compromise in the end was allowing an implementation to return either zero or some other constant defined value (I think this would fall under the C/C++ “unspecified” behaviour banner).
Anyway, in general undefined behaviour is bad on the web, and all web spec authors try to ensure that every edgecase is covered explicitly so that we never get stuck back in the terrible 2000s again.
Without actually digging into it (I'm on vacation damnit), I'm pretty sure all renderings are valid per CSS 2.1 (though I don't think they are per CSS UI 3 or 4? but maybe that's still a SHOULD-level requirement, given people wanting to publish as REC).
The CSS WG has been pretty poor at actually tightening up specifications for old features (on the face of it due to lack of editorial time, which really points at a separate problem).
(Also, given the for(in) example, as I'm sure you know but for the benefit of others, note that iteration order of each object on the prototype chain is still undefined, and not interoperable.)
Some other good examples of things that have been standardised over the past decade: HTML parsing(!), what specific DOM exceptions are thrown all over the place, when the various document load events fire.
But also: the behavior when you deleted an own property that shadowed a property on the proto chain, and what happens when you add such a property back.
Also, the historic spec mandated that integer properties on regular objects were iterated in insertion order as well, but V8 chose not to implement that, as doing so broke their "indexed properties are just an array" semantics. Because of the huge perf impact, the other browsers had to go down that same path.
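The ascending-integer order later got baked into the language proper (ES2015's own-property-key ordering): integer-like keys come first in numeric order, then the remaining string keys in insertion order. A quick illustration:

```javascript
const obj = {};
obj.b = "string key, first by insertion";
obj[2] = "integer-like key";
obj[1] = "integer-like key, added later but sorts first";
obj.a = "string key, added last";

// Integer-like keys sort ascending and come before string keys,
// which keep insertion order.
const keys = Object.keys(obj); // → ["1", "2", "b", "a"]
```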
I was thinking that "undefined behavior" in a spec is a serious misnomer. It's paradoxical if anything is defined as undefined, and it lets you confuse it with actually undefined behavior.
If something is in the spec as undefined behavior, it's my understanding it could legitimately lead to, say, global thermonuclear war. No guarantees.
But if something is not in the spec at all, say the effect of having your "const" keywords bright blue, then the spec is not sanctioning the end of the world happening as a consequence. The entire world provides constraints of reasonableness and other relationships that must hold.
This seems like an important difference! The latter is what I would call literally undefined; the standard version of undefined is (a) inherently defined as undefined, and (b) falsifies itself, because nobody would actually use a compiler that could cause the end of the world, ever.
The discussion of undefined behavior, and the concomitant blaming of those who inadvertently invoke it, have gradually come to be associated in my mind with the blaming of people for self-driving car crashes. They may not be related exactly, but they're homomorphic. It seems like a popular and gradually spreading pathology in human thinking. Maybe it could be called "computism", after "corporatism".
I guess maybe the issue is that the world is mostly composed of "unknown unknowns" and it is incredibly attractive to a lot of programmers to declare them Somebody Else's Problem. But I'm tentatively suggesting you can never just disclaim those "unknown unknowns" and therefore you should never fall into the trap of trying to define the undefined. It's a vast ocean, not a few drops of water you can build a fence around. Unfortunately this sounds, even to me, like vague philosophical meanderings, but I strongly feel that there is something like an intellectual prion disease out there, and that impression has been building ever since I was browsing comp.lang.c.* in the 90s.
Rust aims to solve these problems with its ownership and borrowing rules, delivered through what it calls zero-cost abstractions.
The worst part of these frontend issues is that, ultimately, the UX can become inconsistent, if not a mess. Given that nearly everyone interfaces via the visual UI, you'd think we'd have this sorted out by now.
That said, the browser's impact on UX is essential. A lot more people use browsers than compilers. You'd think the UX would matter by now.
For example, a race condition is UB. The compiler won't invent locks that were not specified by the programmer in code using raw threads. As a result, the CPUs will trash the shared memory.
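A minimal C++ sketch of what's meant (counts and names are illustrative): unsynchronized concurrent writes are a data race, i.e. undefined behaviour, and the compiler adds no locks for you; with std::atomic the programmer spells the synchronization out.

```cpp
#include <atomic>
#include <thread>

// int plain = 0;             // incrementing this from two threads
// void racy() { ++plain; }   // concurrently is a data race: UB

std::atomic<int> counter{0};  // the programmer opts into atomicity

void bump() {
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);
}
```

Run bump() on two threads and the atomic version reliably lands on 200000; the commented-out plain-int version is allowed to produce anything at all.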
You have to go to considerable effort to come up with a case like this.
And this exposes the problem with the current state of the art: yes, you can probably get flexbox to do what you want [@] if you know how. But if you don't know how, trying to figure it out is still very hard. You should be able to write:
[@] What if I want to put something in the lower left or upper right hand corner?
This is a gross abuse of html, but if you really want that syntax, you can have it.
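For instance (element names hypothetical — unknown tags are valid HTML and can be styled like any other element):

```html
<style>
  /* the outer made-up "tag" does the centering, flexbox-style */
  foo { display: flex; justify-content: center; }
  bar { display: block; }
</style>
<foo><bar>content</bar></foo>
```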
I can't say I understand all the complaining here. You want to make $200K a year sitting at a desk, but learning flexbox or googling when you need it is too much effort?
I'm tired of programmers who've become experts in building structures by ramming screws into wood with a hammer, claiming that all is well with the world when you have powerful hammers and screws.
I think we have a whole generation of front end programmers who haven't seen a screw driver.
And my complaint is that I believe that computing should be accessible to everyone, and that simple things should not be made difficult just so that an elite guild can command above-market wages to fix problems that they themselves created. I don't believe in broken-windows economics.
But at least you are entertaining.
Who first tested and implemented CSS? Browser programmers.
These days it's under the control of a committee. I think I would disagree with the notion that CSS devs are the ones to blame in this case. Sure, there's lots of blame to share for the problems. But to imply that CSS developers intentionally forced it to be complicated for purposes of job security is laughable.
After all, CSS is not C++.
Just like <center> works, but vertically. You don't need <foo><bar>content</bar></foo> for horizontal centering. I consider it a layering violation to have to do that nesting.
You're always centering children with respect to something, and you specify the fact that you want children centered on the thing you're centering them in, rather than the elements themselves. This makes more sense, in my opinion, and as said in the sibling comment, it's two simple rules.
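Those two rules, for reference (class name assumed):

```css
/* You specify centering on the container, not on the children. */
.parent {
  display: flex;
  justify-content: center; /* center along the main axis */
  align-items: center;     /* center along the cross axis */
}
```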
I do a tonne of flexbox and grid in my day-to-day, and these complaints are quite simply out of date. Almost any layout in CSS these days can be done easily and intuitively.
If you are learning CSS in 2018, then there is little chance that you didn't encounter flexbox and its capabilities. Nowadays there are many good, cross-browser solutions for common problems, but I have to agree: intuition is not a feature of CSS.
If you want to lower left align you set align-items and justify-content to flex-end and flex-start. For upper right, flip those. (if you’ve changed flex-direction those will need to shuffle around.)
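Spelled out (class names assumed; this presumes the default flex-direction: row):

```css
/* lower left: start of the main axis, end of the cross axis */
.lower-left {
  display: flex;
  justify-content: flex-start; /* horizontal: left */
  align-items: flex-end;       /* vertical: bottom */
}

/* upper right: flip both */
.upper-right {
  display: flex;
  justify-content: flex-end;   /* horizontal: right */
  align-items: flex-start;     /* vertical: top */
}
```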
I seriously don't understand why having a different tag for every possible alignment would be somehow easier or more discoverable than just using CSS rules. Novices are going to have to be looking stuff up either way -- I get the concern about discoverability, but I strongly doubt anyone is just typing tags to see what works.
I would wager, if your goal is to make as wide-spread, cross-platform GUI as possible, a Web GUI is the easiest way to do that, for both the developer and the users.
Maybe I'm wrong and clueless. But the web seems to be a bastion of overly complex solutions that fail at trivial things often enough, though not always. I agree it is probably still the cheapest way to get a cross-platform GUI, at least maintenance-wise. But it is a pile of electricity-sucking horrendous crap regardless, or at least a big part of it is. More frustratingly, I'm not sure that it has to be the case.
I guess you aren't into Deep Learning either ;)
It's arbitrary almost all the way down.
In this particular case, they may have got the best of it because IE didn’t support outline-offset at all, and so when they implemented it they read the spec carefully.
Edge is a continuation of IE. Large swathes of the renderer have been rewritten, but when you do find bugs in Edge they’re commonly shared with IE.
Or is this combination of CSS properties ambiguously or un-defined by the spec in the first place?
outline: inset 100px green;
- spec is deliberately liberal about how outline should look (this is kind of a similar case to forms and scrollbars, the styling of which varies not just by browser, but by OS/toolkit)
- inset is a keyword rarely if ever applied to outlines (and I'm not sure why it's even valid, or what its purpose would be)
- outline-offset is a useful property, with well defined behaviour in most cases: inset outlines are the exception where its behaviour is ill-defined
Edge's behaviour is probably technically "closest" to what I would expect, but I'm not sure "correct" is the right term. It's an odd case, and what it should look like is debatable. IE's and Safari's could be called "correct" too; in fact, IE's kind of looks truest to the intent of outlines as a thing.
Chrome is doing very bizarre things and I can't really fathom why. They look like rendering bugs.
Edited after a 2nd look, IE's and Safari's make a bit more sense after some consideration.
Firefox's mistake is definitely that it isn't handling outline-offset less than "-min(width, height)/2" in a sane way. (In the demo, width and height are both 100px; it starts to do funny things when outline-offset goes below -50px.) That said, I'm not entirely sure that there is a sane way.
IE is totally ignoring the outline-offset property; it isn't implemented in IE.
I look forward to native still winning on mobile for the years to come, with WebAssembly eventually becoming Flash's revenge.