Languages Which Almost Became CSS (eager.io)
586 points by zackbloom on June 28, 2016 | 133 comments

Layout should have been constraint-oriented, not procedural. You should be able to express "Bottom of this box is adjacent to top of that box", and such. All those constraints go into a constraint engine, and a layout is generated. This is something a WYSIWYG editor can generate.

To get a sense of how this could work, try sketch mode in Autodesk Inventor (there's a free 30-day demo). You can specify that a point must be coincident with an edge, that an edge must be coincident with an edge, that something must have specified dimensions, that some dimension must have a numerical relationship with another dimension, etc. Inventor goes further, supporting constraints on diagonal lines, circles, arcs, ellipses, etc. Whether layout should support curves is an interesting question, but the technology exists to make that work.

The people who designed HTML5 and CSS thought procedurally, not geometrically. It shows. Designers think geometrically, and a geometric constraint system would make sense to designers.
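
To make the idea concrete, here is a toy sketch of a constraint engine (all names and the solver are invented for illustration; real engines like Cassowary solve simultaneous linear systems, while this only propagates values to a fixpoint). The point is that "bottom of this box is adjacent to top of that box" is stated declaratively, in any order, and the engine works out the layout:

```python
# Each constraint names an edge variable and an equation over other edges.
# Constraints are declarative: the engine keeps retrying until all resolve.
constraints = [
    ("header.top", lambda v: 0),
    ("header.bottom", lambda v: v["header.top"] + 60),
    # "top of body is adjacent to bottom of header"
    ("body.top", lambda v: v["header.bottom"]),
    ("body.bottom", lambda v: v["body.top"] + 400),
    ("footer.top", lambda v: v["body.bottom"]),
]

def solve(constraints):
    """Propagate until every edge variable has a value (order-independent)."""
    values, pending = {}, list(constraints)
    while pending:
        progressed = False
        for item in list(pending):
            var, fn = item
            try:
                values[var] = fn(values)  # raises KeyError if a dependency is unsolved
                pending.remove(item)
                progressed = True
            except KeyError:
                pass  # depends on a variable not yet solved; retry next round
        if not progressed:
            raise ValueError("cyclic or unsatisfiable constraints")
    return values

layout = solve(constraints)
assert layout["footer.top"] == 460
```

A WYSIWYG editor could emit exactly this kind of constraint list from gestures like "snap this edge to that edge".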

Constraint layout systems are O(n^2) algorithms, while a procedural layout algorithm can be O(1) or O(n). I think it wasn't adopted for reasons of practicality. It would be better if both were available, but such is life.


It's used for screen layout in Mac OS X Lion and Grid Style Sheets.[1] There's also a subdomain that can be solved faster, and simple layouts should fall in that subdomain.

The idea is to get away from programmer-oriented layout input.

Right, but neither Lion nor Grid Style Sheets was available in the early '90s. It may have been possible to implement, sure, but we're talking early enough in the web's history that the performance of the styling engine was a legitimate concern.

Sure, but now we're past all that and people still come up with bullshit reasons that CSS SHOULDN'T be replaced when there is every reason to believe that we can do a better job now that we've suffered through 20 years of learning.

"Constraint layout systems are O(n^2) algorithms, while a procedural layout algo can be O(1) or O(n)."

Those are upper bounds, not what happens in reality.

Pragmatically, there's no reason constraint-based algorithms can't, in practice, approach the performance of procedural layouts.

Second, layout is becoming less and less of an issue performance-wise, to the point that in a few years the argument may be moot.

Third, I'd argue that 'performance' is only one factor.

But good point.

I wish iOS Auto Layout weren't n^2. These algorithms have very small time budgets, and you can start seeing stutters when you use them in UICollectionViews, UITableViews, and other scrolling/gesture contexts. Performance is still a very real concern there.

Devices you support will also last 5+ years, and I don't think we will see the same power/performance gains in mobile in the next 5 years as we have in the last 5. I bought an iPad Mini 4 recently and I see general stuttering in the OS programs alone, and it's the most recent iPad Mini you can buy!

You can have something pretty close to constraint based layouts in procedural layouts in something like "snap view 1 bottom to view 2 top", etc.

Some systems can be linear and solved in real-time.


> The people who designed HTML5 and CSS thought procedurally, not geometrically. It shows.

The problem the article points out is that they wanted styling to happen before the entire document was downloaded. Obviously you can't say, "hey, this box should be pinned to the bottom of the footer" if the footer's HTML hasn't arrived in the stream yet.

I think enough people fume when they see the Flash of Unstyled Content. I don't know if constraint-oriented styling would work. I (and I think you) would LOVE it to be that way, since CSS gets ridiculously weird, quick.

Surely you can say that: because constraints are declarative, resolving them doesn't have to happen in a particular order. The solver could stash away a constraint involving an element that doesn't exist yet, until it does.

If this sometimes moves elements that are already displayed, that'd be a problem. Isn't it a problem we're seeing anyway?
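
The "stash until it exists" idea can be sketched in a few lines (class name, constraint shape, and all numbers here are invented for illustration, not a real browser API):

```python
class StreamingSolver:
    """Toy model: constraints mentioning unparsed elements are queued,
    then replayed when the element finally arrives in the stream."""

    def __init__(self):
        self.known = {}      # element -> height, as parsed so far
        self.positions = {}  # element -> resolved top coordinate
        self.pending = []    # constraints waiting on missing elements

    def add_constraint(self, above, below):
        """'Top of `below` is adjacent to bottom of `above`'."""
        if above in self.positions and above in self.known:
            self.positions[below] = self.positions[above] + self.known[above]
        else:
            self.pending.append((above, below))  # stash it for later

    def element_parsed(self, name, height, top=None):
        self.known[name] = height
        if top is not None:
            self.positions[name] = top
        # Replay any constraints that were waiting on this element.
        still_waiting = []
        for above, below in self.pending:
            if above in self.positions and above in self.known:
                self.positions[below] = self.positions[above] + self.known[above]
            else:
                still_waiting.append((above, below))
        self.pending = still_waiting

s = StreamingSolver()
s.add_constraint("footer", "ad_box")           # footer not parsed yet: queued
s.element_parsed("footer", height=40, top=600)  # arrives later; constraint fires
assert s.positions["ad_box"] == 640
```

Whether replaying deferred constraints moves already-painted elements is exactly the reflow problem the parent comment raises.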

It sounds like people are forgetting that most users were on dial up during the 90s.

You are free to implement anything you like in JavaScript. There are several implementations of this type of constraint layout, but none have ever really taken off.

There is always Grid Style Sheets[1] if you want to try it out. It is based on the same algorithm as the iOS constraint engine[2] AFAIK. Unfortunately it has to compute the constraints in JavaScript, as it's not native.

[1]: https://gridstylesheets.org/

[2]: https://en.wikipedia.org/wiki/Cassowary_(software)

WebAssembly should be able to help with that, no?

I think so, yes. Personally I'm really looking forward to seeing what people are going to implement with WebAssembly. Another possibility would be virtual DOM diffing.
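
For reference, the core of virtual DOM diffing fits in a few lines. This toy sketch (node shape and patch format invented for illustration) compares two lightweight trees and emits the patches needed to turn one into the other:

```python
def diff(old, new, path="root"):
    """Return a list of (path, op, payload) patches turning `old` into `new`."""
    patches = []
    if old is None:
        return [(path, "create", new)]
    if new is None:
        return [(path, "remove", None)]
    if old["tag"] != new["tag"]:
        return [(path, "replace", new)]
    if old.get("text") != new.get("text"):
        patches.append((path, "set_text", new.get("text")))
    old_kids, new_kids = old.get("children", []), new.get("children", [])
    # Naive positional child matching; real libraries use keys for reordering.
    for i in range(max(len(old_kids), len(new_kids))):
        o = old_kids[i] if i < len(old_kids) else None
        n = new_kids[i] if i < len(new_kids) else None
        patches.extend(diff(o, n, f"{path}/{i}"))
    return patches

old = {"tag": "div", "children": [{"tag": "p", "text": "hi"}]}
new = {"tag": "div", "children": [{"tag": "p", "text": "hello"},
                                  {"tag": "p", "text": "world"}]}
patches = diff(old, new)
assert ("root/0", "set_text", "hello") in patches
```

Only the emitted patches touch the real DOM, which is where the performance win comes from.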

What you mean is called "parametric modeling" with constraints, it's very common with 3D CAD and not exclusive to a specific software package: https://en.wikipedia.org/wiki/Solid_modeling#Parametric_mode...

It's implemented in the "geometric kernel" (like an operating system's kernel, it is the heart of 3D CAD software), and only a few companies develop such CAD kernels, which are licensed by many CAD software companies: https://en.wikipedia.org/wiki/Geometric_modeling_kernel

Of course; everybody has parametric modeling now. Inventor is worth looking at because they have a nice GUI on top of it. Designers would be a lot happier with that kind of GUI than trying to write CSS to get the layout they had in mind.

CSS layout is slow enough as it is. Adding all the features you propose here would make the situation far worse.

The fact that CSS layout mostly follows a top-down (and parallelizable!) width assignment phase followed by a bottom-up height assignment phase is not something to throw away lightly.
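
The two phases can be sketched like this (a deliberately simplified model of block layout for illustration, not real CSS):

```python
class Node:
    def __init__(self, children=(), intrinsic_height=0):
        self.children = list(children)
        self.intrinsic_height = intrinsic_height
        self.width = self.height = None

def assign_widths(node, available):
    """Top-down: each child's available width derives from its parent."""
    node.width = available
    for child in node.children:
        assign_widths(child, available)

def assign_heights(node):
    """Bottom-up: a parent's height depends on its children's heights."""
    for child in node.children:
        assign_heights(child)
    node.height = node.intrinsic_height + sum(c.height for c in node.children)

leaf_a, leaf_b = Node(intrinsic_height=20), Node(intrinsic_height=30)
root = Node(children=[leaf_a, leaf_b])
assign_widths(root, 800)
assign_heights(root)
assert (root.width, root.height) == (800, 50)
```

Each phase is a single tree traversal, which is what makes the scheme linear and easy to parallelize across subtrees; a general constraint system gives up that structure.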

Do you have a reference for this?

DesignGridLayout is based on canonical design grids.


The heuristics are hand coded.

I created DGL because I wanted visually correct, easy-to-create UIs. Here are some usage examples. (I probably won't ever design a "fluent API" (method chaining) again. A DSL would be better.)


Next time I do interesting UI work, I'll port DGL to that platform. I won't wrestle with constraints solvers again for anything but the most trivial efforts.

Mea culpa: I use Bootstrap for web stuff. Good enough for trivial UIs. Last time I checked, a few years back, I couldn't figure out how to use CSS to align text baselines between columns, or how to space those baselines vertically equally.

I never tried it, but I've heard that Solid Edge 2D Sketch mode is free, and not a demo (i.e. you can use it as much as you want).


> Layout should have been constraint-oriented, not procedural.

The problem is that there isn't a universal set of constraints that satisfies everybody's requirements. For procedural specs, it only has to be Turing complete to make everybody happy.

Well, recent versions of CSS and HTML are limited by backwards compatibility. 20 years ago it was taken for granted that styling meant applying decoration, fonts, and colors to text, not building an application or a design tool.

The whole world would be better if this way of thinking were applied everywhere. Not only client-side rendering. Even programming languages themselves, even OSes (hi HURD~). It's slowly coming, but society has bad eyesight and ADHD.

> Contrary to popular perception, Mosaic was not the first graphical browser. It was predated by ViolaWWW

Neither was ViolaWWW the first graphical browser.

In fact, the very first browser by Sir Tim Berners-Lee was already a graphical browser (even with a WYSIWYG edit mode later known from FrontPage/Dreamweaver) - made possible by the advanced NeXTSTEP operating system and its window-builder IDE (nowadays known as macOS and Xcode respectively): https://en.wikipedia.org/wiki/WorldWideWeb and https://en.wikipedia.org/wiki/NeXTSTEP

When people talk about "graphical Web browsers" in a historical context, they're not just talking about browsers that run in a GUI. The term specifically refers to browsers that can display graphics and text in the same window. The earliest versions of WorldWideWeb (Tim Berners-Lee's original browser) couldn't do this. It used NSText (NeXTStep's native text-display object) for displaying pages, and at the time NSText didn't support inline images, so WorldWideWeb relegated images to separate windows. It did eventually become a graphical browser in this regard, once NSText had the necessary inline image support, but ViolaWWW beat it to the punch.

You don't hear much about graphical browsers anymore, not because they died out but because they became such an overwhelming majority that it no longer made much sense to make any distinction. But a few non-graphical browsers are still in development; Lynx is probably the most famous of them.

Is the original source code that Tim Berners-Lee wrote available anywhere? I would think it would be an interesting read.

Yep! http://browsers.evolt.org/browsers/archive/worldwideweb/NeXT...

Interestingly, some of the code "still resides on Tim Berners-Lee's NeXT Computer in the CERN museum and has not been recovered due to the computer's status as a historical artifact."

Thanks! That IS great to check out. It took him many rounds of proposals at CERN to finally be able to realize his ideas. The details of it all are an epic read.

At Aalto University there is a popular legend about Erwise.


I don't know what to believe.

DSSSL looks amazing. Truly a shame it didn't catch on. Maybe we'd have a client-side Lisp instead of Javascript too.

Can we have a do-over on the web? I've seen (and even nominally participated in) something of a resurgence of Gopher as a shadow web for nostalgia-addled geeks. Gopher isn't an awesome protocol, but I love anything that recaptures the spirit and decentralized nature of the old web.

At my curmudgeon-iest I like to imagine we carve off our own web where S-expressions reign, there is no Google, and a million timecubes bloom.

That may yet happen.

Although CSS sucks more than a very sucky thing, and the text+markup idea barely makes sense in 2016, I can see how we got here - and how CSS needed to hit the classic "worse is better" sweet spot that allowed designers to play, and not just people who code.

But that was then. If someone invents a meta-protocol now and implements a meta-browser/meta-server for it, I'd expect cult/niche status at least.

And such a web would be far slower. Restyling is usually the slowest part of the rendering pipeline in an optimized implementation. Throwing arbitrary tree transformations made with a Scheme interpreter into the mix would be totally irresponsible without looking at what the consequences would be for performance.

Isn't that the way Qt + QML works?

Last time I checked, this stack wasn't particularly slow. I would also express the same style in many fewer lines of code than HTML + CSS.

When I was 6 years old, in 1996, my parents gave me a book about "Cyberspace" to go along with our dial-up internet connection (on our 100 MHz Pentium Windows 95 computer!). This book showed me Gopher, and MUDs, and changed my life! I wish I could remember the name of it, it was a bright yellow cover with a superhero on it, I think.

I miss Gopher, in a rose-tinted-glasses, nostalgia-driven way.

> I wish I could remember the name of it, it was a bright yellow cover with a superhero on it, I think.

Could it be Computer Lib/Dream Machines by Ted Nelson:

Cover of the 1974 edition: http://blogs.brandeis.edu/sarahw/files/2011/03/cover1.jpg

Front cover of the 1987 edition (in yellow): https://www.amazon.co.uk/Computer-Lib-Dream-Machines-Tempus/... (sorry, I have no image of the back cover, but I have both editions on my bookshelf; the 1987 edition shows a superhero on its back cover, the cover of Dream Machines).

EDIT: If you want to order a reprint of the 1st edition (very difficult to get): Ted Nelson sells them again: http://hyperland.com/LibPage

I'm pretty sure I had the same book! The "superhero" was called "CyberSarge", yes?


YES!!!! That's the one! Holy crap, well done, I never thought I'd see it again! Oh man what a nostalgia trip:

"GEEK: A geek is someone who is really excited by computers and proud of it" -- this is the reason I was proud to call myself a geek from 6yo onwards :)

Really, all you need to do is make your own browser and then popularize it.

1 step ahead of you . . . but this staircase seems really long . . .

The engineering management that prevented Scheme in the browser should never be forgiven: https://brendaneich.com/2008/04/popularity/

In some alternate universe the management didn't stop Eich:

Scheme became the lingua franca of the web. Smug Array Weenies criticize Scheme for being “insufficiently APL-like”, while pure functional programmers criticize APL for being “insufficiently Haskell-like”. Legions of Scheme programmers write mutually unintelligible code, though still manage to criticize Haskell for being “insufficiently Lisp-like”. Prolog fails to catch on. Brendan Eich announces that continuations are considered harmful. C programmers maintain that continuations are the most reasonable way to handle errors in C. This happens in spite of C not having continuations. Douglas Crockford invents s-expressions as a data interchange format. Douglas Crockford writes “Scheme: The Good Parts”. Guy L. Steele sues Douglas Crockford for copying the Scheme standard. Hipsters begin using Node.Scheme to write their backends and are criticized for a nonsensical package management system and “using a web language on the server”. This happens in spite of Scheme having been a server language before it was used on the web. Microsoft invents a compiler with type inference and optional type annotations for Scheme. Facebook invents a compiler with type inference and optional type annotations for Scheme. GNU Guix is promptly ignored for “using a web scripting language for package management”. Facebook invents React, which uses an extensive set of macros to write HTML components in Scheme. This is criticized in spite of having been done a few hundred times before. Guy L. Steele angrily removes string-pad-left from the Scheme standard. This breaks many packages. Non-Scheme programmers laugh and wonder if Scheme programmers have forgotten how to program, suggesting that “proper design minimizes dependencies on the language standard”.

In all seriousness, Scheme would make a really good lingua franca.

Wonderful write-up, you deserve a HN Gold. Umm, sorry wrong site, but still, I rarely stumble upon such a witty comment in my space.

That, or it ends up with several different Scheme flavors. And you get "Works best with MSScheme and IE" in 2016.

Scheme in the browser would have been fine. Scheme in the hot path of every single restyle, not so much.

I don't follow you. If, say, browsers used Scheme rather than JS, they wouldn't be any slower. If a CSS-equivalent used s-expressions but was otherwise CSS, that's just syntax.

Why not? Modern JIT compilers generate very fast code.

"Very fast" in this case is not enough for restyling. It is incredibly performance sensitive. A 2x slowdown over the optimized C++ (for example) would be unacceptable.

When I die, my first question to god will be 'sir, why the hate on lisp?'. I expect the answer to involve testing of some kind.

My take is that metaprogramming does not scale to teams.

Or maybe it's just the historical artifact everybody claims it to be, and if the PDP-10 had supported a Lisp environment, things would be different. But if I had to bet, I'd bet on metaprogramming not scaling.

> whether we like it or not, each project effectively creates its own dialect of the language


Each project has its own coding guide and idiosyncrasies. That is not restricted to Lisp. Macros allow you to express some rules in a domain-specific language, not in a separate document or tacit knowledge. This is more manageable than ad-hoc approaches. Abusing macros is definitely bad and can lead to ghetto-languages.

Lisp is interactive, call "describe" on whatever you don't understand. Under Emacs, if you encounter a form you don't understand, point to it, "C-c C-d d" and you get the documentation; "M-." and you go to the definition; "C-c M-m" and you call macroexpand.

This is pretty much exactly the argument made by The Lisp Curse:


That basically every program written in a Lisp becomes its own programming language. It has mechanisms which are vaguely-familiar-but-different from all the other programs. The mechanisms are 80% of a complete solution, but a different 80% each time, depending on the needs of the program.

Anyone can write js.

Some humans can write Lisp. But Lisp production code can only be maintained and updated by AIs who are more intelligent than we are.

Ruby and Clojure are doing decently (though obviously not as popular as say C and Java) and have pretty advanced metaprogramming.

For that matter C++ templates are metaprogramming from what I understand, and C macros are a really really primitive form of metaprogramming. Java also has reflection which is metaprogramming-ish.

I don't think the problem was metaprogramming.

When I see people talking about Clojure, I don't see them talking about the power they get from metaprogramming. It's almost the opposite: people talk about the safety of FP, and power from category theory.

For C++ templates, you get that impression multiplied by a few googols.

Now, the Ruby and Python communities do claim to gain power from metaprogramming. I can say that Python metaprogramming is something completely different from Lisp's, but Ruby's is more similar. Yet both communities put a hard limit on the amount of "magic" you should use in your programs (where Ruby's limit is way higher than Python's), so it cannot become the panacea that it is in Lisp.

Metaprogramming is one of the main selling points of Lisp, the only other being how easy it is to implement. Other languages have other features to show.

Who talks about category theory w/r/t Clojure? Since it has almost no type system to speak of, I'd be very curious to know what in the world they're on about.

Macros and the repl were, to me, the best part of clojure.

I see people talking about monads and applicatives all the time. I don't know Clojure, but I got the impression it has a powerful implicit type system.

Not the same level of metaprogramming. Ruby and Python try, but it's less idiomatic or batshit crazy. C macros are glorified sed scripts; you're at the byte-buffer level; it's cute but not good either. To me, Lisps sit at a nice middle ground where metalevels are as close as possible while still being usable. That's my take based on what people said in the '60s; after all, sexps weren't supposed to be used.

Metaprogramming scales very well. Every high-level program plus its compiler is a metaprogram for assembly code.

That has a much higher barrier of entry, though. If I work on a project written in Blub, I won't be able to do any metaprogramming unless I can get everyone else to adopt my Blub++ compiler. If the software is written in Lisp, all I have to do is get it through a code review.

In effect, it's not really scaling; it's being limited to the small portion of developers who work on compilers.

It's possible that most people really like to have distinct levels encoded as syntax.

I love Lisp, and I love s-expressions, and I honestly can't viscerally understand why others don't. From listening to my teammates' comments, I think that it has something to do with how they register visual patterns in code. Lisp really does seem to be lots of irritating silly parentheses to them: where I just see sculpted code-shapes, they see a mess of undifferentiated parens and symbols.

I wonder if there really are mental types well-suited to Lisp and types ill-suited to it.

It also comes from which system they used. Writing Lisp outside of supporting editors is indeed a real pain. People prefer to think in terms of blocks of lines rather than (nested (groups)). Culture shock.

Here's your answer: https://xkcd.com/224/

Maybe not such a shame. Having something (mostly) static like CSS has lots of performance advantages. Imagine the web being even slower than it already is; that's what DSSSL probably would have resulted in.

I'm mostly in favour of the "principle of least power", but I think the requirements for CSS would be a good fit for something like DSSSL:

- Designers/developers will, as a general rule, always max-out the system; as long as the performance is at an acceptable level, the features and bloat will increase. If it becomes unacceptable, those features and bloat will be trimmed accordingly. Hence, no matter the underlying technology, performance will almost always hover around "barely acceptable". The difference would be how much "bang for the buck" we would get for that performance; presumably a "barely acceptable" page using CSS would be capable of more than a "barely acceptable" page using DSSSL, since DSSSL would exhaust the performance budget more quickly.

- Trying to dictate which stylistic elements can/cannot be used seems like a thankless task, since many will disagree and either come up with awkward workarounds or lobby to get their desired features included (which may or may not disrupt the coherence of the provided elements). Providing a full programming language is effectively pre-empting those workarounds, and giving the community control over the available elements (e.g. via libraries). This would lead to lots of awful code, but some good ideas would emerge and become widely adopted. Browsers might specialise their evaluators to speed up common usages, etc. Very similar to Javascript, polyfills, etc.

- CSS is, after all, "just" styling information, and is applied progressively on top of HTML. The document is still machine-readable, even if we might not be able to answer particular questions about its layout and visuals. It's conceivable that some people might, for example, obfuscate their document content, and re-assemble them using styles, e.g. to prevent crawling; that's more of a cultural/social issue than a technical one though, and that cat's already out of the bag with Javascript, single page apps, etc.

In any case, the current trend of working around CSS's limitations with Javascript is the worst of both worlds. At least we might attempt to evaluate DSSSL, to see what it might look like, whilst any attempts to evaluate Javascript will quickly run into barriers like side-effects (should we run AJAX calls? What should "alert" do? etc.)

Well, slower and then faster. Virtual selectors mean that CSS is now also Turing-complete (in a horrible, horrible way). And DSSSL would've been able to handle much or maybe even all of what's currently handled by Javascript.

CSS is only Turing-complete in pathological cases. Pathological cases do not make a good basis for policy decisions. In reality, the loss in performance from losing the style sharing cache alone would probably swamp any gains you'd get from DSSSL.

The point is virtual selectors and using Javascript for styling both have very poor performance characteristics, and both are A. ubiquitous and B. hacky kludges driven by the excessively limited nature of CSS as originally designed.

I'm probably out of my depth here, but what do you mean by the style sharing cache? I mean, any kind of file can be cached by the browser, including Javascript. And presumably browsers could've implemented optimizations (with or without the aid of hinting annotations) to identify those DSSSL functions that need only be evaluated once.

(BTW, like the author of the article seems to, I actually think that PSL looks like the best of the CSS alternatives he lists).

> And DSSSL would've been able to handle much or maybe even all of what's currently handled by Javascript.

I severely doubt that. JavaScript can inspect attributes of anything on the entire page, run remote HTTP queries and inspect their attributes, and make styling decisions based on those, on the fly, and in response to user GUI events.

Granted, I'd probably be really okay with these features no longer existing. Just saying, I don't think any DSSSL could have taken them on.

I should've been more clear: DSSSL might've been able, possibly in conjunction with later extensions (just as AJAX was not originally part of JavaScript), to provide a similar end-user experience in many cases as CSS+JavaScript currently does, albeit with very different semantics for the developer.

Not sure it would be better (it probably erred on the side of too much mixing of program logic and styling rather than too little), but it would certainly be different.

Stylesheets would be smaller, and only execute slightly slower on the client due to the expansions needed. Trading rendering speed for network speed is a good choice.

Not only do I think the stylesheets would not end up appreciably smaller after gzip, I also disagree with that argument. Rendering performance (animations, scrolling, etc.) is one of the key reasons why native is perceived to be winning vs. the Web. Making the Web even slower at rich interactive apps is not doing the Web any favors.

> Rendering performance (animations, scrolling, etc.) is one of the key reasons why native is perceived to be winning vs. the Web.

Except this is an argument against JavaScript driven behaviour, not against a restricted styling language slightly more expressive than CSS. The fact that we would be able to move some of this dynamic behaviour from a general purpose language where optimizing redraw is difficult, to a domain-specific language where optimizing redraw is loads easier would yield performance improvements, not regressions.

> The fact that we would be able to move some of this dynamic behaviour from a general purpose language where optimizing redraw is difficult, to a domain-specific language where optimizing redraw is loads easier would yield performance improvements, not regressions.

No, it wouldn't. "Optimizing redraw" isn't difficult, and to the extent that it is it has nothing to do with the expressiveness of CSS.

You've seemingly ignored the main thrust of my point in order to quibble over what I meant by optimising redraw. Seems dishonest at best.

Do you agree or disagree that a domain-specific layout language would be faster to render and animate than a general-purpose programming language interacting with the DOM? This seems like an undeniable yes.

Do you agree or disagree that such a layout language could supplant some of the uses of JavaScript over the years? This too seems to be an undeniable yes.

So it seems undeniable that modern browsers would be faster than they currently are on the very metrics you criticised them on compared to native apps, which seems to be your primary concern.

And you can claim gzip is good enough to eliminate any space savings a more expressive language would yield, but the fact is people still minify their JS and CSS for significant savings, which means even small differences matter; further, domain-specific optimisations have significant effects, and a precompiler could perform advanced common subexpression elimination passes to compress further beyond what gzip could dream of, without affecting the semantics of your layout.

So theoretically and empirically it seems my point that trading off rendering speed for network speed is not only well motivated, but already settled in my favour.
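
As a sketch of the kind of domain-specific factoring gzip alone can't do (the selectors and properties here are invented for illustration; real CSS minifiers do variants of this): shared declaration sets are hoisted out of rules so they appear only once.

```python
# Two rules that share some declarations verbatim.
rules = {
    ".btn": {"color": "white", "padding": "4px", "border": "none"},
    ".tag": {"color": "white", "padding": "4px", "border": "1px solid"},
}

def factor_common(rules):
    """Split rules into (shared declarations, per-selector leftovers)."""
    common = dict(set.intersection(
        *(set(d.items()) for d in rules.values())))
    rest = {sel: {k: v for k, v in d.items() if k not in common}
            for sel, d in rules.items()}
    return common, rest

common, rest = factor_common(rules)
assert common == {"color": "white", "padding": "4px"}
assert rest[".btn"] == {"border": "none"}
```

The shared block would then be emitted once under a grouped selector (".btn, .tag { ... }"), a semantic transformation gzip's byte-level matching can only approximate.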

> Do you agree or disagree that a domain-specific layout language with would be faster to render and animate than a general purpose programming language interacting with the DOM? This seems like an undeniable yes.

No. I don't believe this is true. With CSS as a declarative language, we can do global optimizations that are much harder to do with a general programming language (especially one that's as hostile to static analysis as Scheme!)

> Do you agree or disagree that such a layout language could supplant some of the uses of JavaScript over the years? This too seems to be an undeniable yes.

Sure, but that's not worth slowing down so many Web sites for.

> And you can claim gzip is good enough to eliminate any space savings a more expressive language would yield, but the fact is people still minify their JS and CSS for significant savings, which means even small differences matter

Sure, but it's not worth trading off the rendering performance.

> a precompiler could perform advanced common subexpression elimination passes

Not with Scheme, it sure can't! You can do those dynamically, but not statically.

DSSSL is not Scheme; it's a domain-specific language, which solves nearly all of your objections that aren't conjecture. Domain-specific languages are obviously much more amenable to optimization. Such a stylesheet language can trivially cache JIT-compiled stylesheets, and because they're more expressive, you'd see more reuse across pages, and possibly even across sites. The bandwidth savings are non-linear.

A DSSSL implementation for the desktop still exists: OpenJade [1]. It could probably be ported successfully to JS today. Heck, someone could try to compile it with Emscripten.

[1] http://openjade.sourceforge.net/

DSSSL was amazing. Mr. Clark did some truly outstanding work, and I benefited from using it with SGML and FrameMaker back in the day.

It was hard to learn, though, and the documentation wasn't good.

Looking at the structure, the DSSSL language looks reminiscent of modern-day Sass.

There was also JavaScript Style Sheets: https://en.m.wikipedia.org/wiki/JavaScript_Style_Sheets

That's a good point, I'm working on adding something on that now.

Edit: Done, should be live shortly!

Nice work!

Ha, I'd totally forgotten about that! It seems like I remember Netscape 4 would actually convert CSS into JSS internally. It tried to comply with the standards, but it always felt like the internal warts of the engine would poke through in myriad ways. Which was annoying, because at Sun we had to make things work in Netscape 4 long after the rest of the world had moved on to IE6 and Mozilla.

"HTML is the kind of thing that can only be loved by a computer scientist. Yes, it expresses the underlying structure of a document, but documents are more than just structured text databases; they have visual impact. HTML totally eliminates any visual creativity that a document’s designer might have. — Roy Smith, 1993"

Seems like an odd request in 1993. Sure, Prodigy had visual impact, but it was pretty hard to read. HTML's starkness seems in part due to the fact that styling options were limited by the technology and the ones that existed were easily abused.

Writing long sentences as headers (<h1>) or in all-caps (caps lock) were among the styling options at the time... and they were often misunderstood and abused.

You also have to think about who the audience was (Unix users) and what they were used to (Gopher, irc, plaintext email). With all of those technologies they were used to being in control of the styling of what they consumed. One person's styles for irc could be completely different than another's, and that was fine. The idea that things should be published with one rigid style which must be used to consume it was foreign and not all that attractive.

You could make a case that the only reason any of this happened when it did was because the browsers were reducing the amount of control end users had over page styling. If Mosaic had been super configurable in its styling, it's possible it would have taken even longer for CSS to come about.

What burns me up is that CSS was supposed to allow site authors to have sensible defaults for their pages while enabling me to override them, and yet when I set a dark theme in Firefox all hell breaks loose.

Honestly, the web was better end-user experience for me on links in a terminal: I was able to read text and submit forms, and that's all I really need.

You can do that (of a sort) in Firefox by

Preferences | Content | Colors

* change the Text & Background colors

* uncheck system colors

* always override

Yeah, but the problem is that modern pages are written in such a way that that almost always displays garbage. I tried to do it for a few weeks, but it was terrible looking.

I've set it up to follow Solarized dark color scheme.

Yes, it'll take some time to get used to, BUT it's totally worth it, especially when you're reading a lot of text :D

I did test with light themes; I have to agree they look worse than dark themes.

> With all of those technologies they were used to being in control of the styling of what they consumed. One person's styles for irc could be completely different than another's, and that was fine. The idea that things should be published with one rigid style which must be used to consume it was foreign and not all that attractive.

It's still not attractive to me, which is why I force my styling in Firefox on every website I visit. There's not much styling that can be done in Firefox by default, but I'm pretty happy with it.

I suppose this explains why I've never really liked CSS or the design-heavy web: I was never willing to give up that control over my machine. I still resent the way modern "user-agents" aka browsers give every jackass designer out there more control over what my machine is doing than they retain for me.

I feel your pain for a different reason. After seeing how the major Linux UI toolkits support consistent theming of all the programs using them, original styles/themes be damned, the lack of a consistently-applied theming system/mechanism among web applications is all the more obvious. With web pages this generally isn't an issue (barring annoying/just crap ones), but with web applications, why should their visual appearance and design be arbitrarily (from my perspective, anyway) decided by a developer/designer, with no effective way to alter or override it either cleanly or consistently across multiple applications?

Quite. I strongly dislike sites that cram the article content into 1/3 of the screen, while the other 2/3 is taken up by ads and previews of "relevant" content. Reader mode may well be the best thing to come to Firefox in ages...

PDF, TeX, MS Works, and even images with styled text in them were all around in 1993. Edit: I just realized that TeX and Word were mentioned right at the top of the article.

oh man...Prodigy. I remember using my ole 1200 bps modem to load up those .. EGA? VGA graphics? I can't even remember. I do remember my prodigy user ID though.

Yeah, I was a CompuServe guy, but it was pretty innovative and looked great at the time.

As a web developer, I see two main issues with web styling:

First, the web was built around sharing technical papers. That means HTML structure focuses on those elements that are relevant to papers (outline layout via H* tags, tables of data, not much else), and not the sort of things that marketing and sales want to push (ads, rails/gutters, etc). Those of us who suffered through the early "slice-and-dice" method of making web pages are painfully aware of that. I'm a big fan of technical papers, and my expectations of flash and glitz are minimal (I, for example, hate the modern trend of not using the full width of my window). Despite this, I feel we keep trying to stay true to the origins of the Web rather than allowing for the actual USE of the web.

Second, in an effort to keep the content machine-parseable as well as to allow for agents on different devices, CSS is applied separately from content/structure (theoretically). Specifically, the concepts used DO NOT MATCH the concepts used in developing desktop applications. Even Flexbox, the most recent attempt to fix this, only loosely relates to the way desktop applications would lay out the content.

I'm a huge fan of the GOALS involved in HTML/CSS, but after working on web stuff for over 20 years (not using CSS quite that long), I feel I can say it's been a failure. We've spent all or most of that time on painful workarounds for basic tasks like: centering content (particularly vertically!), adding a left rail/right rail, filling the height of a container, matching the height of the visible window, making sure layered content is z-indexed properly, and those are just the ones off the top of my head. We've invented and reinvented ways to do things like drop down menus, toggleable buttons, and modal windows. Heck, from the very start people implemented their own authentication windows because the appearance and capabilities of the browser-based solutions didn't match the demands.

After 20 years, and with the benefit of all the experience of desktop development to add in, I feel like we shouldn't be fighting to manage such basic requests, that we shouldn't be reimplementing field validation and error messages YET AGAIN because even the latest advanced offerings just don't cut it.
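To be fair to the modern platform, the notorious centering case is finally a few declarative lines with flexbox (class names here are illustrative, not from any comment above):

```css
/* Vertically and horizontally center a child with flexbox. */
.container {
  display: flex;
  align-items: center;     /* vertical centering */
  justify-content: center; /* horizontal centering */
  min-height: 100vh;       /* fill the visible window */
}
```

That this took roughly two decades to arrive is, of course, exactly the complaint.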

We should be able to have:

* "flexible" content (appearances adjusts to visible space)

* machine parseable content

* attractive UI

...without it requiring the dramatic hoop-jumping we have today.

How else would things go when people use a tool for something it was not designed to do? It's an evolutionary thing. A need is identified, a feature addressing the need is proposed. Then new needs are identified, new features are proposed.

The problem wasn't in the tool, the problem was the tragic slowness in the browser vendors implementing the features into the actual browsers. Don't forget that for many of those 20 years we had competing standards of different browsers doing their own thing. If the vendor didn't like a proposed feature of CSS, it didn't get implemented. I wouldn't blame CSS for that.

Every complaint that I see about lack of features of CSS or how long it took for the features we do have to get implemented I blame the browser vendors.

This is a fantastic lesson in history. I have nothing more to say than this is what I come to HN for.

What a great post. I really appreciate these longer digs into the past that go behind the 'what' to explain the how & why of where we got where we are; and just as much the futures that could have been and why they didn't happen. I keep thinking there's room for a decent series discussing the evolution of Rust, since that design process was such a public thing.

I have a talk at the ACM's conference about the history of Rust, but with that much time, there's only so much you can do. The RFC process has helped document a lot, but there's still years of history before that.

I've never understood why you need CSS, I just do it all in VRML

The idea that 3D visualization was being considered in 1994 before CSS was even released is absolutely fascinating, and a testament to how forward-looking early Hypertext proponents were.

Skip down to Marc Andreessen's 'Future capabilities' for Mosaic: http://1997.webhistory.org/www.lists/www-talk.1993q1/0099.ht...

@fat's talk about this [42:07]: https://www.youtube.com/watch?v=iniwPUEbPUM

> It is pretty clear how this proposal was made in the era of document-based HTML pages, as there is no way compromise-based design would work in our app-oriented world. Nevertheless, it did include the fundamental idea that stylesheets should cascade. In other words, it should be possible for multiple stylesheets to be applied to the same page.

> It its original formulation, this idea was generally considered important because it gave the end user control over what they saw.

Content (i.e. ad) blockers are a logical extension of this.

Writing pages for a web site is a mess, polluted by different syntaxes: HTML, CSS, JavaScript, jQuery idioms, Markdown, ... It's a miracle that Wikipedia exists! Several efforts have been made to bring some unity, for instance Skribe, Scribble, LAML, and SXML, but they generally lead to complex systems devoted to coders, forgetting web designers and, of course, beginners.

The {lambda way} project is built as a thin overlay on top of any modern web browser, and devoted to writing, composing and coding on the web, where the markup, styling and scripting are unified in a single language, {lambda talk}.

Commenting this work, somebody wrote this: « Reminds me of John McCarthy's lament at the W3C's choice of SGML as the basis for HTML: "An environment where the markup, styling and scripting is all s-expression based would be nice." »

The project can be seen here: http://epsilonwiki.free.fr/lambdaway/ or in https://github.com/amarty66.

Do you think that {lambda way} is on the right track?

And yet, if everybody followed this logic we would never get new standards.

To me, PSL looks the most promising; at least the conditionals would have come in handy. CSS only 'recently' got features like the calc() function, which is a blessing.

However, I have to agree with the decision to put it aside, because remember how implementing something seemingly simple like CSS went in the days of IE 5 and 6. It was a disaster, and something more complex like PSL would have been even worse.
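For anyone who hasn't used it, calc() lets you mix units that are only resolvable at render time, which used to require JavaScript or table hacks (selector names here are made up for illustration):

```css
/* A main column that leaves room for a fixed 200px sidebar. */
.main {
  width: calc(100% - 200px);
}
```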

I'd like to see CSS-shaped HTML like this:

  doctype html
  html {
    head {
      title Hello there
      script type='text/javascript' src='main.js'
    }
    body {
      p class='one two' {
        span Sample text
      }
    }
  }
Is there an HTML preprocessor using a similar language?

Jade comes somewhat close but opts for the offside rule rather than brackets and abbreviates class and id with . and # respectively.
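For comparison, the fragment above would look something like this in Jade/Pug (indentation in place of brackets, with the . shorthand for class):

```pug
doctype html
html
  head
    title Hello there
    script(type='text/javascript' src='main.js')
  body
    p.one.two
      span Sample text
```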

Just FYI, Jade has been renamed as Pug, for whatever reason. https://github.com/pugjs

Legal reasons :)

Exactly. I'm looking for one that uses brackets instead of indentation and leaves the rest untouched (no abbreviations, or at least not mandatory). I can't find anything already done, but at the same time it's hard to believe it hasn't been done already.

It's kind of like HAML: http://haml.info/

More like Sass; HAML introduces a lot of syntax and is indentation-based, right?

Yeah, you're right. I just meant HAML in the sense that it's a less tag-heavy layout structure for HTML.


> When HTML was announced by Tim Burners-Lee


Fixed, thanks!

While you're there:

"fatal flaw which would plauge" -> plague

Great article!

Got it, thanks!

I love how Bert Bos' homepage pretty much uses CSS to the fullest extent possible (as it should) https://www.w3.org/People/Bos/

If anyone missed it, there's a discussion with Robert Raisch, the developer who made the first stylesheet proposal, in the article's comments.

And several years before that, there was Motif Toolkit.

Love the RELEVANCE selector in CHSS. Could use that today.

I spent a long time learning XSL/XSLT. The theory was we'd represent the data on a web page with XML, and then determine how it's to be displayed with XSL/XSLT. I think browsers still support it, but it never caught on.

Which is very sad, because it's a totally sane way to do it. Have a set of different low-level languages to describe a screen layout, a paper layout, an audio layout for the blind; a single high-level language that describes content; and three transformations written in a very powerful yet mostly declarative language. What's not to like?

For example, sometimes we may need to enumerate certain things, e.g. headings. Normally this is a part of the final typesetting package (LaTeX or MS Word, whatever). But with XSLT and XSL/FO we have a different share of responsibilities: XSLT computes the numbers and XSL/FO typesets what it's given, so it has one less thing to worry about. This is great, because typesetting is complex enough already.

Besides, the XSLT way of creating styles (xsl:attribute-set) is one of the most elegant I've seen. It's very simple (one element and one attribute) and very powerful at the same time: you can easily inherit styles and/or use mixins, or combine these approaches, and there's no ambiguity. And it's generic.
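For anyone who never touched XSLT, here's a minimal sketch of what that looks like; the set names and properties are illustrative, but the xsl:attribute-set / use-attribute-sets mechanism is standard XSLT 1.0:

```xml
<xsl:attribute-set name="base-text">
  <xsl:attribute name="font-family">serif</xsl:attribute>
  <xsl:attribute name="font-size">12pt</xsl:attribute>
</xsl:attribute-set>

<!-- "heading" inherits everything from "base-text", then overrides font-size -->
<xsl:attribute-set name="heading" use-attribute-sets="base-text">
  <xsl:attribute name="font-size">18pt</xsl:attribute>
  <xsl:attribute name="font-weight">bold</xsl:attribute>
</xsl:attribute-set>

<!-- applied to an XSL-FO output element -->
<fo:block xsl:use-attribute-sets="heading">Chapter 1</fo:block>
```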

Did a little of that, myself.

A lot of the fundamental premise there did actually happen, just with different technologies. Today, there's a lot of sending structured data (in forms like JSON) to server- or client-side templates and components that render in a declarative way.
