Most developers I knew in person didn't care about CSS or didn't 'get' it. Things have come a long way since, in terms of browser compatibility and tooling, but it still feels like a large portion of people doing web dev don't 'get' the web. For example, I'm always surprised at the usage of frameworks such as Bootstrap in tech teams: you're basically redefining what you want your elements to look like using a combination of class names, which means you lose style separation and selectors. It's like a circle repeating itself.
People will always want to use the web to build their 'visual' or 'app' to look exactly how they want, as if they were designing a business card. But the web at its core is a web of knowledge, meant to be linked by other pages and browsed by bots, and it's worth little without traffic or a way to catalog that knowledge in the grand scheme of things. That's why a bunch of people like me have always been pushing for a semantic definition of the knowledge in your document, kept separate from its presentation, and considered before it.
Now we can make rich apps and get pretty much pixel-exact renders, but all the underlying philosophy remains: accessibility, context, and semantics should still come before styling in order of priority, and all the bells and whistles should ideally be implemented as an augmentation of the semantics.
Ideally content providers would also serve an XSLT transformation for producing RDF/XML from the domain-specific XML representation, so that generic web crawlers could understand the meaning of the data on the page, or the generated XHTML could contain RDFa markup. We still haven't gotten to the point of re-engineering this functionality, and it may never happen, since the major web businesses' revenue model depends on maintaining exclusive control of their data, and the interoperability ideals of linked data are directly opposed to that.
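To make the RDFa idea concrete, here's a minimal sketch of what a generic crawler could do with RDFa-annotated markup: pull property/content pairs out of the page using only Python's standard library. The sample page and vocabulary are made up for illustration; a real crawler would use a full RDFa parser rather than this toy.

```python
# Toy extractor for RDFa property/content pairs, stdlib only.
from html.parser import HTMLParser

class RDFaExtractor(HTMLParser):
    """Collects (property, content) pairs from RDFa attributes."""
    def __init__(self):
        super().__init__()
        self.triples = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a and "content" in a:
            self.triples.append((a["property"], a["content"]))

# Hypothetical page fragment annotated with schema.org terms:
page = '''<div vocab="http://schema.org/" typeof="Article">
  <meta property="name" content="XHTML retrospective"/>
  <meta property="datePublished" content="2016-05-01"/>
</div>'''

extractor = RDFaExtractor()
extractor.feed(page)
print(extractor.triples)
# [('name', 'XHTML retrospective'), ('datePublished', '2016-05-01')]
```

The point is that the crawler never needs to understand the page's layout or CSS; the meaning travels in the attributes, which is exactly the separation the comment above is arguing for.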
Ok, now I understand all the buzz around XHTML back when I was a student, and why so many people kept talking about XSLT. I just never saw anybody point it out before, and for something that interesting I have no idea why people weren't talking about it everywhere.
Extracting data from circa-2005 HTML was a nightmare. It's only better now because libraries like beautifulsoup have gotten so much better at guessing structure, and even today I have things that just plain come out wrong because the HTML structure of what I'm scraping is so bad.
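As a hedged sketch of what "guessing structure" from sloppy markup looks like: BeautifulSoup does this far more robustly, but even the stdlib's lenient `HTMLParser` can recover links from HTML with unclosed tags and unquoted attribute values, the kind of markup that was everywhere circa 2005. The sample page below is invented for illustration.

```python
# Scraping links out of deliberately messy, circa-2005-style HTML.
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Pulls href values out of anything that vaguely resembles an <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

messy = """<HTML><Body bgcolor=white>
<font size=2><A HREF=index.html>Home</a>
<p>News<br><a href='news.html'>latest
<table><tr><td><A HREF="about.html">About"""

scraper = LinkScraper()
scraper.feed(messy)
print(scraper.links)  # ['index.html', 'news.html', 'about.html']
```

Note that nothing here validates the document: unclosed `<p>`, `<font>`, and `<table>` tags just get ignored, which is why this kind of scraping works at all, and also why it silently breaks the moment the structure gets bad enough.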
Speaking as someone who coded a web app with browser-side XSLT and linked data ten years ago.
> which means you lose style separation
A key difference across that span of time is that back then it was all about pages, whereas now it's a mix of pages and applications. In an interactive application, separating style from content becomes less important, sometimes adding detrimental complexity in the name of trying to be "pure".
It is still very relevant for actual pages that are about content, and there are many instances of content being lost in the mire of people treating what should be simple pages as if they are complex applications, but...
While content first, then basics, then bells and whistles as optional extras is still a worthwhile ideal and can save time & effort long term, it can consume more time short term, and often people don't want to risk asking the world to wait for them!
But.... I think this is the norm, and not the exception. Sure, there are certainly a lot more "web applications" out there today than there were in 1999/2003, but I would wager there are far more "web apps that should just be pages" than legit app use-cases.
There's not a clear separation between "text" and "application" on the web. Nearly every site has both.
One thing many designers and programmers alike struggle with is who their users are and what they want. The problem, if it is a problem, with Bootstrap is that it makes everything look the same. It's both smart and lazy from a web dev's pov.
At first I was a bit annoyed and confused, but then I actually liked it, the more I had to get creative with CSS: although you can reinterpret a subreddit in 1000 different ways, they all "feel" the same.
I mean, I basically just gave an elevator pitch for CSS but I suppose my point is that Reddit works well as it is. You might have to do a bit of thinking outside the box but some nice designs are possible.
If you need anything more than that, chances are your design is a bit too over the top and should be simplified.
It being a broken technology, used for tasks it wasn't until recently even remotely suited for (layout), probably played a role in that.
For me it was the realisation that I was more colorblind than I previously thought, and Bootstrap meant nicer designs than anything I would be able to create myself.
So I fell back to bootstrap (unless I'm working with a dedicated html / css person - then I'll let them decide.)
I think you're thinking like an engineer rather than a customer. The customer is more important than the engineering, as long as the engineering supports what the customer is trying to do. Very few customers rely on semantic markup, thus it is not very important.
> The customer is more important than the engineering, as long as the engineering supports what the customer is trying to do
I don't think anybody has ever been under the belief that customers really were gonna do a "View Source" on a web page and just marvel at your clean HTML.
Consider that from the beginning of the web, people could have composed pages entirely of image maps with hotspots to direct them (and some did).
Or as soon as JS arrived, they could have replaced hyperlinks with scripted behavior (as some did and many do now).
Where would Google have come from? Its foundation was largely in the semantics of hyperlinks.
My history is surprisingly similar to yours, I started in 1999, I used Notepad as my first text editor, and by 2003 I got caught up in the movement towards making markup strict, which I felt was the mark of professionalism. However, by 2006 I had mostly rejected the notion of "strictness". There were several things that turned me against strictness. One of them was Mark Pilgrim's essay "XML on the Web Has Failed":
Another problem was brought up by Sam Ruby: his daughter sent him an image, which he wanted to share on MySpace, but he couldn't. And the reason he couldn't was that the image was in SVG format, which required strict XML, and MySpace was, of course, very far from anything "strict".
Some people looked at the chaos of non-standard HTML and decided the Web was successful because it had been broken from the beginning, and it had learned to work well while broken. I reached a different conclusion. It became clear to me that what developers wanted to do simply had nothing to do with HTTP/HTML.
We don't yet have the technology to do what developers want to do. HTML was an interesting experiment, but it suffered from a dual mandate. Sir Tim Berners-Lee wanted HTML to both structure data and also present it in graphical form. Almost from the start, developers were conflicted about which mandate they should obey, but the preference, since at least 1993, if not earlier, was to give priority to the visual. For all practical purposes, developers saw HTTP/HTML as a GUI for IP/TCP. Previous IP/TCP technologies (email, gopher, ftp) had lacked a visual component, but HTML finally offered a standard way to send things over the Internet and format the end result visually (emphasis on "standard"; I could insert an aside here about X window systems and some cool software of that era, but every app implemented its own ideas about visual presentation. XEmacs, for instance, had its own system for visual formatting over a network).
The quote that Mark Pilgrim used at the beginning of his article is worth repeating:
"There must have been a moment, at the beginning, where we could have said ... no. Somehow we missed it. Well, we'll know better next time."
"Until then ..."
-- Rosencrantz and Guildenstern are Dead
He may have meant that ironically (since Rosencrantz and Guildenstern are murdered) but I like to think that we will eventually get rid of HTTP/HTML and replace it with what developers actually want: a pure GUI for IP/TCP, a technology with a single mandate, with no burden of also offering semantics or structure or hierarchy.
What does that actually mean? There are all kinds of systems which use a TCP link to produce a display on one end, with all kinds of different design compromises, made for different reasons and use cases.
If people had, by some miracle, standardised the web as a graphical format back in the early 90s by dictating that screens had to be 4:3 format with a particular minimum resolution, what would have happened to smartphones?
Isn't that what Flash, Silverlight, Java Applets and Canvas offer?
The "There must have been a moment..." quote is from the play. However it is a very ironic play, and it's been my experience that anyone who quotes it does so ironically.
Well, for one, E4X was never adopted by the other browsers that mattered, only Mozilla.
Second, its main use was for working with XML, which the web is glad we got rid of.
Had browsers properly adopted XHTML and the accompanying standards, today we could already enjoy something like XAML in the browser, instead of having only Chrome supporting Web Components, with workarounds like the Shadow DOM to preserve local changes.
I mean, I get that it's extensible thanks to XML, but that doesn't mean browsers would actually have an incentive to create and implement such extensions.
Basically Events, Modularization, Fragments, XForms, XQuery and possibly other parts that were still being worked on when the HTML 5 hype started.
And here we are, yet to have a Web UI designer that can match Blend, Qt Creator, NetBeans Matisse, Scene Builder, Glade, Delphi, C++ Builder, ...
When I do web development I always feel like I am stuck with something not better than Notepad for GUI programming.
It's sad and surprising how much time and energy was wasted on XHTML. Back in February I met Steven Pemberton (XHTML spec lead back then, and of ABC/Python fame). He's just such an inspiring guy to talk to, but unfortunately, there was no time for recapitulating the XHTML situation.
Huh, my experience was that all those things are terrible (I spent some time with Qt, researched Glade, and also inherited a Delphi codebase full of spaghetti code at one point).
Visual UI tools are great for prototyping, awful for maintainable code. Anyone I know who's spent any time with them ends up wanting to go back to the code.
There certainly are good Web UI designers, it just seems no-one wants them. There are successful visual Web UI prototyping tools tho.
Trying to manually hack GUI generated code is an anti-pattern.
One has to leave the code generated by the tools to the tools, everything else should live in other code files.
Sadly I never heard of Macaw, especially in the enterprise circles I move in; in any case they seem to be gone now.
If you see value in XML, you really should look into XML's big sister SGML. It gives you downward compat with XML, tag inference, type-aware (injection-free) variable and macro expansion, custom Wiki syntaxes (markdown and others), an integrated stylesheet language without new ad-hoc syntax, full HTML 5 parsing with short forms for attributes etc., and more. You might actually like it.
However this is meaningless if the browsers keep being a document engine, full of workarounds to translate a mix of document tags, coupled with generic <div> and <span>, into some kind of general-purpose GUI layout engine.
Having to find the proper incantation of CSS3 transforms to portably trigger GPU acceleration is a good example of such hacks.
Web Components + Shadow DOM looked like they would be the solution, even if a bit hackish, but they're Chrome-only.
My favourite of the failed web technologies was VRML. Back in the 90s I used to create epic 3D landscapes that would render inside the browser, much like how some experiment with WebGL these days. But this was long before 3D accelerators were common in PCs (they were still very expensive, so most graphics cards software-rendered 3D models). VRML, I think, was a victim of being too far ahead of its time.
The web was exciting back then. Nobody really knew what you could or couldn't do with it. These days I feel we've lost something with all these bloated frontend frameworks which act like magic boxes in that an alarming number of frontend developers don't understand how their code executes. But I guess that's what happens when science or technology becomes business.
FrontPage was a great educational tool, sure it produced crappy HTML, sure it was IE biased, but it made you learn, it was fast and came bundled in Office. Countless numbers of hobby web pages were built thanks to it.
Mostly because of cargo cult -- as Dreamweaver produced very clean HTML output.
That was my point. If you're familiar enough with HTML to use Arachnophilia (which, if anyone isn't familiar with that particular editor, was basically just the Notepad++ of its day) then Dreamweaver would just seem excessive. Hence why I'd recommend it for designers rather than developers ;)
> as Dreamweaver produced very clean HTML output.
Cleaner than FrontPage and most of the other design tools out then, sure. But my experience of Dreamweaver around that era was that its HTML output still needed a lot of manual refactoring in a text editor afterwards. While it was better than most GUIs it was still a long way off "very clean HTML output".
I'm sure things have improved significantly in the last 15/20 years though. But that was certainly the state of things back in the late 90s / early 00s.
As an aside: I won a Source Code Planet competition (anyone remember them?) one month with an entry I made in Dreamweaver. It was a mockup of a Windows 95 or 98 (I forget which) desktop with a functioning start menu. I always felt naughty for winning with something I built in a tool that auto-generates a lot of the code for you, while other projects I released were a lot more credible (eg DirectX games) but far less popular. I guess it just goes to show how little most people cared about code quality even back then.
GitHub is the current thing, PSC/Sourceforge/GoogleCode are still technically (at least read-only) around.