Hacker News

I started doing web development around 1999 or a bit earlier; Notepad.exe was my first text editor. By 2003 I was building websites in strict XHTML and CSS, always avoiding presentational markup. OpenWeb.eu.org was one of my favorite resources, along with the W3C specs.

Most developers I knew in person didn't care about CSS or didn't 'get' it. Things have come a long way since, in terms of browser compatibility and tooling, but it still feels like a large portion of people doing web dev don't 'get' the web. For example, I'm always surprised at the use of frameworks such as Bootstrap in tech teams: you're basically redefining what you want your elements to look like using a combination of class names, which means you lose style separation and selectors. It's like a circle repeating itself.

People will always want to use the web to make their 'visual' or 'app' look the way they want, just as if they were designing a business card. But the web at its core is a web of knowledge: pages exist to be linked by other pages and browsed by bots, and in the grand scheme of things a page is worth little without traffic or a way to catalog its knowledge. That's why a bunch of people like me have always pushed for a semantic definition of the knowledge in your document, kept separate from its presentation and considered before it.

Now we can make rich apps and get pretty much pixel-exact renders, but the underlying philosophy remains: accessibility, context, and semantics should still come before styling in order of priority, and all the bells and whistles should ideally be implemented as an augmentation of the semantics.




Just composing proper XHTML was never enough to be very useful. XHTML was promoted to web developers (my impression; I was only around for the tail end) as merely a stricter version of HTML, and that's how IE implemented it, so that was the only practical way of using it.

XHTML on its own did hardly anything that HTML didn't do. It's easier to parse, but HTML was already parseable, and anyone who was going to try to extract meaning from human-readable webpages could already do that. The real value of XHTML was that it could be generated from domain-specific XML using XSLT. So your data could be served as machine readable XML in a standard, versioned schema particular to the application or document publisher, and then converted into an XHTML webpage by the browser. IE went a long time without implementing XSLT, so nobody did this, but the idea behind it is a popular approach today: it's essentially the same as serving static HTML which then fetches its data by API queries using JavaScript. But this would have worked with JavaScript disabled, and would have made the human- and machine-readable data exist at the same URI, so that it would be clear to machines what data the human-readable presentation was representing.
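The pipeline described here can be sketched with a tiny XSLT stylesheet. The <catalog>/<book> schema and file names below are hypothetical; the XML document would point at the stylesheet with an xml-stylesheet processing instruction, and the browser would apply it on load:

```xml
<?xml version="1.0"?>
<!-- books.xsl: the XML document would reference it with
     <?xml-stylesheet type="text/xsl" href="books.xsl"?>
     and the browser would render the resulting XHTML. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">
  <xsl:template match="/catalog">
    <html>
      <head><title>Catalog</title></head>
      <body>
        <ul>
          <!-- one list item per <book> element in the source XML -->
          <xsl:for-each select="book">
            <li><xsl:value-of select="title"/></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

The same URI serves both audiences: a crawler reads the raw <catalog> XML, a human gets the rendered page.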

Ideally content providers would also serve an XSLT transformation for producing RDF/XML from the domain-specific XML representation, so that generic web crawlers could understand the meaning of the data on the page, or the generated XHTML could contain RDFa markup. We still haven't gotten to the point of re-engineering this functionality, and it may never happen, since the major web businesses' revenue model depends on maintaining exclusive control of their data, and the interoperability ideals of linked data are directly opposed to that.
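As a sketch of the RDFa idea: the generated XHTML could carry machine-readable annotations inline. The fragment below uses the Dublin Core vocabulary; the title, name, and date are made-up examples:

```html
<!-- RDFa annotations on otherwise ordinary markup: a crawler can
     extract dc:title, dc:creator and dc:date as triples, while a
     human just sees a heading and a byline. -->
<div xmlns:dc="http://purl.org/dc/elements/1.1/">
  <h1 property="dc:title">XHTML and the Semantic Web</h1>
  by <span property="dc:creator">Jane Example</span>,
  <span property="dc:date">2007-03-01</span>
</div>
```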


> The real value of XHTML was that it could be generated from domain-specific XML using XSLT. So your data could be served as machine readable XML in a standard, versioned schema particular to the application or document publisher, and then converted into an XHTML webpage by the browser.

Ok, now I understand all the buzz around XHTML back when I was a student, and why so many people kept talking about XSLT. I just never saw anybody point it out before, and for something that interesting I have no idea why people weren't talking about it everywhere.


>HTML was already parseable and anyone who was going to try to extract meaning from human-readable webpages could already do that.

Extracting data from circa-2005 HTML was a nightmare. It's only better now because libraries like beautifulsoup have gotten so much better at guessing structure, and even today I have things that just plain come out wrong because the HTML structure of what I'm scraping is so bad.


While I agree those were the goals, you don't need XHTML and XML to achieve them. The same is possible with SGML and/or other processors (and XSLT can generate plain HTML, too). HTML and RDFa crawling on the web is a thing.

Speaking as someone who coded a web app with browser-side XSLT and linked data ten years ago.


> I started doing web development around 1999 or a bit earlier

> which means you lose style separation

A key difference across that span of time is that back then it was all about pages, whereas now it is a mix of pages and applications. In an interactive application, separating style from content becomes less important, sometimes adding detrimental complexity in the name of trying to be "pure".

It is still very relevant for actual pages that are about content, and there are many instances of content being lost in the mire of people treating what should be simple pages as if they are complex applications, but...

While content first, then basics, then bells and whistles as optional extras is still a worthwhile ideal, and can save time and effort long term, it can consume more time short term, and often people don't want to risk asking the world to wait for them!


> there are many instances of content being lost in the mire of people treating what should be simple pages as if they are complex applications, but...

But... I think this is the norm, not the exception. Sure, there are certainly a lot more "web applications" out there today than there were in 1999/2003, but I would wager there are far more "web apps that should just be pages" than legit app use cases.


Every web page with a menu hacked out of nav and ul elements and some CSS would gain from including some application-oriented markup at its top level and restricting the text-oriented markup to the text area.
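A minimal sketch of that split, keeping the application chrome (the nav/ul menu) apart from the text-oriented markup; the element choices here are just one illustration:

```html
<!-- application-oriented markup at the top level -->
<nav aria-label="Site">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</nav>

<!-- text-oriented markup confined to the content area -->
<article>
  <h1>Post title</h1>
  <p>Body text of the page.</p>
</article>
```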

There's not a clear separation between "text" and "application" on the web. Nearly every site has both.


I disagree. You're right that there is of course no clear separation (the opposite also being true: most complex web applications will contain text content), but the differentiator is when the application side becomes large and complex enough to warrant a change in paradigm. Doing that for a little blog menu is overengineering.


Every page starting from the application paradigm, and explicitly switching when displaying text, would fit almost all cases better than application and text markups that don't talk to each other.


I don't think that's true at all. It would appear that way if you're only considering the public, "user space" internet. I've done nothing but build web applications in the enterprise sector for the last several years, all of which could not be "just pages".


But for most human beings: the icing is the cake. We hurt ourselves by thinking that "content is king" outside our own beautiful wonderful nerdy circles.


Yeah but in certain circles, those non-icing-heavy websites might actually signal greater authenticity. I LOVE college professor websites that are full of great technical info, but look like they've not been designed at all.


Depends on the circle you're making the website for I guess.


The recent HN thread about Reddit working on a redesign is relevant to that statement. What circle are Reddit users? The site is a link aggregator with comments, which many users will argue works well and has been successful because of its apparent lack of styling. Yet many other users complain that it isn't 'pretty' enough. Either way, the point of the site, the links and comments, is there.

One thing many designers and programmers alike struggle with is who their circle/users are and what they want. The problem, if it is a problem, with Bootstrap is that it makes everything look the same. It's both smart and lazy from a web dev's point of view.


Recently I opened a subreddit for a little known Korean mobile developer, just as a fan, and I was surprised that you get zero control over the HTML.

At first I was a bit annoyed and confused, but then I actually came to like it, the more I had to get creative with CSS: although you can reinterpret a subreddit in 1000 different ways, they all "feel" the same.

I mean, I basically just gave an elevator pitch for CSS, but I suppose my point is that Reddit works well as it is. You might have to do a bit of thinking outside the box, but some nice designs are possible.

If you need anything more than that, chances are your design is a bit too over the top and should be simplified.


You get a tiny amount of control over the HTML with :before and :after content, removing elements with display:none, and going crazy with complex selectors and absolute positioning.
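Those tricks look roughly like this; the selectors and class names below are hypothetical stand-ins for old-Reddit stylesheet hooks:

```css
/* Inject text the HTML doesn't contain, via generated content. */
.sidebar .community-name::after {
  content: " — fan community";
}

/* Hide elements you can't remove from the markup. */
.sidebar .unwanted-box {
  display: none;
}

/* Reposition a piece of the page you can't restructure. */
.header .tagline {
  position: absolute;
  top: 0;
  right: 1em;
}
```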


Or we disagree on the meaning of "content". If we talk about typical literature, sure we can agree the text is the content and typography and layout is presentation. But what about a game? Or visual art? What is the "icing" and what is the "cake"?


After 2 decades browsers are finally evolving to add the icing themselves:

https://support.mozilla.org/en-US/kb/firefox-reader-view-clu...


To my mind, Reader View removes most of the 'icing'.


Safari had this feature a few years back but appear to have removed it sometime about 3 years ago. It was great, but also removed all ads from the page, so it probably wasn't popular with online news sites.


This feature has been in Safari and Firefox for ages, and is still there. It’s awesome.


I use it regularly in Safari. I also have an HN-app on my iOS-devices giving me a similar "reader view" and removing fluff from the websites. I find it really pleasing when just reading articles (I've often caught myself going into devtools to remove navigation bars, sidebars, etc from websites when reading longer articles since they distract me).


ctrl + alt + r, btw ;)


I used Arachnophilia (the old native application, not the Java-based one), FrontPage and Dreamweaver.

HTML 4 Transitional used to be very popular back in 1999/2000. The XHTML movement was a weird hype. I tried it out, but TinyMCE/WYSIWYG editors spat out old HTML3 code at that time, so I used XHTML 1 Transitional, but with the .html extension, since IE6 didn't support .xhtml. Anyway, XHTML 2 was a train wreck: they were nuts to propose a completely incompatible syntax, a failed ideology, and I switched back to HTML4 shortly after (2004).

There was also a weird short hype around a new, incompatible JavaScript version 4, E4X for short. XML used to be everywhere, and some nuts tried to make XML part of JavaScript syntax; Mozilla and Adobe were into this crazy land. Thankfully it died, and there was never a JS4, but JavaScript 5 strict was superb.

CSS was okayish since IE5.5, and especially IE6 and the Mozilla Suite. Then there was the hype to not use tables but do everything with CSS divs. I quickly learned that a few tables and CSS for everything else worked well in practice and gave one a responsive design long before it had a name.

Loading data asynchronously with XML was already possible on both IE5/6 and Firebird/early Firefox in 2003. I found only two sites back then with documentation for that obscure API, but it worked great: I made a CD-ROM based e-learning application written as a one-page app (HTML4, CSS and JS), loading data from XML files and showing text, pictures and multimedia videos (Flash MX, before there was the FLV format). That was two years before it became well known as AJAX (now XHR).

https://en.m.wikipedia.org/wiki/XHTML#XHTML_2.0

https://en.m.wikipedia.org/wiki/ECMAScript_for_XML


E4X was a standard for XML literals in JavaScript, supported by Mozilla's JavaScript engines, including Rhino. It was hated, or unknown at best, because "XML sucks", so Mozilla removed it a couple of years ago. Of course, today folks cheer at React's JSX, which is more or less the exact same thing, yet comes with the Facebook license. Goes to show how much irrationality there is in the evolution of web standards. It's also something to keep in mind when judging use of the newest ES7+ syntax sugar on a project.
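The key difference in mechanism: E4X made the XML literal itself a first-class value in the engine, while JSX is compiled away into plain function calls before it runs. A rough sketch of the desugared form, with `h` as a hypothetical stand-in for React.createElement:

```javascript
// JSX like <a href="/docs">Docs</a> is compiled to a call such as
// h("a", { href: "/docs" }, "Docs") before the code ever runs.
// This h is a toy stand-in for React.createElement: it just builds
// a plain object describing the element.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

const link = h("a", { href: "/docs" }, "Docs");
// link is now { type: "a", props: { href: "/docs" }, children: ["Docs"] }
```

So JSX produces ordinary data that a library interprets, whereas E4X baked XML handling into the language itself.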


>Goes to show how much irrationality there is in the evolvement of web standards.

Well, for one, E4X was never adopted by the other browsers that mattered, only Mozilla.

Second, its main use was working with XML, which the web is glad to have gotten rid of.


Oh, XML will soon be only remembered as that thing which used to stand for X in AJAX.


JSX is really just syntactic sugar for function calls. I suppose you could use React components as data structures, but that's not really the intended use case.


XHTML hyper here.

Had browsers properly adopted XHTML and the accompanying standards, today we could already enjoy something like XAML in the browser, instead of having only Chrome supporting Web Components, with workarounds like the Shadow DOM to preserve local changes.


I also liked XHTML, but why do you see it as a transition path to something like XAML? Isn't it basically well-structured HTML?

I mean, I get that it's extensible thanks to XML, but that doesn't mean browsers would actually have an incentive to create and implement such extensions.


The XAML-like parts were the set of XHTML component standards.

https://www.w3.org/standards/xml/components

Basically Events, Modularization, Fragments, XForms, XQuery and possibly other parts that were still being worked on when the HTML 5 hype started.

And here we are, yet to have a Web UI designer that can match Blend, Qt Creator, NetBeans Matisse, Scene Builder, Glade, Delphi, C++ Builder, ...

When I do web development I always feel like I'm stuck with something no better than Notepad for GUI programming.


Ok, I have to disagree here. I believe the failure of XHTML was cramming everything into XML; I found XForms especially pointless. Now don't get me wrong: as you know from my other posts here, I'm as much of a markup geek as can be, but IMO markup is first and foremost for representing and authoring text.

Just because browser content is mostly text doesn't mean everything has to be markup. To the contrary, I believe XHTML (and XML in mainstream apps) fell out of favor because the spec authors tried to anchor each and everything on XML, rather than on something that makes sense for the task at hand (a phenomenon not unheard of with JSON and YAML as well). In XForms, for example, XML was used as a programming language, which just never made sense. You know you're doing it wrong, IMHO, when you have to discuss whether to store your data in attributes or in element content, a distinction that only makes sense for text data.

It's sad and surprising how much time and energy was wasted on XHTML. Back in February I met Steven Pemberton (XHTML spec lead back then, and of ABC/Python fame). He's just such an inspiring guy to talk to, but unfortunately there was no time to recapitulate the XHTML situation.


> And were we are, yet to have a Web UI designer that can match Blend, Qt Creator, Netbeans Matisse, Scene Builder, Glade, Delphi, C++ Builder,....

Huh, my experience was that all those things are terrible (I spent some time with Qt and researching Glade, and also inherited a Delphi codebase full of spaghetti code at one point).

Visual UI tools are great for prototyping, awful for maintainable code. Anyone I know who's spent any time with them ends up wanting to go back to the code.

There certainly are good Web UI designers [1], it just seems no one wants them. There are successful visual Web UI prototyping tools, though.

[1] http://macaw.co/


Those tools are wonderful and make my life on native frontend projects enjoyable, versus the pain of having to deal with Web design, which I happen to have experience with since the early days (my first web apps were C-based CGIs in 1997).

Trying to manually hack GUI-generated code is an anti-pattern.

One has to leave the code generated by the tools to the tools; everything else should live in other code files.

Sadly I'd never heard of Macaw, especially in the enterprise circles I move in; in any case, they seem to be gone now.


You don't need XML for that. HTML can be parsed using SGML precisely and elegantly [1] (if I may say so of a project of mine).

If you see value in XML, you really should look into XML's big sister, SGML. It gives you downward compatibility with XML, tag inference, type-aware (injection-free) variable and macro expansion, custom Wiki syntaxes (markdown and others), an integrated stylesheet language without new ad-hoc syntax, full HTML 5 parsing with short forms for attributes etc., and more. You might actually like it.

[1]: http://sgmljs.net/blog/blog1701.html


I was a big fan of DocBook, back when SGML was better known; thanks for the heads-up.

However, this is meaningless as long as browsers remain document engines, full of workarounds to translate a mix of document tags, coupled with generic <div> and <span> elements, into some kind of general-purpose GUI layout engine.

Having to find the proper incantation of CSS3 transforms to portably trigger GPU acceleration is a good example of such hacks.

Web Components + Shadow DOM looked like it would be the solution, even if it is yet another hack, but it is Chrome-only.


I loved Arachnophilia. Awesome little editor. Wasn't keen on Dreamweaver; I guess it's OK if you're a designer, but not really something I'd ever have recommended for developers. FrontPage though; you're brave admitting to using that!

My favourite of the failed web technologies was VRML. Back in the 90s I used to create epic 3D landscapes that would render inside the browser, much like how some experiment with WebGL these days. But this was long before 3D accelerators were common inside PCs (they were still very expensive, so most graphics cards software-rendered 3D models). VRML, I think, was a victim of being too far ahead of its time.

The web was exciting back then. Nobody really knew what you could or couldn't do with it. These days I feel we've lost something with all these bloated frontend frameworks which act like magic boxes in that an alarming number of frontend developers don't understand how their code executes. But I guess that's what happens when science or technology becomes business.


> Frontpage though; you're brave admitting to using that!

FrontPage was a great educational tool. Sure, it produced crappy HTML; sure, it was IE-biased. But it made you learn, it was fast, and it came bundled with Office. Countless hobby web pages were built thanks to it.


>Wasn't keen on Dreamweaver - I mean I guess it's ok if you're a designer but not really something I'd have ever recommend for developers.

Mostly because of cargo cult -- as Dreamweaver produced very clean HTML output.


> Mostly because of cargo cult

That was my point. If you're familiar enough with HTML to use Arachnophilia (which, for anyone who isn't familiar with that particular editor, was basically the Notepad++ of its day), then Dreamweaver would just seem excessive. Hence why I'd recommend it for designers rather than developers ;)

> as Dreamweaver produced very clean HTML output.

Cleaner than FrontPage and most of the other design tools out then, sure. But my experience of Dreamweaver around that era was that its HTML output still needed a lot of manual refactoring in a text editor afterwards. While it was better than most GUIs, it was still a long way off "very clean HTML output".

I'm sure things have improved significantly in the last 15/20 years though. But that was certainly the state of things back in the late 90s / early 00s.

As an aside: I won a Planet Source Code (anyone remember them?) competition one month with an entry I made in Dreamweaver. It was a mockup of a Windows 95 or 98 (I forget which) desktop with a functioning start menu. I always felt naughty for winning with something I built in a tool that auto-generates a lot of the code for you, while other projects I released were a lot more credible (eg DirectX games) but far less popular. I guess it just goes to show how little most people cared about code quality even back then.


I remember Planet Source Code; it was the GitHub of the late 1990s. Everything from VB, C++ and HTML samples was there, a one-stop community platform. I coded a Windows 98 shell replacement, a FrontPage WYSIWYG clone, a Paint clone, and DirectX 8 games in VB6. Then MSFT canceled VB7 and people moved elsewhere. Then SourceForge took over. And later Google Code took over. Then GitHub took over.


Sourceforge is not dead, it's just sleeping.


Well, technically PSC (Planet Source Code) isn't dead either; new VB6/etc code still gets uploaded even in 2017. https://www.planet-source-code.com/vb/scripts/BrowseCategory...

GitHub is the current thing; PSC/SourceForge/Google Code are technically still around (at least read-only).


>Most developers I knew in person didn't care about CSS or didn't 'get' it.

It being a broken technology, used for tasks it wasn't until recently even remotely suited to (layout), probably played a role in that.


Yep. CSS disgusts me. It was simpler doing layouts with tables.


Use display: table in CSS; it is exactly the same display model as tables, just disconnected from specific HTML elements.
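A minimal sketch of that (the class names are made up): the layout behaves like a table while the markup stays plain divs:

```css
/* A row of columns using the table display model;
   no <table>, <tr> or <td> elements involved. */
.row  { display: table; width: 100%; table-layout: fixed; }
.cell { display: table-cell; vertical-align: top; padding: 0 1em; }
```

The markup would just be nested divs, e.g. `<div class="row"><div class="cell">...</div></div>`.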


I was also a web standards man.

For me it was the realisation that I was more colorblind than I'd previously thought, and that Bootstrap meant nicer designs than anything I would be able to create myself.

So I fell back to Bootstrap (unless I'm working with a dedicated HTML/CSS person; then I'll let them decide).


Styling will keep a company alive long before a lack of accessibility kills it. Semantic markup is borderline pointless, except to rationalize design elements and keep them consistent.

I think you're thinking like an engineer rather than a customer. The customer is more important than the engineering, as long as the engineering supports what the customer is trying to do. Very few customers rely on semantic markup, thus it is not very important.


    The customer is more important than the engineering, 
    as long as the engineering supports what the customer 
    is trying to do
That is, and always has been, pretty much the entire point of good engineering practices such as keeping one's markup as semantic as possible -- allowing us to deliver stuff more gooder and fastener to the customer.

I don't think anybody has ever been under the belief that customers really were gonna do a "View Source" on a web page and just marvel at your clean HTML.


Semantic markup is for computers to categorise and file. The easier Google can figure out what your page is trying to do, the more efficiently it can serve it into the right hands.


> Semantic markup is borderline pointless, except to rationalize design elements and keep them consistent.

Consider that from the beginning of the web, people could have composed pages entirely of image maps with hotspots to direct them (and some did).

Or as soon as JS arrived, they could have replaced hyperlinks with scripted behavior (as some did and many do now).

Where would Google have come from? Its foundation was largely in the semantics of hyperlinks.


> it always feels like a large portion of people who do web dev don't 'get' the web

My history is surprisingly similar to yours: I started in 1999, I used Notepad as my first text editor, and by 2003 I'd been caught up in the movement toward strict markup, which I felt was the mark of professionalism. However, by 2006 I had mostly rejected the notion of "strictness". Several things turned me against it. One was Mark Pilgrim's essay "XML on the Web Has Failed":

https://www.xml.com/pub/a/2004/07/21/dive.html

Another problem was brought up by Sam Ruby: his daughter sent him an image, which he wanted to share on MySpace, but he couldn't. The reason was that the image was in SVG format, which requires strict XML, and MySpace was, of course, very far from anything "strict".

Some people looked at the chaos of non-standard HTML and decided the Web was successful because it had been broken from the beginning, and it had learned to work well while broken. I reached a different conclusion. It became clear to me that what developers wanted to do simply had nothing to do with HTTP/HTML.

We don't yet have the technology to do what developers want to do. HTML was an interesting experiment, but it suffered from a dual mandate: Sir Tim Berners-Lee wanted HTML both to structure data and to present it in graphical form. Almost from the start, developers were conflicted about which mandate to obey, but the preference, since at least 1993, if not earlier, was to give priority to the visual.

For all practical purposes, developers saw HTTP/HTML as a GUI for IP/TCP. Previous IP/TCP technologies (email, gopher, ftp) had lacked a visual component, but HTML finally offered a standard way to send things over the Internet and format the end result visually (emphasis on "standard"; I could insert an aside here about X window systems and some cool software of that era, but every app implemented its own ideas about visual presentation. XEmacs, for instance, had its own system for visual formatting over a network).

That we now have so many languages that compile down to JavaScript, which then renders HTML, shows that there is a great hunger for something that moves beyond HTTP/HTML.

The quote that Mark Pilgrim used at the beginning of his article is worth repeating:

"There must have been a moment, at the beginning, where we could have said ... no. Somehow we missed it. Well, we'll know better next time." "Until then ..." -- Rosencrantz and Guildenstern are Dead

He may have meant that ironically (since Rosencrantz and Guildenstern are murdered), but I like to think that we will eventually get rid of HTTP/HTML and replace it with what developers actually want: a pure GUI for IP/TCP, a technology with a single mandate, with no burden of also offering semantics or structure or hierarchy.


> a pure GUI for IP/TCP

What does that actually mean? There are all kinds of systems which use a TCP link to produce a display on one end, with all kinds of different design compromises, made for different reasons and use cases.

If people had, by some miracle, standardised the web as a graphical format back in the early 90s by dictating that screens had to be 4:3 format with a particular minimum resolution, what would have happened to smartphones?


That brought back memories of trying to use AOL in the wrong screen resolution.


"Cascading JSON Style Sheets" - I suppose it would need a snappier acronym. Serve pure JSON content endpoints, and like, a standard response header which points to the resources which can render them.


I like to think that we will eventually get rid of HTTP/HTML and replace it with what developers actually want: a pure GUI for IP/TCP, a technology with a single mandate, with no burden of also offering semantics or structure or hierarchy.

Isn't that what Flash, Silverlight, Java Applets and Canvas offer?


In the case of Flash, yes. But Java applets were designed to be embeddable applications, much like ActiveX. While it's true that sometimes those Java applets were just GUIs, other times they went as far as using their own TCP/IP stack (eg for a chat client).


Rosencrantz and Guildenstern are Dead is a play by Tom Stoppard https://en.wikipedia.org/wiki/Rosencrantz_and_Guildenstern_A...

The "There must have been a moment..." quote is from the play. However, it is a very ironic play, and it's been my experience that anyone who quotes it does so ironically.


HTML is ironically very nice as a standard, even if broken by design. Think about crawlers for instance, which would not be possible without HTML.



