HTML is the Web (petelambert.com)
561 points by mmoez 23 days ago | 328 comments



I pretty much agree with the whole article. Back in 2001, when Web Standards were on the rise, people used to validate their HTML code, and having zero errors was worn as a badge of prestige. Semantic HTML was a hot topic, and most of the developers I worked with were zealots about it. But the popularity of frameworks helped developers forget the basics and concentrate instead on getting the most out of those projects. Add to that the fact that I don't see any real SEO impact from having the best HTML, and I think it's pretty clear that semantics and validation aren't considered relevant anymore.

When people ask me why SEO isn't working, well: look at the first results on Google and check their code. Most of the time you'll see pages with worse HTML validation and semantics outranking better (if not perfect) ones, not to mention that the first page is mostly paid ads. Google is not what it was back in the 2000s. Things have changed.


All my webpages had the valid xhtml badge!

https://commons.wikimedia.org/wiki/File:Valid_XHTML_1.0.svg

I still have a hard time un-learning `<br />`


I currently serve real XHTML5 code on my website with the correct media type of application/xhtml+xml. https://www.nayuki.io/

This works properly in all modern browsers (Chrome, Firefox, Safari, Edge; PC, Mac, Android, iOS) and even Internet Explorer 11. Though in the past, I had to make concessions for older versions of IE, serving the same code as text/html instead.

I arranged things this way because I hand-write much of the HTML code on the site, and want to catch syntax errors as early as possible without the browser silently (and possibly incorrectly) fixing my mistakes. In any case, this is living proof that XHTML5 works.
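That early-error-catching motivation can be approximated offline too. Below is a minimal sketch (Python, with made-up markup strings, not code from the site): a strict XML parse fails on the same kind of well-formedness errors that make a browser in application/xhtml+xml mode refuse to render.

```python
# Sketch: a pre-publish well-formedness check for hand-written XHTML.
# A strict XML parse rejects the same errors that an XML-mode browser
# would refuse to render, so mistakes surface before deployment.
import xml.etree.ElementTree as ET

def xhtml_error(markup: str):
    """Return None if the markup is well-formed XML, else the ParseError."""
    try:
        ET.fromstring(markup)
        return None
    except ET.ParseError as err:
        return err

print(xhtml_error("<p>fine<br/></p>"))   # well-formed: prints None
print(xhtml_error("<p>broken<br></p>"))  # unclosed void element: a ParseError
```

A real page would be parsed from a file (ET.parse) and would carry the XHTML namespace and doctype, but the failure mode is the same.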


At least in Firefox this isn't handled as an XML DOM document (they might still use the XML parser); XHTML pages used to be. On your page:

  document instanceof XMLDocument // false
On https://upload.wikimedia.org/wikipedia/commons/e/e9/SVG-Grun...

  document instanceof XMLDocument // true


Unlearning...why?

The lax nature of the HTML5 doctype did no one any favors. For most elements (`<br />` included) I still use a variant of the XHTML Strict syntax. Granted, it wasn't really necessary to have all the URLs and dates in the old doctypes, but to me, the only "correct" part of the change is `<!DOCTYPE html>` itself.

The fact that validators don't choke on this is an accident of history.


Opening a can of worms: did you serve them with any of the available XML MIME types? If not, they worked simply because of a bug in the browsers (SHORTTAG means a different thing in HTML than in XML), and all XHTML code served with an HTML MIME type would be littered with ">" if the browser treated it correctly ;)


I once toyed with the idea of serving XHTML pages as application/xhtml+xml, but the fact that the browser (back then, at least; I'm not sure about now) would just display an XML error and not render anything was a deal breaker for me.


By "the browser" you mean "Internet Explorer". The only browser that couldn't handle XHTML.


With XHTML any syntax error completely aborts rendering, there is no fallback - the browser will display a native error page instead.


Has nothing to do with what I said. IE couldn't handle XHTML until IE9

https://blogs.msdn.microsoft.com/ie/2010/11/01/xhtml-in-ie9/


Sorry, but you’re mistaken. The parent never brought up IE - he was referring to the fact that when serving XHTML with the correct MIME type (application/xhtml+xml), any error causes the page not to render at all. This behaviour is intended per spec and was the same across all browsers, not due to lack of support. It’s called “draconian error handling”, a consequence of being XML, and it was a major factor in the death of XHTML.


The death of XHTML was brought on by the then-current crop of kids who didn't want to understand how computers work and wanted it all done for them with someone else's code, because computer science is too hard and they'd have to think, and thinking is too hard.

I often hear this "draconian error handling" complaint about XML/XHTML from the same people who then grumble that a language compiler is just as draconian - while real programmers complain when it's not, when it doesn't catch their every little mistake.

It's this lack of education and drive for the pursuit of knowledge and understanding that caused XHTML to fall out of favor, and no other reason.


That's all well until you have external content (e.g. a blog with comments or blog-roll headlines). Many an enthusiastic Web author in the 2000s migrated off XHTML again. Not because getting it right is too hard (each fix for each problem is quite straightforward and often trivial), but because when the choice is between a completely broken page/site until you have time to log on and fix it, and a minutely broken part of a page that's easily ignored by a visitor, the idealism was just too bothersome. Unfortunately, that's human nature. Also see: rms's principles as exemplified by his lifestyle, and how many follow his example perfectly.


br is an empty element (https://www.w3.org/TR/xhtml1/dtds.html#a_dtd_XHTML-1.0-Stric...), so <br/> should work fine


In XHTML. No browser treats a document (no matter the doctype) as XHTML unless the proper MIME type is used. Without it, your XHTML markup is treated as an SGML application (which HTML is), and thus <br /> means a completely different thing. See http://jkorpela.fi/html/empty.html for the details.


<br /> is syntactically correct HTML5. So is <br>, but that's beside the point.

From https://html.spec.whatwg.org/#start-tags

> 6. Then, if the element is one of the void elements, or if the element is a foreign element, then there may be a single U+002F SOLIDUS character (/). This character has no effect on void elements, but on foreign elements it marks the start tag as self-closing.

From https://html.spec.whatwg.org/#void-elements

> Void elements: area, base, br, col, embed, hr, img, input, link, meta, param, source, track, wbr

The problem with making the SOLIDUS optional for so-called void elements is that the set of void elements isn't fixed over time. A new one could be added in the future, which means any document relying on this implicit syntactic behavior requires an updated parser simply to get the most basic AST.

XML and XHTML formalized a separation of syntax from semantics, permitting forward compatibility for code, like low-level parsers, that only processes the syntax.

The WHATWG made the argument that out in the real world, syntactically correct documents are the exception, not the norm. Because that's true, the vision of being able to ubiquitously slice and dice documents with a shared syntax but distinct internal semantics was not attainable as a general matter. Any software consuming HTML out in the open universe would always need to be aware of contemporary HTML semantics, even for low-level parsing. The insistence on separating syntax from semantics for HTML had a high cost but very little realized benefit.

However, the benefits are attainable within a closed universe, such as a CMS. And this is why HTML5 doesn't require, but nonetheless permits, XML- and XHTML-compliant syntax. It's not even treated as an error or exception, not in the way that other malformed but recoverable constructs are. A self-closing tag is syntactically valid, so there's absolutely no reason not to use it other than convenience. Excluding it out of convenience is perfectly acceptable, but in some situations--e.g. when using the more general and diverse ecosystems of XML and XSLT processors--it can be extremely inconvenient to exclude the SOLIDUS.
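The split described above is easy to observe with off-the-shelf parsers. Here is a small illustration using Python's standard library (the parser choice is mine, not from the thread): the HTML tokenizer accepts <br> and <br/> interchangeably, while the XML parser, having no lexicon of void elements, rejects the bare form outright.

```python
# Sketch: how an HTML tokenizer and an XML parser treat <br> vs <br/>.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class TagLog(HTMLParser):
    """Record start-tag events, noting whether a closing slash was present."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append((tag, "plain"))
    def handle_startendtag(self, tag, attrs):
        self.events.append((tag, "self-closing"))

def html_events(markup: str):
    log = TagLog()
    log.feed(markup)
    return log.events

# The HTML parser tolerates both spellings of the void element:
print(html_events("<p>a<br>b</p>"))   # [('p', 'plain'), ('br', 'plain')]
print(html_events("<p>a<br/>b</p>"))  # [('p', 'plain'), ('br', 'self-closing')]

# The XML parser has no built-in list of void elements: <br/> is fine,
# but a bare <br> is a fatal well-formedness error ("draconian" handling).
ET.fromstring("<p>a<br/>b</p>")  # parses without complaint
try:
    ET.fromstring("<p>a<br>b</p>")
except ET.ParseError as err:
    print("XML rejects bare <br>:", err)
```

This is exactly the forward-compatibility trade-off from the paragraphs above: the HTML parser needs the void-element list baked in, while the XML parser needs nothing but the syntax.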


Although nowadays there's HTML5, which isn't an SGML application. It allows (but ignores) the closing slash for void elements (like <br>), and specifies error recovery for non-void elements that treats the slash as if it didn't exist.


>I still have a hard time un-learning `<br />`

Oh boy, I'm supposed to stop doing that? Yikes ... when did that happen?


since blocking elements have been a thing, probably.


> un-learning `<br />`

wait what ?


`<br />` was the correct way in XHTML to do HTML5's `<br>`

(I think `<br />` still works in HTML5 or maybe browsers just don't mind it, but it's not the recommended way.)


Having an unclosed tag somehow seems wrong. What was the reasoning behind this?


Because HTML5 is not actually a subset of XML. Every element in XML must be closed, but in HTML5 certain tags (like <hr>, <img>, or <br>) are defined as "void" which means they have no nested content and therefore it's technically improper to close them (although I'd be shocked if any major browsers actually care about that). In other words, an <img> tag does not "open" the definition of a new section of the document the way <div> or <span> do, and it makes no sense to "close" something that was never "open" to begin with.


That's an arbitrary style choice in the spec. Saying "it's void so never use / " is no more natural than saying "it's void so always use /".

Arguably, it's bad style to use the same syntax for an opening tag as for a void tag, because it forces semantics into the syntax for trivial benefit. Without the "/", your HTML syntax parser now has to include a lexicon of all the void tags, and be updated with spec revisions.


Perhaps they made it implicitly self-closing.


thanks i never got the news, i hope browsers dont hate me


Haha I've known for years and I still use <br />. Usually the linter will nag me, but otherwise I just don't care enough to fix it.


i think the feeling of closure i get with the /> is addictive


The biggest takeaway is that HTML was originally designed to degrade gracefully. If your browser didn't support the intended graphical layout, it would still be able to extract the hierarchy and display a usable web page, even if you were using pure text (e.g. lynx). This kept the web accessible no matter what you were using.

These days, none of that occurs. Just try switching off Javascript (I use NoScript heavily to avoid invasive ad trackers). 75% of web pages don't load at all. Others completely mangle their layouts into an unnavigable mess. Only a very small subset of pages are readable. And these are just landing pages - pages that should welcome one and all to the site. Most of the time, unless I'm looking for something specific on the site, if it loads a blank or mangled page, I just move on. If everything's a dynamically-arranged <div>, the browser has no hope of making sense of it.

You can lead a horse to water, sure...


> The biggest takeaway is that HTML was originally designed to degrade gracefully. ... These days, none of that occurs.

This can be directly attributed to the fact that the purpose of the average webpage is no longer to convey information, but to display ads.

I frequently see people say that if a site doesn't allow people with an ad blocker to read their content, they immediately bounce off the site, and that site loses a visitor. What I don't think people get is that this is what those sites want.

They don't care about pageviews or impressions. They care about ad views. Someone with an ad blocker uses their bandwidth, but doesn't increase revenue.

To an extent, the more important conveying information is, the more likely the page is to gracefully degrade. Of course, even that is mitigated by the fact that most people throwing up a web page are either hosted by a commercial platform (Medium, et al), or at the very least using a popular framework that is designed with commercial usage in mind.


I hadn't considered this angle, but it does make a lot of sense. If a bouncer wasn't going to give a site ad revenue, the site doesn't want that bouncer in the first place.

This is a horrible state to be in. I mean, I get that advertising is the only way to make regular money on the web, but dear god, we are in a sad situation where people who don't want to be blasted with advertising aren't made welcome.


I hate to say it because I’m one of those people who bounce at a “turn off your ad blocker” notice but you’re absolutely right, and I had never considered before the fact that they don’t care that they’re losing my page view. They just want that ad impression. What a sad state we’re in right now.


I don’t think ads are the biggest factor in websites' inability to degrade into a less styled form. I think the bigger factor is that people are also designing applications that run in the browser, and those make less sense to let degrade, since they rely on a rich UI with lots of styling and behavior.


Web pages should be designed to load without js.


Web pages that don't show anything without JS are broken. Ones that show only a message about enabling JS are worse. It seems JS;DR was already a thing long before I felt the need for it yesterday, when I simply missed that article about synths at ableton.com because it was broken. https://indieweb.org/js;dr


What about web applications? I am the office champion of the accessible web, but recently we made a web-based game in canvas. Without JS there's no game engine, so we had a noscript tag.

That synths article uses built-in JavaScript synths (using the audio APIs). I agree the text should display without JavaScript, but the purpose of the article (getting hands-on with synths) is eliminated without JS.

Basically, the web is not only a document-delivery platform; it's also the new Newgrounds. None of those games would work if you disabled Flash, so why should web applications (beyond ones that are simple documents) work without JavaScript?


> > Web pages that don't show anything without JS are broken.

> What about web applications?

I would say it depends upon the application.

A data entry and simple reporting service? That should definitely be accessible and can easily be made to work without JS. Though some scripting might be acceptable: modern screen readers will cope and holding back for people with ancient screen readers is no different to holding back because some people still use IE or Android 4.4 (though do some research/testing to see what they actually w{ill|on't} commonly cope with).

If it is an interactive game or similar then you are not going to replace that practically with form submissions and no JS, so by all means don't bother caring that it doesn't work without JS. Though do make sure you include a <noscript> tag if otherwise nothing useful would display, just so users know what is going on and there isn't a fault at their end or a fault in your app that they should report.

Anything in between is a grey area: you'll have to use your best judgement about your actual and potential target audiences. Though again, make sure something useful displays for everyone, even if that is just a polite "sorry, we can't get this working for you" message.


> but the purpose of the article (get hands on with synths) is eliminated without JS.

If there is nothing to read, there is no way to know there is any app, let alone decide that I want it to run.

> why should web applications (beyond ones that are simple documents) work without JavaScript?

I imagine that the other kind of web app does everything with POST and forms and such like it's 2003, and I can understand that you'd prefer not to. I'm not saying they should or that they must all do without JS. I'm saying that if the article was a document that (also) documented the proper use of the web app within it then I would have liked to have been free to read it first. NoScript blocks <embed>ded things all the time and I temporarily allow them all the time. That workflow would have worked here, but if (as you seem to suggest is reasonable) the entire article was blocked because its app would be useless, then my strategy wasn't even considered, and someone is doing it wrong.

However, if a goal was to teach everyone to just run anything and everything and stop caring about privacy and security because it's a PITA, then someone is doing it right. I sure miss the good old days of Weekly_Report.doc.exe


If you like synths on the web, check this out.

https://learningsynths.ableton.com/


Cheers, but that is the exact article we're discussing lol


One of those linked articles hit HN 5 years ago and almost nobody cared. I've been using NoScript since 2005, and I'm watching more and more things break, and it's "unpleasant".

https://news.ycombinator.com/item?id=9247871


Or rather anyone starting a new project should think long and hard "is this a page or is this an app?". Too many web pages think of themselves as apps without any real reason to do so.


Developers used to building apps find it much more efficient to build that way and, to an extent, it is. I've had developers straight up refuse to build "legacy" pages, and I understand their reasoning. Sometimes that is the reason, when another is not apparent.


What do you mean when you say "should"? In an ideal world? Maybe. As it stands, though, there is basically zero incentive to do so and quite a few incentives not to. Analytics, ads, referral networks, etc. all require JS (at least the most popular solutions of the day), and if none of this works, what exactly is the reason to show you the page?


> What do you mean when you say "should"?

Probably "according to the original designers' intent," see this comment below you: https://news.ycombinator.com/item?id=20284550


You could turn it around to say that screen readers should support javascript. Some already do. They can execute the scripts and _then_ read the resulting DOM tree.


This. People have forgotten why the World Wide Web was needed to begin with, and nowadays think of it merely as an integrated delivery/communication platform.


If it wasn't intended to be a communication platform, then what was it intended to be?


Web pages should be designed to be standards-compliant. While you are free to turn JS off (and that's part of what makes the web great), you shouldn't be surprised if things break. Your browser is no longer standards-compliant.


Browsers are not required to support JavaScript; in fact, HTML has a specific tag to support this case.


Yes, but the standard recommends against actually using it.

>The noscript element is a blunt instrument. Sometimes, scripts might be enabled, but for some reason the page's script might fail. For this reason, it's generally better to avoid using noscript, and to instead design the script to change the page from being a scriptless page to a scripted page on the fly, as in the next example

https://html.spec.whatwg.org/multipage/scripting.html#the-no...


I think it's important to note that a "scriptless page" does not literally mean "give me a blank white page unless I turn on JavaScript".


I feel like I'm going to be in a huge minority here but I am one of those developers that builds websites that won't load without js. Hear me out though.

I primarily work with ecom, and we build highly dynamic sites in an increasingly rapid arms race for engaging design.

Before vue I worked with jQuery like everyone else, controlling the dom manually to react to changes in application state. Those were dark days.

Now I can manage the cart state and increasingly everything else over apis and have the application display all these updates in real time, along with all the whizz bang the design team decides is flavour of the week. All in not much more time than jQuery and basic layouts previously took.

So, sorry if my site doesn't load without JS, but it's not about the ads. The web isn't just document display anymore, everything is an app.


I am not protesting the idea that a site should work without JS - quite the opposite, I accept that active content and user interaction requires JS (as much as I loathe the language).

What I am protesting, however, is that graceful degradation doesn't happen. The entire site either breaks or blocks you from viewing static content unless you enable JS. Why am I prevented from viewing a product's information page before I decide whether or not I want to add it to the cart? Why does a news article render as 3 columns of disjointed text that are unreadable until they're assembled? Why don't images load at all unless I permit a CDN to shove a load of JS into my browser for fancy slideshows, zooming, whatever, rather than just placing a small static image there so I can decide if I want to zoom into the dynamic larger image?

The worst ones are those that pop up a banner or overlay that darkens the page with the loaded article until I enable JS. Like, you have literally proved your page works without JS. There is literally no reason for me to enable it, except for you to load several different analytics modules that chew up my CPU and siphon as much identifiable data as they can get their hands on.


This. As an aside,

> banner or overlay that darkens the page

Sometimes with ABP or UBO you can right-click and blow out that <div> which their JS would have cheerfully made invisible. Also, sometimes FF's "reader view" blanks everything but the meat of the article even if the article was hidden. (you probably know that-- for future reference and any passers-by, until the world breaks it all again. vive la révolution)


I'm savvy enough with the dev tools and CSS to set `style="visibility: hidden"` on most overlays, but yeah, I can imagine that trick isn't going to work forever either.


The dynamic DOM frameworks (and webpack-style bundlers) tend to encourage moving everything into JavaScript. Generally this means that enabling graceful degradation requires building most things twice: once in static form, and again for when Vue et al. have loaded. Separating the two takes significant time and effort, which also introduces extra surface area for bugs and odd interactions. It is expensive to build this way, and there are other things worth prioritising.

Ultimately, all websites are optimised for their target audience: if 30% of our users used IE6, then we would (sadly) target that. If users valued ease of use in a baseline browser, then that would be prioritised, and if static information and no-JS turned out to be what most users valued highly, then we would do that too.


You would think that SSR could take care of the problem and not require one to build twice. Most of these frameworks are very opinionated, if they cared about doing the right thing I don't think it would be hard to extend their opinions to make it so that content is tagged in ways that allow the framework to automatically generate the multiple views needed for graceful degradation.


> an increasingly rapid arms race for engaging design

For us, the arms race is that other one about getting code to execute on people's machines, including the virtual ones, versus preventing that. The mere existence of good code that is worth running makes it a hard problem. More of that just makes it more pressing even if not actually harder.

> Ultimately all website are optimised for their target audience

Ultimately all audiences are being optimized for the ideal website which shares its ideal viewing conditions with websites that are being used against the users (i.e. code runs even though we don't know or explicitly trust the authors or their employers or really anyone around it).


There is a big difference between 100% JavaScript everywhere and having a Vue component manage cart state, though.

A large majority of the whizz-bang the design team decides on can be done with just HTML5 and CSS3.


Where it can be done with HTML5 and CSS3, we use those; we don't invite complexity for the sake of it. But it's really very easy to run into the limitations of CSS3 when doing something even slightly unusual.

Vue replaces the DOM of wherever the app is mounted, so the choice is between having everything within a single app div or having a ton of apps talking to each other. Given that the ton-of-apps solution still breaks fairly essential functionality (for online shopping) and requires a fair amount more structure, it's a bit hard to see the benefits.


Also before someone mentions it, I am using SSR where possible so at least something displays, it doesn't make one of these websites function well without js though.


> So, sorry if my site doesn't load without JS, but it's not about the ads. The web isn't just document display anymore, everything is an app.

Hands down, this was the comment I was looking for to agree with.


This graceful degradation doesn’t make sense in the application world. I know that’s not what the web was initially designed for, but this is what the web has evolved into. It’s now an application platform just as much as a document platform.


I feel like many people here are misreading the OP as anti-framework and I didn’t read it that way at all. Sure, there is an element of, “you don’t need to immediately grab the nearest framework as the solution to _every_ problem”, and that’s something I personally agree with.

But the point I think the author is trying to make is that whether you do or don’t use a framework, you need to understand the fundamentals. The fundamental language of a browser has always been HTML. JavaScript and CSS are great and came along in due time but before all that there was HTML. So whether you’re simply marking up a document or building a full-fledged web app with React or Vue, you’re doing yourself a disservice by not learning and using proper HTML.


Yes! It’s about semantic HTML. For example, starting a Vue component with a <section> or <ul> as root element.


> The fundamental language of a browser has always been HTML. JavaScript and CSS are great and came along in due time but before all that there was HTML.

IMHO, in that fundamental sense, all the major frameworks tried to reimplement HTML (or a dialect of it) and the browser on top of the existing HTML and browsers. That eliminated platform divergence for developers, relieving them from dealing with variations in users' environments, i.e. the differences between browsers and HTML implementations. Implicitly, it was a transfer of control from the client end to the server end - from users to websites.


As an experiment, I started https://wordsandbuttons.online/ in plain hand-crafted HTML+CSS+JS (JS for the interactive bits only). Every page is self-sufficient: there are no dependencies, no dynamic data exchange, no HTML generators - nothing. And it works so well, I don't really want any frameworks now.

It's perfectly malleable - I can make every page do exactly what I want it to. It's lean and fast - every page even with all the interactive plots, games and quizzes, and syntax highlighting, is still under 64K. And it doesn't accumulate technical debt. There are no dependencies, even within the project. An occasional bug from the past wouldn't bite me in the future.

At the beginning, I thought that typing the HTML by hand would be too tedious. But it's not. It is a bit boring, sure, but it's only like 5% of all the work so usually I don't really want to automate it.

And when I do, I just write scripts that handle HTML just like any other text. Works wonders for me.
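For illustration, such a script can be a few lines of plain text processing. This is only a sketch of the idea (the "site" directory and the footer markup are hypothetical placeholders, not taken from wordsandbuttons.online): swap a repeated footer link across every page of a static site.

```python
# Sketch: update a repeated footer fragment across a hand-written
# static site by treating the HTML as plain text.
from pathlib import Path

OLD = '<a href="https://github.com/example">GitHub</a>'
NEW = '<a href="https://gitlab.com/example">GitLab</a>'

def swap_footer(root: Path, old: str, new: str) -> int:
    """Rewrite every .html file under root; return how many files changed."""
    changed = 0
    for page in root.glob("**/*.html"):
        text = page.read_text(encoding="utf-8")
        if old in text:
            page.write_text(text.replace(old, new), encoding="utf-8")
            changed += 1
    return changed

if __name__ == "__main__":
    print(swap_footer(Path("site"), OLD, NEW), "pages updated")
```

The trade-off of this style is that a plain substitution knows nothing about markup structure, so it only works when the pages really are hand-written and the fragment is byte-for-byte identical everywhere - which is precisely the situation described above.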


It is truly astonishing how much people believe that they need frameworks.

Your site looks nice, but one comment I'll make about your coding style is that you (or anyone getting inspired from your code) should probably switch away from innerHTML as the default choice for DOM modification. In a lot of otherwise useful contexts it is an HTML injection hole. (For the record it doesn't seem to be a security issue for the kind of content you have made, but better kill the habit before it becomes a problem.)

For a while I used document.createElement and friends to generate injection-free DOM, but that style takes a lot more typing than innerHTML usually does. Eventually I made a tiny library for easing the job, and I haven't looked back since. I don't think anyone else uses it, but so far it is definitely worth it for the one happy user. https://github.com/NoHatCoder/DOM_Maker


I did something very similar back in 2006: https://programming.arantius.com/dollar-e


Good point! Thanks!


When you're making a site like yours (which is awesome btw) hand-coding HTML and JS works fantastically. You don't need a framework. Frameworks only really start being useful when you're working at scale either across lots of data-driven pages or with multiple developers on the same product.


Yes, that's true. If you have lots of work to automate - sure, you need something to automate it.


Congrats for this simple and beautiful website!

I wonder how you manage your overall style, though. Is your design set in stone, do you never change it or fix quirks? If you e.g. change from GitHub to GitLab, would you edit the footer of every single page?

One small suggestion: Please advertise your RSS feed to the browser using the <link rel="alternate" ...> tag. Ideally on every page. Otherwise, visitors of your blog articles (like myself) get the wrong impression that you don't provide a feed, until they go back to the main page and scroll down to the bottom.


As an aside, I like your blog and writing style, and the questions dropped in to make it interactive. COBOL ... I got that one. Funny!


I recently used Semantic-UI (http://semantic-ui.com/) in a project. Semantic-UI is a web framework where "semantic" means there are human readable class names like

   <button class="minor red labeled button">
Compared to Bootstrap, Semantic-UI ships with high-level "components" ("views", "modules", "collections"). And here comes the ugly part: they all abuse HTML as if it were 2008. Div soup everywhere. Even lists have to be formatted as divs, something I haven't done in ages, something like:

   <div class="ui list">
     <div class="ui list item">...</div>
     <div class="ui list item">...</div>
   </div>
It really felt as if the knowledge of semantic HTML tags got lost somewhere. Or the authors of this popular CSS framework have a different understanding of "semantic". It doesn't meet my standard for structured HTML, though.


https://tailwindcss.com/docs/utility-first/

You could always use a little classname vomit to go with your div soup.

The problem is that "modern" development has misplaced importance. It's become an incredibly selfish practice. There is more concern placed for the developers than the users.


“Semantic” eh? I don’t think that word means what the authors of the framework think it means. :)


This only seems beneficial if you're attempting to break compatibility with the browsers' built-in understanding of ul and li tags (perhaps out of some concern that browsers will handle those tags differently in the future). Otherwise, they're just reinventing the wheel for the sake of doing so, which may make for a fun exercise but probably shouldn't be used in a real project.


Oh my! I didn't believe that code example you just gave, so I had to go look it up for myself. I'm astounded.

https://semantic-ui.com/elements/list.html#list


To be somewhat fair, the very next example talks about using a `ul` for the list.


But according to the docs that's "for convenience". Its semantic value isn't discussed.

So presumably the 'proper' way to do it, if you're not looking for convenience, is to use `div`s. /s


This seems to be "semantic" in a way that is completely useless for every use case of semantic markup.


On the practical side, that could get one sued for ADA (accessibility) violations. The "reading machine" will not understand framework-specific classes.


Web pages started as documents, and have gradually evolved to support ever more complex uses, so that now they are effectively a platform for application delivery (and a very good one) as well as for document delivery. Oftentimes web applications are a mix of both data delivery and data manipulation, so the line is not clearly drawn. This has inevitably created a tension between those who see apps as the only future, and those who see docs as the purpose of the web, and apps as an afterthought (a camp the author of this article falls into). I don't think it's useful to attempt to set up a dichotomy between Apps and Docs and pick a side. Very few websites are pure document stores, and very few apps don't require documents, urls for resources etc.

Personally I think the web should evolve to play to its strengths: Simplicity, Flexibility, Separation of data from code and from style.

It'd be nice if, instead of trying to shove javascript into every hole because it happened to be bundled with browsers, we pushed browsers to bundle more runtimes for client-side work (a WASM standard library if you like), so that the web can continue to be a place where you can post about javascript on a server running arc software, and not care which technology the client or the server uses, which can be swapped out at will. One of the great joys of web development from my point of view is not being tied to a particular language or ecosystem, because the web is heterogeneous.


I couldn't agree more. HTML is so basic that you have no excuse not to do it in the ballpark of right (accessibility can be tricky and right is not always completely clear, but in 90% of cases you should have a good idea). Using a solid css reset solves most of your visual issues with using semantic elements and structuring following proper standards. Use the reset and then write css however you want to. The current state of the web (being nothing but an advertising platform for the most part) is what is driving a lot of people to other protocols. A return to sanely structured CONTENT over pretty/fancy/neat/whatever-looking presentation would be, in this writer's opinion, a wonderful thing. I know in a perfect world we'd have both, but that is not how things have played out for the most part. Maybe the next thing, that replaces the web eventually, will get it right. :-/


A Web page is a document.

No it's not. It never was. Even a long time ago when websites were static things rendered by the browser once and then left alone there was always a tree of nodes underlying everything. The only thing that's changed in the past three decades is that now when we make websites and web apps we often ship a little JS application that lets the user modify the underlying tree. That's what the web is now and HTML is just what we use to describe the initial state of the tree (and arguably it's the wrong language for that job, but hey ho). Using semantic markup is important for a bunch of things like accessibility, but you can't really expect the initial state of the page to stay the same after it's loaded, and software that renders things from the web has to deal with that.

If you're old enough you might remember that once there were no stylesheets. HTML wasn't just the content and structure, but also the visual description. We used font tags. We used color attributes. We used spacer gifs. We don't do that now because it sucked. We moved forwards. The same is becoming true for HTML. Users demanded interactivity, and we gave them exactly that, and browsers turned into things that run little applications rather than things that render markup. The web has grown and evolved and moved on, and thinking of the web as "HTML pages" is just plain out of date.

That's not to say static content isn't still useful. It is. Very useful. But static HTML pages aren't the web, and the single page application genie isn't going back in the bottle.


The truth is both web "pages" and web "apps" are documents, and both are the web. Web pages which use javascript to render text or fetch remote content via AJAX can still be as much documents as fully static HTML pages, if their primary purpose is to convey textual or visual information. And fully static sites can be made interactive without javascript, through backend processes.

Viewing "pages" and "apps" as wholly separate and mutually hostile paradigms seems to be a recent cultural development born more out of frustration with advertising and modern complexity than any technically meaningful distinction between the two.


> Viewing "pages" and "apps" as wholly separate and mutually hostile paradigms seems to be a recent cultural development born more out of frustration with advertising and modern complexity than any technically meaningful distinction between the two.

Well there is a pretty fundamental tension between programs written in a Turing-complete language and a program written in one that is not. Resorting to too much power too early is dangerous and messy.


>Well there is a pretty fundamental tension between programs written in a Turing-complete language and a program written in one that is not.

Maybe, but you still can't consider anything running javascript an "app" and anything not a "document," since interactivity and state aren't bound to running javascript, and most javascript is just used to render documents.

My point is that the distinction people draw between the two is more emotional than technical.

>Resorting to too much power too early is dangerous and messy.

To be fair, notwithstanding exploits in the underlying browser or hardware which lead to things like Spectre, at least javascript is sandboxed. No one is going to be able to sneak "rm -rf /*" into a script and have it work the way they could a native application.


> To be fair, notwithstanding exploits in the underlying browser or hardware which lead to things like Spectre, at least javascript is sandboxed. No one is going to be able to sneak "rm -rf /*" into a script and have it work the way they could a native application.

It is fair to note this, but I would still take those examples as indicating the inherent dangers and failures of the paradigm. It's obviously infinitely better than just naively sticking native code execution into the browser without any protection at all, but that does not mean it is ultimately a good approach.


> Even a long time ago when websites were static things rendered by the browser once and then left alone there was always a tree of nodes underlying everything

...yes? The name of this "tree of nodes" is not "Document Object Model" for nothing.


I know. The problem with that name is people seem to stop reading after "Document".


Though the web is made of documents, not the objects within.

There's no native way to link to specific objects in the DOM unless they are identified as part of the document through an ID; anonymous objects in the DOM can only be accessed programmatically, and URLs have no syntax for traversing the tree the way the "#anchor" fragment addresses sections of a document.



> We moved forwards. The same is becoming true for HTML. Users demanded interactivity, and we gave them exactly that, and browsers turned into things that run little applications rather than things that render markup.

As I like to occasionally point out in wasm threads, the browser was a thing that ran little applications all along, in the form of Java applets. I have yet to see an explanation of how wasm applets are a different approach to the problem, but they sure are a more popular one.


The difference is timing. Computers weren't really ready for applets back in the late 1990s and early 2000s, when Java was pushing to be the thing that made web pages interactive. The performance (in terms of download speed, start-up, and actually running them) was usually pretty horrible. Now that the hardware has caught up, it's the language that's the bottleneck, so WASM might prove to be a decent idea.

Also Java tried to be a gateway out of the browser to the OS, and that's just a terrible idea for security. WASM isn't trying to be that, and hopefully it never will.


>I have yet to see an explanation of how wasm applets are a different approach to the problem, but they sure are a more popular one.

Given that almost every thread about WASM includes someone like yourself wondering why it isn't just a pointless rehash of the JVM, I have to assume you would have already seen the numerous discussions about the differences at length.


> I have yet to see an explanation of how wasm applets are a different approach to the problem

Here is one: wasm is smaller and better sandboxed


> Users demanded interactivity

(Citation needed.)

Developers love interactivity. Users despise it but put up with it because there's not much choice.

Does anybody think the Web is a better experience now than it was 15 years ago? Honestly?


Yes. I do. Not that my opinion matters much. I was born in 91, I grew up as the internet grew up. Before I understood HTML or anything about programming, I used the web to watch videos on Lego.com or play really bad flash games on Newgrounds. My experience with the web has always been visual and interactive. Only time I used the WWW for purely document purposes was when doing homework.


(Citation needed.)

They didn't demand it per se because no one actually asked. I'm inferring from the fact that users spent much more time on the interactive websites which resulted in more revenue for those sites and everyone else copying. There might have been a reason other than the interactivity itself, but we are where we are now because of user behaviour.


How efficient was that increased time spent? Does the user actually read more text or are they spending more time just fighting with the site layout? I guess the answer doesn't matter if there is an ad on the page.


> How efficient was that increased time spent?

How efficient? The most popular sites on the internet are about _fun_, not _efficiency_ ;-)


OMG, that is perverse. Users spend more time on the sites because the sites are slower to load and more difficult to use.


Re: [A Web page is a document.] No it's not.

It started out that way, but grew to TRY to replace GUI's, Flash, etc.

We need three different standards: one for documents (HTML may be good enough); one for media, art, and games; and one for desktop-like GUIs. There may be some overlap between these, but the one-size-fits-all of HTML/CSS/JS has been a big messy time-sink where otherwise simple UI tasks take rocket science and luck to get right.


We already have 3 different standards: HTML, CSS, JavaScript.

These are low level enough, and powerful enough that you can build document, multimedia or application frameworks on top of them. Simple UI tasks don't take rocket science, they take a proper understanding of those three standards, and a lot of discipline.


Re: "Simple UI tasks don't take rocket science, they take a proper understanding of those three standards, and a lot of discipline."

It appears to me you are contradicting yourself. It comes across as: "It doesn't take the discipline of rocket science, but merely the discipline of rocket science". GUIs didn't take "a lot of discipline" in say VB-classic, Delphi, or Oracle Forms[1]: you dragged it to where you intended it to be, and Wazaam, it was there and always there. WYSIWYG was a huuuge time-saver. Now to do it right you have to test on dozens of platforms and versions because they each have a mind of their own. WYSIWYG gave you one central coordinate reference point, not 30 different positioning engines.

Re: "We already have 3 different standards: HTML, CSS, JavaScript."

That's part of the problem, not a solution. They are not domain-specific, for one.

[1] They had glitches, but were getting better over time.


rocket science !== proper understanding

rocket science !== discipline

VB-classic, Delphi, and Oracle Forms have what's known as drag-and-drop/visual programming. Comparing them to CSS & HTML isn't a fair comparison, unless you compare them to a drag-and-drop GUI builder that generates HTML & CSS for you.


HTML & CSS can't do WYSIWYG sufficiently, especially if text is involved. Thus it's apples to black-holes.


> HyperText Markup Language (HTML) is a simple markup system used to create hypertext documents that are portable from one platform to another.

Source: The HTML 3 Specification.

https://www.w3.org/MarkUp/html3/intro.html


It is; the definitions of HTML are clear: "Hypertext Markup Language (HTML) is the standard markup language for documents designed to be displayed in a web browser." If a document has interactivity, it doesn't stop being a document.


That's why I suggested maybe HTML isn't the right language for defining the initial tree state of a web app these days, but it's what we've got so it's what we use. I think web apps, and web pages too to some extent, have outgrown HTML.


Maybe in the future the W3C could do a solid AML (Application Markup Language), though arguably they already tried that with XML.


You've drawn a false dichotomy between document and tree. A tree is a mathematical abstraction, a document is a text-based file, and markup languages are the intersection of those. Web pages were always documents according to the HTML specification, as one of your replies linked to.


To be fair, HTML started as an SGML application, meaning, as a declarative language. In this aspect, it's not so much about static vs dynamic, but about two cultures of programming. However, there's a place for both approaches – and probably it's about the right mix.


This seems to be a matter of definition : IMHO single-page applications are definitely NOT the Web, but a different subset of the Internet instead (even if you can access them via HTTP).


>But static HTML pages aren't the web, and the single page application genie isn't going back in the bottle.

Single page applications are the corporate web. The profit web. The web that exists to squeeze information and money out of you without ever sharing or exposing any of itself. SPAs are a cancer on the web that isn't going back in the bottle. Corps are gonna corp and institutions will follow.

But you, the single web dev, don't have to take this bullshit design philosophy home with you. Just because you have to shovel manure at work doesn't mean it's beautiful or that you need to bring the shovel home for personal projects. Get paid for shoveling shit. At home make real web pages.


Unpopular opinion... but I think this is a too limited view of the web. My contrary opinion is that the actually important feature of the web is the URL, and the important feature of the browser is that it provides a sandboxed platform for running untrusted code without a builtin "walled garden moral police".

It would be nice if operating systems were exactly that, but for some reason "commercial" OS vendors are not able or willing to provide a safe "native" sandbox completely uncoupled from curated app stores.

The ability of a browser to be a document viewer with a very limited layout system should be implemented in an optional layer on top of a bare bones browser.

Again, this should all be provided by the underlying operating systems (click a URL and instantly run cross-platform applications in a safe sandbox), but for some reason we didn't get that. It's the same with Electron apps, by the way: those wouldn't exist if operating systems came with cross-platform UI frameworks that were actually better than the unholy mix of HTML+CSS+JS. But Microsoft, Apple, etc. had decades to get their act together without delivering.


I'm inclined to disagree, but I guess the real problem is that webpages on the internet are being forced to serve both purposes.

Paradigm 1 - interconnected, hyperlinked web of text-and-content-based documents: open, accessible, amenable to indexing and tooling and so forth.

Paradigm 2 - delivery vehicle for cross-platform, full-featured, somewhat-security-sandboxed applications.

I agree with the article that it's sad if Paradigm 1 is trampled to make way for Paradigm 2, just because (as you point out) we haven't been able to develop a viable alternative. In theory these could coexist, but the point of the article is people are being trained in Paradigm 2 and try to use it for purposes that are much more suited for Paradigm 1.

Just as sad, Paradigm 1 is more oriented around decentralization, giving power, options, and information to the user; Paradigm 2 is often exploited for lock-in, control over user's decisions, collecting information about the user, etc.


I feel like this gives an overly rosy picture of the past—

The need to serve Paradigm 2 existed all over the place in the plain html days.

It’s just that instead of having standards-based languages for Paradigm 2, we used to embed janky Flash SWFs and Java applets, were dependent on single vendors to patch zero-days in closed-source code, and had to pay hundreds of dollars for licenses for developer tools.

Today’s Paradigm 2 now has multiple competing implementations, open standards, and powerful DOM inspectors in almost every web browser to tear apart any modern Paradigm 2 app you come across.

I think today’s Paradigm 2 gives more power to users than the Paradigm 2 of the past :)


Absolutely agree with this. The Internet-as-application-platform has never been better, from a developer's perspective.

From a user's perspective, it's maybe a bit more nuanced... I'm tired of websites trying to get around my adblocker, and everyone attempting to enable desktop notifications, for example. If I'm trying to read someone's document they've published, I'd like it to just be a document, thanks. But, increasingly, "Paradigm 2" webapps like Spotify and Google docs are as good or better than "rich" desktop apps.


The "notifications" request REALLY pisses me off to no end. A simple "Subscribe" button (if the notifications API is present) would be enough to trigger the request. I wish I could just turn it off all around in the browser... Looking now... Nope, at least in chrome, cannot just disable the feature.

Also agreed, many web apps these days are as good as or better than desktop apps. I actually have worked and am working on applications that replace their desktop counterparts. They're practically easier to scale, tend to have tooling to handle window sizes better, customization is easier, and imho they're just nicer to write against. Better still if you don't have to support legacy browsers. Most browsers today support modules and async import. Still using Webpack and Babel for JSX, but we're really close to a point where I'd just as soon write for ESM and have server-side translation for JSX on demand (cached).


Funny, I think you actually pinned that interpretation yourself - the post was meant to discuss use cases of the web today and doesn't mention the past at all! Regardless, I see your points, but I'd rather compare today to an idealized future and discuss how we'd prefer it to be.


Paradigm 1's security/privacy (WebRTC and WebTorrent IP leaking, canvas fingerprinting) is being compromised for the sake of fancy stuff in Paradigm 2.

Browser vendors (Firefox, Chrome, Brave) please get your act together to prevent https://nothingprivate.ml


For what its worth, I just tried this on Firefox 67.0.4 and it didn't work. It couldn't find the name I typed after I loaded from a private window.


It didn't work for my friend when done via Chrome on Android, and when I tried it, I got someone else's name in incognito.


I think what the author is saying is that paradigm 2 is built _on top of_ paradigm 1 and if you can't do #1 right, then by definition _neither_ can be right. It's astounding the number of interview candidates who are super-fluent in JS but can't tell you to save their lives why it's a bad idea to put a click handler on a div. React will die (It's already started to) and we'll migrate back to the actual Web at some point and for my money it can't happen fast enough.


> It's astounding the number of interview candidates who are super-fluent in JS but can't tell you to save their lives why it's a bad idea to put a click handler on a div.

Go on?


Using a button here would be more appropriate for a few reasons:

Buttons by default have keyboard support for focusing and clicking. You’d have to manually add this to a div.

Screenreaders, scrapers, and other automated tools reading your html understand buttons and treat them differently from divs.

It’s easier for other devs to understand your code at a glance if clickable elements are buttons.
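
A minimal illustration of the difference (markup and the `save()` handler name are hypothetical):

```html
<!-- Focusable, keyboard-activatable (Enter/Space), announced as a button -->
<button type="button" onclick="save()">Save</button>

<!-- Can be styled to look identical, but is invisible to keyboard
     navigation and is announced to screen readers as plain text -->
<div class="btn" onclick="save()">Save</div>
```

The browser gives the first version all of its interactive behavior for free; the second gets none of it unless you rebuild it by hand.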


Putting a click handler on a div is invisible to accessibility systems. And no, it isn't possible to treat every element with an onclick as a button, because events bubble, which means the element with the event listener isn't necessarily the element being watched for a click. IIRC there even used to be a "good practice" where people would attach a single click handler at the root and use it to dispatch events based on the `event.target` instead of attaching a separate handler to each target.


React does that, they call it synthetic events, but it's hidden from the developer. In jQuery it used to be called event delegation, made it easy to handle events on dynamic elements without the need to constantly attach/detach handlers.
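
The delegation pattern described above can be sketched in plain JavaScript (the `delegate` helper is illustrative, not any library's actual API): one listener sits on the root and, because clicks bubble, walks up from `event.target` to find a matching descendant.

```javascript
// Attach ONE click listener at `root`; invoke `handler` when the click
// originated inside a descendant for which `matches` returns true.
// Because events bubble up to `root`, dynamically added children are
// handled without attaching or detaching any per-element listeners.
function delegate(root, matches, handler) {
  root.addEventListener("click", (event) => {
    // Walk up the ancestor chain from the actual click target.
    let node = event.target;
    while (node && node !== root) {
      if (matches(node)) {
        handler(node);
        return;
      }
      node = node.parentElement;
    }
  });
}
```

In a real page `root` would be a DOM element; jQuery's `$(root).on("click", selector, fn)` and React's synthetic-event system at the document root both amount to this idea.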


Because the user can't hover over it and see where the link goes. Because the click handler represents a Javascript program that has to be downloaded. Because it can't be used with Javascript turned off -- which is increasingly necessary these days since ad networks abuse Javascript.

But mostly because it's usually unnecessary and smacks of laziness on the developer's part.


Because you've literally just described what a <button> is.


That would mean that anything that responds to click events is a button. From your experience of using UIs, you can probably think of many more UI elements that respond to clicks and are not buttons


> React will die

Hopefully, but people have a large capacity to deal with broken web software. Back when I was young, it was all the Java & .Net websites that turned every friggin link into a POST form so that navigating back, and opening in a new window was completely broken.


> React will die (It's already started to)

I read a lot of articles about front end, including framework & library trends, usage statistics, rumors, etc., though I've never heard that React has begun to die. Can you provide a citation?


Nothing about React stops you from correctly putting click handlers on buttons.


Well, the behavior is a bit awkward, and older browsers don't even work right. Of course, in some older browsers that shall not be named, you couldn't disable certain styling attributes of buttons, which means an anchor may have been more appropriate back then.

I tend to prefer semantic tags most of the time. I do wish there were a CSS declaration, or maybe an <html> attribute, to start off with absolutely NO styling, so that everything comes from what's in the CSS only. The various resets suck, don't work well, or are really big for various reasons. For that matter, being able to reset everything under a given element so that in-browser rules aren't applied would be really nice for WebComponents.
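
For what it's worth, CSS does have something close to this in the `all` shorthand (CSS Cascade Level 3, supported in current browsers), though it resets to *initial* values rather than to "nothing" (the `.widget` selector here is hypothetical):

```css
/* Reset every property on a subtree to its spec-initial value,
   discarding author and user-agent styles alike
   (direction and unicode-bidi are excluded by design). */
.widget,
.widget * {
  all: initial;
}
```

One caveat: the initial value of `display` is `inline`, so this is aggressive and you end up re-declaring layout basics yourself, which is arguably exactly the blank slate being asked for.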

My own weakest point is probably on CSS these days, I just haven't really kept up since 2.x as I haven't really needed to, and can usually google when I need more. I feel the flexbox stuff is more complex than it should be.


I’ll bite: Why is it bad to put a click handler on a div? Because we’re losing the semantic content of the markup?


Because there's already an element for that. Putting a click handler on a div is a hack.

In a clean (from scratch) dev situation, when would it make sense to use the div option?


People have been putting event listeners on div long before any framework was around. Tons of libraries and documentation show how to attach a click handler to a div (ex: https://api.jquery.com/click/). I can think of some cases... maybe the div is contenteditable and I want to trigger some analytics. Maybe I want to listen to all the children of a container rather than apply expensive listeners all over. Do you put button tags around all your tooltips? You might end up making your HTML look right, but now you are jumping through hoops to make it not look like a button. As with all things, it's naive to say "never do something."


For one, it won't play nice with accessibility helpers... screen readers, keyboard interaction, etc.


While I agree that use of <button> can be semantically clearer, clickable divs can still play largely fine with most accessibility technologies if ARIA attributes are used.

Does it semantically make the most sense? Not necessarily, but it isn't a blocker to most accessibility tools and use and support for ARIA attributes is pretty widespread nowadays, including all of the major browsers and screen readers.

> https://developers.google.com/web/fundamentals/accessibility...

> https://developer.mozilla.org/en-US/docs/Web/Accessibility/A...


Number of times I remember seeing an app using `<div onclick>` with proper aria attributes: 0.


It's often a legal or contractual requirement when selling software to public bodies in many countries, so I've seen it used a lot. Mileage may vary in other markets.

The important point is that fundamentally using a <div> in place of <button> can still be made to work easily with these accessibility tools - many comments here imply it can't be done at all and only a button will work. Does it semantically make sense? Likely no, but it isn't a hard blocker on accessibility.
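
Per the ARIA guidance linked above, the minimum a clickable div needs to approximate a button is roughly a role, focusability, and keyboard activation (the `activate()` handler name is illustrative):

```html
<div role="button" tabindex="0"
     onclick="activate()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') activate()">
  Submit
</div>
```

Which also shows why the other side of this thread has a point: `<button>` provides all three of these behaviors by default.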


Reinventing the wheel is bad


Don't we kind of have Paradigm 2 on our smartphones and tablets? It's not perfect but it's something.


I agree it's better in some ways than the desktop programs ("apps") situation


We should replace DNS with a DHT; then we can have content-addressable (or otherwise verified) apps sideloaded or loaded in chunks from anywhere, like in BitTorrent or Dat.

To me, the Browser is the most widely distributed sandbox runtime ever! It’s just missing a few crucial things we plan to add soon :-)


Have you seen https://github.com/staltz/dat-installer ?

It can be used to update itself, too :-)


I'm not quite sure how this relates to the original article, but:

> the important feature of the browser is that it provides a sandboxed platform for running untrusted code without a builtin "walled garden moral police".

> for some reason "commercial" OS vendors are not able or willing to provide a safe "native" sandbox completely uncoupled from curated app stores.

The key word here is absolutely "unwilling". All the platform vendors want platform lock-in, they don't want a general purpose platform. Microsoft originally tried to do this to the web with ActiveX, before even Flash. At least Flash was slightly cross-platform in that Adobe provided Mac and Linux versions. These days no platform vendor is going to give up their 30% cut voluntarily.

In a world of fully general app sandboxes, you can switch out your hardware and OS at any time without disrupting your life or business. Therefore those are commoditized, and their price drops to the marginal cost of production - which for software is zero.

(Also I think CS theory took a while to catch up to practical security; everyone used to teach the model in which multi-user systems were trying to protect users from one another, not one where a single-user system is trying to protect the user from predatory apps, viruses and potentially unwanted programs.)


> In a world of fully general app sandboxes, you can switch out your hardware and OS at any time without disrupting your life or business. Therefore those are commoditized, and their price drops to the marginal cost of production - which for software is zero.

I think software's marginal cost of production is zero only if no updates/fixes are ever needed. I know it sounds like nitpicking, but just think of all the vulnerabilities discovered, especially the closer software gets to the lower end of the stack. Even if an OS could suddenly stop adding features and still be viable in the market, the cost of patching it as holes are discovered is certainly not zero.


That's not a marginal cost of production, though. It costs almost the same to fix a bug if you have one user or a billion users.

It certainly is a real cost that has encouraged software vendors to move from "box" sales (which became tricky when we gave up on boxes) to annual licenses and/or SaaS. But it's not a marginal cost.


The zero mainly comes from the ability to scale for free. Sure it costs a few million bucks to maintain Linux (or whatever), but when divided by billions of copies the per unit cost becomes effectively zero.


> My contrary opinion is that the actually important feature of the web is the URL, and the important feature of the browser is that it provides a sandboxed platform for running untrusted code without a builtin "walled garden moral police".

Flash tried to do this 15-20 years ago. Flash was basically "here's a canvas, do whatever you want within reason". It had its use cases back then but the general web vastly preferred good old HTML and eventually CSS and JS.

There's a lot of value in having constraints, especially when those constraints (HTML) are specifically designed for displaying documents.


Flash had several problems that should have been addressed differently. Among others:

- it was owned and controlled by a single company and thus a target for other companies instead of other companies helping to improve it

- that single company was unable or unwilling to fix security holes in the sandbox and browser makers eventually got fed up with that

- while there were some ways to interact with the underlying browser environment, ultimately a Flash instance lived in its own little bubble, creating usability problems

Putting the HTML+CSS layout engine into a separate module doesn't mean that it wouldn't remain the standard way of doing document-layouts on the web, it would just mean that higher-level solutions wouldn't need to use hacks and workarounds to map their concepts to HTML+CSS (same way as Javascript had to be used as a less-than-perfect compile target before WASM because it was the only option to run code in a browser).


Those were all problems but none of them had any real bearing on the general population avoiding it for general sites.

The thing is, most people don't want a blank canvas to go nuts with for the general use case. It's too complicated. Suddenly you need to learn entire SDKs and pull in many dependencies just to render text on the screen.

The reason HTML and the web was so well accepted back in the day was all you had to learn were a few HTML tags, a couple of properties and then drop it onto a Geocities site and you were good to go. It was something you can do and see real progress on in 10 minutes with very little background knowledge.
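
That ten-minute starting point still exists; a complete, valid, crawlable page needs no build step and no dependencies at all:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>My page</title>
</head>
<body>
  <h1>Hello</h1>
  <p>A handful of tags, one file, done.</p>
</body>
</html>
```
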

Compare that to now. If you wanted to be a "trendy front-end dev" you would probably reach for Vue / React + Webpack* and go SPA style for all of the sites you make, because you learned something and now you want to apply it everywhere. Meanwhile, to serve a document such as a blog post, you just went from having 15 HTML tags in a single HTML file and some CSS to pulling in megabytes of Javascript sprawled across hundreds of dependencies. Now you have build tools, an entire runtime environment (Node), and APIs that don't exactly match up with HTML, and your entire site isn't even crawlable by search engines unless you introduce even more complexity to do server-side rendering. Before you know it, you're dealing with something that's even more complicated than Flash was back in the day.

* I personally use Webpack to handle my assets too, but I don't write SPAs for sites that are document based (which tbh are most sites).


> The thing is, most people don't want a blank canvas to go nuts with for the general use case.

Are you kidding me? Flash was an absurdly popular platform. Careers were launched on Newgrounds!


> Are you kidding me? Flash was an absurdly popular platform. Careers were launched on Newgrounds!

I'm not saying it wasn't popular, but what do you think the percent spread was between sites that were fully driven by Flash vs not Flash?

I don't have the stats in front of me, but I would guess it was probably 2% Flash vs 98% non-Flash. Now, a lot of sites had sprinkles of Flash but very few sites in the grand scheme of things were solely rendered with Flash. Part of the 2% were sites like 2advanced studios.


> very few sites in the grand scheme of things were solely rendered with Flash

...yeah, because that's not what Flash was about. Flash was an application platform (ok, so more like an Animation platform) first and foremost. Why would anyone run their whole site in flash? It wasn't the right tool for that job.


> Flash was an application platform (ok, so more like an Animation platform) first and foremost. Why would anyone run their whole site in flash? It wasn't the right tool for that job.

And this is exactly what I'm trying to say.

Back in the day you would very likely reach for Flash if you wanted to build an FTP client that ran in a browser. Totally reasonable use case because that's a highly interactive app that doesn't really follow the request / response style of a document style page. But you likely wouldn't reach for Flash to build some type of site that presented data in a document style (pretty much the general use case of what a website outputs).

But if you fast forward to today's technology, some front end developers are bringing in the complexity of SPA-style applications that are primarily rendered with JS for everything -- even document-style websites like a personal blog. The original comment I replied to hints at wanting to remove constraints from browsers to give us more of a free-for-all style of rendering pages however we want. I was just saying that model didn't really take off for the general use case with Flash decades ago.

Fortunately we still live in a time where web developers can choose what they want to do, but browsers shouldn't change the concept of being something that renders HTML that is optionally combined with other things like CSS, fonts, JS and images.


But most sites had some flash on their site. Just like the previous person said: flash was never intended to replace the website entirely - flash was intended to supplement it. And that need went away with html5/css3, which is why it was deprecated as the insecure behemoth it was.


Yet people did this all the time (2advanced, HomestarRunner, etc) because the web as an application (a product) was served particularly well by an application platform.


But for every 2advanced studios there were tens of thousands of people not using Flash as their entire site.

If people really wanted a no constraints platform to build any UI they want with a blank slate then web development as we know it today would be a completely different environment.

Instead, 20 years later, we're still trucking along with HTML as being the primary focus, and everything else can be loaded as needed.

You can build some pretty amazing things today with a combination of HTML, CSS, JS, fonts and websockets. But at the same time, if you don't need an application and just want to serve a HN style site, you can render HTML with a bit of CSS and everything works well. To me, that's one hell of a success story.


> the general population avoiding it for general sites

I'm not sure that happened. My friends and I got on MSN Messenger and shared flash sites all the time. Kongregate and similar gaming sites were part of it, but so was YouTube, and so were greeting card sites, ESPN, various other news sites and many, many others.

At least from my POV, maybe a few people hanging out places like here didn't like flash, but the general population had no problems with it until Steve Jobs did his best to kill it. Even then, people not on iPods/Pads/Phones didn't seem to have any problems with it. AFAIK, the vast majority of people still installed it years later.


The case against Flash was, I think, not so much about what it was, but everything around it.

- It wasn't visually scalable. (Today we say "responsive", mostly, but that's too easy to confuse.)

- It didn't integrate with HTML and CSS in any interesting way other than sitting there in the middle.

- Performance was mediocre to terrible.

- A rent-seeking company was in charge of it.

As you say, eventually JS supplanted it.


JS didn't supplant it overnight. It was Steve Jobs who killed Flash by not letting Adobe conquer their new iOS platform. They tried hard, even on Android back then, but they failed to deliver something solid. Eventually, both Google and Apple released their SDKs to develop apps, far more solid and faster than running an Adobe Flash Player layer on top of the OS. This article shows the jump in technologies, from the CSS perspective: http://minid.net/2019/04/07/the-css-utilitarian-methodology/


Flash was already mostly dead by the time the iPhone came out. Adobe was not investing much into it and it was mainly being used to show videos.


I don't think so. Flash was strong in three scenarios: corporate websites, games and advertising. The agency I worked for made 95% of its revenue from Flash work, and most of the agencies in Barcelona and Madrid did the same.


The case against Flash was (apart from the horrible security issues) mostly about stacked libraries, including Adobe's own UI library. You could code applications with remarkable performance and small footprint using the pure Flash Player target (neither Flash nor Flex, but pure AS3).

Regarding integration, some text elements featured basic HTML capabilities. Nothing terrific, but you could extend on this. (And, of course, XML parsing, but this is another story.) Also, later Flash Player applications (Air) featured a full-fledged WebKit element for rendering web views. That said, it was a good thing that Flash eventually had to go, and, at times, the state of the web today is pretty reminiscent of this.


The general web loved Flash for cases where you needed an application rather than a document. It wasn't appropriate for everything, and it was unfortunately very difficult to mix Flash with general HTML, but in many ways it was better than today's web for both creators and end users.


Second that.

For the user, flash applets were super easy to block in general and to whitelist on pages where you wanted to use that particular flash application. Basically the same was true with Java, and to a certain extent JavaScript -- until the ugly DHTML menus arose and websites stopped being usable without JavaScript.

Nowadays there are only two options: either you block JavaScript --- and get a 1995-style view of the web, in general --- or you let it run in your browser, having tons of programs running which you probably don't want (ads, tracking, that particular application that replaces flash).


The 1995 view is superior. It loads fast, doesn't hog CPU, never does anything unexpected, and has limited tracking capacity. I cringe whenever I browse without NoScript. If your pages can't render in my sandbox I just go somewhere more sensible.


It's superior only until you want to do something that does actually need JS - sometimes "go somewhere more sensible" isn't an option when you actually need to do something specific (e.g. buy an Amtrak ticket).


> the browser is that it provides a sandboxed platform for running untrusted code

Unfortunately it's not true. Zero day exploits happen all the time; even the recent Intel CPU vulnerabilities were exploitable via JavaScript. Moreover, some harmful effects of arbitrary code execution cannot be mitigated (like a drained battery or user tracking), which is why I have recently adopted the view that code execution on web pages is just a Bad Idea, and my non-work browsers are configured with JS turned completely off.

This gives me nearly 2 full days of battery life on my phone and I'm less worried about privacy/security aspects of visiting random web sites.

Unfortunately, even within the last year, I'm noticing that more and more web sites simply serve me a blank screen without JS enabled. This forces me to stop using such sites, which is gradually reducing the size of the usable web for me. My only hope is for a new trendy server-side rendering fad to come along and rescue the Internet :)
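For what it's worth, the "fad" is trivial to do by hand; a minimal sketch in plain JS (all names here are hypothetical, not any real framework's API):

```javascript
// Minimal server-side rendering sketch (hypothetical, not any framework's
// API): the full document is assembled as a string on the server, so the
// page is readable even with JavaScript disabled on the client.
function renderPage(post) {
  // Escape user-supplied text before interpolating it into the markup.
  const esc = (s) =>
    s.replace(/[&<>]/g, (c) => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;' }[c]));
  return `<!DOCTYPE html>
<html>
<head><title>${esc(post.title)}</title></head>
<body>
  <h1>${esc(post.title)}</h1>
  <p>${esc(post.body)}</p>
</body>
</html>`;
}

// Any HTTP handler can then send renderPage(...) as text/html; no
// client-side hydration is needed just to show the text.
```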


I'm aware of at least one project to do that with Ember

https://ember-fastboot.com/#how-does-it-work-


I think you need to take into account the history of browsers vs. operating systems... operating systems were always a platform, while browsers started as consumers of an electronic document standard.

That being said - I agree with your ideal, and I don't think it's far off. HTML as a view engine combined with WASM as a cross-platform runtime can enable truly cross-platform, industry-standard native applications. I think it's only a matter of time until Electron apps morph into cross-platform native apps running off of a thin WASM layer. Once those are available, then the OS vendors will have a huge competitive advantage if they develop fast native WASM runtimes.
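To illustrate how small the WASM runtime contract already is, a hand-encoded module can be instantiated in a few lines of JS. This is a sketch of the raw API only; the byte array below is a complete module exporting a single i32 add function:

```javascript
// A complete WebAssembly module, hand-encoded byte by byte: it exports
// one function, add(a, b), returning a + b. No HTML or DOM involved;
// this runs in any engine with a WebAssembly implementation.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(add(2, 3)); // 5
```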


The reason browsers use the document as a building block is to provide a consistent and accessible UI to everyone using your site. This is the exact same thing that Windows, Apple, Android, etc do with their UI frameworks that they recommend you use. It also provides the ability for websites to safely degrade on older browsers because unknown elements can still be rendered in the correct hierarchy.

Now in both cases, you can just make your app a fullscreen canvas or opengl/metal viewport or whatever and manually render everything to that while manually handling user input, but you lose interactivity, accessibility, extensibility, etc.


Platforms should provide the necessary low-level APIs to hook into their accessibility- etc... features without having to use their high-level UI frameworks, and they should provide lower-level services like high-quality text rendering.

Mostly this already works, I can open a standard file dialog, or copy data into or out of the system-wide clipboard. If there are any APIs for accessibility missing (e.g. for hooking into screen-readers), I would consider that a bug on that particular platform. There's no reason that applications which use the system UIs should behave any different from applications using custom UIs.


There's no way a platform can provide accessibility for arbitrary pixels you throw onto a screen. You could, by yourself, using the low-level accessibility APIs, but you likely won't since it's a lot of work. Hence the push for using platform UI components: they inform the framework of your intention, and in return this information helps it provide you with better accessibility.


There are ARIA attributes which let you customize the advertised meaning/intent of an element, but there is nothing to, say, mark that a particular location on the page should be considered a button. As far as I'm aware, this matches how Android, Windows, iOS, etc work.

There are so many parts that go into the accessibility of an element, handling tab index, handling keyboard input, handling screenreaders, spoken voice prompting, etc, that a custom solution usually ends up being detrimental to the user's experience.


No one's delivered? There were and are plenty of platforms for full apps on the web. All of them require either loading each time or elevated privileges beyond the normal browser. This is not the reason they haven't taken off. Security isn't the reason either.

It isn't used because it's simply not how either the suppliers of content or the consumers of the web want to use it.

If they want a full featured app, they use it. Even with super fast speeds, latency and minimal startup time still make the browser unattractive for this.

Web users and content providers want linkable documents.

I'd argue that the modern browser stack is so good that we see installed apps adopting html/css/js for the presentation layer.


IMHO the word "installed app" shouldn't even exist. What's the use of "installing" an application these days?

Of course this goes hand in hand with an instant start (no dreaded splash screens please), and not having to download gigabytes upon gigabytes of data upfront.

Modern web apps might be bad at this, and browser engines might be bloated, but native apps aren't any better (even heavy web pages still load many times faster than - for instance - starting Visual Studio or Photoshop).


> What's the use of "installing" an application these days?

Once I "install" an application, for example libreoffice:

- It works even if I do not have a working Internet connection (or back in the day, even if I had removed the original media from the drive);

- It works even if the original is gone.

That is, by "installing" an application, I gain a permanent, offline, working copy of it.


Most of this isn't a given anymore with software vendors going subscription-only like Adobe's Creative Cloud.

On the other hand, PWAs (Progressive Web Apps) also work in offline mode without a "traditional" installation step.


Which is a problem that needs to be pushed back against. SaaS is a step backwards in our relationship with our machines.


> Most of this isn't a given anymore with software vendors going subscription-only like Adobe's Creative Cloud.

Please stop saying this as if it is in any way acceptable. Some parts of the computing world work like this, but this is not, nor should it ever be, the new normal.

You can still definitely use your computer without resorting to such licensed crapware.


Progressive Web Apps that work in offline mode are still installed when first accessed through the browser; they just don't need to go through the "traditional" method (installer, app store).

"Traditional" applications that are subscription-only are only partially installed, if their full functionality requires always-on connectivity.


Not even Steam works like that.

Stay offline for 30 days (iirc) and try to play the games you paid for.


Well, Steam is a DRMed game lending platform; content is only stored off-line because there's no practical way for them to do otherwise. If they could get away with streaming games from their servers, they'd happily do it.


There are many things one could blame Steam for, but AFAIK this limitation has been removed years ago.


What's the issue? There are two that are unavoidable.

The first is the maximum speed of moving data from one physical place to another: moving data from the hard drive to the motherboard, or from RAM to the CPU cache, is far faster than moving it from the internet to a computer. The speed of light is still noticeably slow when you're making round trips to get data.

The other issue is configuration: even if automated, software needs to make decisions about how to deal with different hardware and system differences. This takes time to bootstrap after the initial load, increasing time to usability, or it can be done on the fly, slowing down interactions.
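The propagation bound is easy to put numbers on; a back-of-the-envelope sketch (the ~200 km/ms figure for light in fiber and the 16,000 km distance are rough approximations):

```javascript
// Lower bound on network round-trip time from signal propagation alone.
// Light in optical fiber travels at roughly 2/3 c, i.e. about 200 km/ms;
// real-world RTTs are higher still (routing, queuing, TCP handshakes).
const FIBER_KM_PER_MS = 200;

function minRttMs(distanceKm) {
  // There and back again, at fiber speed.
  return (2 * distanceKm) / FIBER_KM_PER_MS;
}

console.log(minRttMs(16000)); // e.g. New York <-> Sydney, ~16,000 km: 160 ms
```

Local hardware is centimeters away, so its propagation delay is effectively zero by comparison.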


I don't think that is an unpopular opinion, since I'm of the same one.

I agree with you that the URL is the most important feature of the web. Without it, you could just as well deploy native apps because it wouldn't really make that much of a difference.

Without javascript and a heavy client side, I wouldn't have a job today since the application I build is simply not possible to create with just html+css.

Sure there are many web pages that don't need to be a SPA, but a lot of other webpages do need it. I think an increasing amount of webpages are not simply text documents but we can easily see a move from traditional applications to web applications.

It's obvious why: it is truly a "write once, run everywhere" scenario, it's easy to scale to many different devices and there is little to no advantage to writing a native app in most circumstances.

The document web has already been done, just use wordpress or whatever framework to create those sites but the new exciting web for me is the more interactive web where we can seriously move heavy apps into the browser.


What we need, in my opinion, is a new protocol. Leave http(s) for documents and make a new protocol designed from the ground up for interactive applications. That solves the problem of trying to shove an application into a document viewing sub-system and should allow for more creativity and efficiency, depending on what this new protocol turns out to be.

What's happening now is like trying to force ftp to show documents, with markup, and forms, and so forth when it's just supposed to transfer data.


The possibility for HTML documents to support inline scripting was considered even prior to HTML 4[0]. The script tag itself is decades old now, it's supposed to be there and it's as legitimate an element as any other.

The web was never intended to only ever be static, nor is scripting in the browser somehow a corruption or aberration of the web's original purpose. If anything, the web should have been a lot more programmable[1] than it turned out to be.

[0]https://www.w3.org/TR/2018/SPSD-html32-20180315/#script

[1]https://eager.io/blog/a-brief-history-of-weird-scripting-lan...


Maybe that would make it better, maybe not. It's hard to get everyone on the same page working together on a platform that no one owns. If it were easy, there would have been several protocols in use already.

I am content with what we got and am happy about it. I think it works pretty great after all.


I'm not so happy with it, since I'm tired of trying to figure out which domains to allow javascript from when practically every page has a list of a dozen or more. I'm tired of trying to run pages that don't work with any of the browsers I have access to on linux, despite html being a standard. The whole paradigm seems broken to me. It just feels like a monumental kludge.


> any of the browsers I have access to on linux

I use linux as well and I have yet to find a webpage that doesn't work with Firefox and Brave.

Maybe you shouldn't block javascript and instead block trackers and ads. There are a lot of tools to do that like uBlock Origin, pi-hole, privacy badger etc.

Javascript is used by pretty much all websites today to do mostly other stuff than tracking. Why are you blocking it if I may ask?

Also, I am very curious (I really am), what websites do you visit daily? I'm sure at least some of them are depending on javascript and couldn't be solved with html+css alone?


I've had a few sites that wouldn't work for me without booting up my windows laptop and running edge. One of them is the place I make car payments.

I use uBlock Origin at home. I use NoScript at work. I like it because it blocks ads and because it does break much of the web. I don't want auto-playing videos, I don't want an enhanced experience, I just want to find the information I'm looking for. If the website I go to won't show me anything even after enabling the main domain and a CDN, then I move on to the next choice. Enabling js on the main domain usually makes menus and such work. The rest seems to be ad-related, from what I can tell.

There are a handful of sites where I really am looking for an enhanced experience. Mainly my banking sites, sites I frequent for shopping, and some forums. Most other places I visit I'm happy that the site is mostly broken as long as I can see what I went there to find. In many cases, I'd be happier with a 90's era gray page with text and pictures. But then I'm old.


Ok cool to hear your story, thanks for sharing it.

If you want, I can tell you some of the apps I use. I use Slack, Telegram, Visual Studio Code, Discord, Tidal (as a PWA) and a bunch of other web apps.

Some apps I really do require to be fancy, like fastmail. Using it with normal forms and no ajax would suck imo.

I am myself also building an app where we heavily rely on a map and rendering stuff upon it. It is very interactive, and it would not be possible (at least to the extent that we want to deliver it) to make it without javascript.

I also think most of the "desktop" apps I heavily use wouldn't really exist for linux if electron didn't exist which is kind of sad.

But now they do and they work wonders, so well that I use them every day. You can barely tell that Discord or Visual Studio Code are web apps nowadays since they work so damn well, better than most hacky open source alternatives that exist for linux.

I think it's cool that you and some other people can have a functional user experience without javascript but for me it's simply not possible if I want to enjoy all the benefits that running with javascript enables me.

I'd actually rather use web apps, progressive web apps etc. than install a native app that can ruin my entire computer. They are equally fast nowadays, web apps often get updates a lot faster and it isn't a risk to update the app. Just refresh the app and everything is safe and sound, especially with PWAs.

Just check this screenshot, it even looks native (and hardware keys work fine):

https://i.imgur.com/OeOaMPt.png


> [not] block javascript and instead block trackers and ads

That sounds like it would be great if it were even possible. What about the million times that someone relies on ajax.googleapis.com and forces me to either allow it or walk away? Am I really supposed to believe that Google isn't logging my visit, each and every time I get a script from there to restore the functionality to or even put text into someone's page?

And then, what will we do about any original domain owners (hypothetically) cooperating with them by hosting such scripts in order to go under the uBlock radar and keep getting paid? What about sites that intentionally host and run whatever malicious scripts themselves? Right now it is merely convenient that we can often choose (correctly) to run or block based on URLs alone, and I don't expect that to last.

I just admitted to everyone that if there are blocking mechanisms which work around other bits besides URLs, I haven't heard of them. I just admitted to myself that I have no real reason to think that it's even enough, and I might be just punishing myself and making everything needlessly difficult-- for my own little privacy theater.

I suppose that unless uBlock or Pi-Hole can (someday) analyze every script in every page (hopefully in sub-second time), and determine whether it is cosmetic or mandatory or exploitative or malicious based on nothing but the code and the context, we lose anyway.


Sure but what about sites that are hosted on the GCP? Google will surely log that visit as well and there is no easy way for you to tell.

I think privacy is important, but to a certain extent. I think I am pretty tin foiled when it comes to integrity. I don't use any social networks, I don't publish my images and location to everyone. I use services that encrypt my information like SpiderOak for backups and I use encryption on my disk.

That said, some things ought to be allowed, like loading a crappy jquery file from the google cdn. Sure they will log that visit, but come on, that is just like visiting a GCP site. The same amount of logs can be made from that.

I think you provide some valid points, but I think your fears are a bit too far fetched. I also fear Google and Facebook. I use DuckDuckGo as my main search engine and I try to not use Google products. I don't think you can do so much more than that to be honest.


Don't we already have that with TCP and/or UDP?


Isn't it pretty hacky to use a Web browser for that instead of a dedicated virtual machine?


For what exactly?


That view of the browser is what I want to have. It blows my mind that we still have to tell people “don’t click links in your email” because it’s still possible for something terrible to happen if they do.


> for something terrible to happen if they [click links in their email].

There are very few RCEs now in modern browsers (they still exist[0,1], though). The main threat is phishing, because users are gullible (and should be educated).

[0] https://www.mozilla.org/en-US/security/advisories/mfsa2019-1...

[1] https://objective-see.com/blog/blog_0x43.html

And: https://news.ycombinator.com/item?id=20283922


You don't need to exploit a browser when Adobe still can't fix the holes in Reader and corporate users are still heavily dependent on Outlook.


The real question is why should links in an email message be clickable? Restricting email messages to plain text would make it much more difficult for attacks like that to succeed.


URLs are plain text... Can also easily be clickable, depending on your e-mail software.


They are, but some email clients will display text that doesn't match the original URL. Hence the advice to hover the mouse pointer over the link and read the original URL that way.
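That advice exists because the visible text and the href are independent; a toy check for the mismatch could look like this (a hypothetical sketch with a naive regex, nothing a real mail client would rely on):

```javascript
// Flag anchors whose visible text looks like a URL but points at a
// different host than the actual href - the classic phishing trick
// behind "hover over the link first".
function linkTextMismatch(anchorHtml) {
  const m = /<a\s+[^>]*href="([^"]+)"[^>]*>([^<]+)<\/a>/i.exec(anchorHtml);
  if (!m) return false;
  const [, href, text] = m;
  // Only suspicious when the text itself resembles a URL or hostname.
  const looksLikeUrl = /^(https?:\/\/)?[\w.-]+\.[a-z]{2,}/i.test(text.trim());
  if (!looksLikeUrl) return false;
  const host = (u) => u.replace(/^https?:\/\//i, '').split('/')[0].toLowerCase();
  return host(href) !== host(text.trim());
}

linkTextMismatch('<a href="https://evil.example">https://paypal.com</a>');      // true
linkTextMismatch('<a href="https://paypal.com/login">https://paypal.com</a>');  // false
linkTextMismatch('<a href="https://example.com">click here</a>');               // false
```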


If you use plain text mode they won't be able to. Hence the advice to disable HTML in e-mails.


... and make emails completely useless.


Ever heard of copy & paste?

Won’t make them “useless”, maybe inconvenient for non-security minded individuals.


If you make emails inconvenient for non-security minded people by removing something as important as links, you will push mainstream users away from the medium (because other ways to communicate will definitely support links and not force people to copy and paste), which doesn't help or solve anything.


Imagine that in the 90s, a "URLs only" web took off -- not too hard to envision, really, considering how many sites there were that were giant slabs of sliced images with hotspots for links between, or how Sun was pushing the vision of URLs essentially indexing Java client apps.

If that had ended up as the dominant paradigm... Google (or a search engine like it) wouldn't exist.

A URLs-only web, one that sees the browser as nothing more than The VM That Lived™, is actually the more limited vision.


> My contrary opinion is that the actually important feature of the web is the URL,

As others have pointed out, that's hardly a contrary opinion.

To quote Roy Fielding's Dissertation (p 109).

"Uniform Resource Identifiers (URI) are both the simplest element of the Web architecture and the most important."

https://www.ics.uci.edu/~fielding/pubs/dissertation/fielding...


> As others have pointed out, that's hardly a contrary opinion.

Opinions are almost universally contrary, despite the knowledge that the URL can be a tool. How often is it put forth that the URL is meant to be idempotent, or that the URL is supposed to be mapped to another routing system, etc.? The number of commercial sites where the URL is a field you are meant to manipulate (other than, say, search engines, and even they don't point it out for you) is so small as to be insignificant.


ChromeOS is pretty close to what you are talking about... yeah, it's still an App store, but IIRC, you can add an "app" from any URL. It would be nice to see a more open version of what Adobe AIR tried to do, but with node/electron at the core. Unfortunately $VERSIONS makes it difficult to support well with a common runtime, but would still be well worth seeing.

I don't mind Electron too much, current (for 5+ years or so) desktops support running a handful of them without issue (provided at least 8gb ram).

I think that once a few quirks are worked out, we may well see a "one runtime to rule them all" with WebAssembly. WASM + Sandboxed FS + Canvas/WebGL could go a VERY long way towards a lot of application use cases. There's been a lot of work underway for this type of thing and it could be very interesting to say the least. I do think we'll wind up with an XML based markup for UI over the top, maybe a ReactNative-like syntax for WebAssembly+Canvas or WebGL would be very cool.


It would be nice if operating systems would be exactly that, but for some reason "commercial" OS vendors are not able or willing to provide a safe "native" sandbox completely uncoupled from curated app stores.

Apple is going down this route with code signing and granular permissions with Macintosh applications that don’t have to be part of the Mac App Store and there is still whining from geeks that this is leading to a walled garden, even though there are approved ways to work around it.

It's the same with Electron apps, by the way; those wouldn't exist if operating systems came with cross-platform UI frameworks that are actually better than the unholy mix of HTML+CSS+JS. But Microsoft, Apple etc. had decades to get their act together without delivering.

One of the problems with Electron is that it’s the same interface everywhere. I don’t want the same interface on a Mac that I have on Windows.


I'm sorry, but you have to be a special kind of delusional to think Apple is doing something to move away from the walled garden lol.

They haven't done it because it doesn't bring any benefits to the greedy corps, and Apple is the worst offender with their dumbed down, shackled OS.


Did you read the post or are you just responding to your own prejudices? The parent post said that OS vendors should have sandboxes without an App Store. Isn’t Apple doing exactly that with the Macintosh? The Mac App Store is optional and despite all of the conspiracies, it was introduced over 8 years ago and you can release an application on the Mac just as easily today as you could then.

So how is MacOS - a real certified Unix OS “dumbed down”?


Well, I remember people warning that having an official Android Market would make distributing via side loading not a viable option... Lo and behold, these days Fortnite gets criticized for being delivered via an .apk because it's "unsafe" !



Something that has nothing to do with sideloading or even Android?


Wrong link above:

https://www.cnet.com/news/fortnites-battle-royale-with-andro...

A storm of security issues is closing in on the Android version of Fortnite. And it isn't likely to pass anytime soon.

Developer Epic Games just fixed a security flaw with Fortnite's installer for Android devices, but researchers are expecting a flurry of problems for the online game as it gets more popular on Android.

That's because Fortnite isn't available through Google's Play Store. Epic instead chose an unorthodox -- and more dangerous -- route for the game's fans. Rather than download it through the official Google app store, players need to download the game and "sideload" the app on their Android devices instead.


Couldn't agree more, html should be just one implementation amongst others. It's good for documents, it's lousy for richer experiences. The rich experience shouldn't be built on top of HTML, it should live next to it.


Indeed, a browser should be just a sandbox, nothing more.

All the HTML+CSS+JS stuff only makes browsers difficult to secure, and also causes bloat and performance issues when they are not needed.

Furthermore, all these hairy web specifications cause compatibility issues, and are a barrier to entry for new browser makers.

If people want HTML or CSS or JS, then they should implement these inside the sandbox. In practice, this would mean that developers would simply include one of the standard libs for rendering HTML, etc., and these standard libraries can be cached by the browser, so their use does not cause performance/bandwidth penalties.


I agree. If we want an application platform, let's have something designed first and foremost to be an application platform instead of the garbage fire we've bolted to the skeleton of HTML.

Here's how you do it: create the new platform and its spec. Include an open 'document rendering engine' to replace that portion of the web. For backwards compatibility, go ahead and port a web browser to the new platform. For forwards compatibility, port the new platform to the current web.


Ok, but that isn't "the web."

You are completely free to write and launch a new app that does this today. Just don't try and shoehorn it into my web browser.


> Ok, but that isn't "the web."

It's "the sandboxed, browser agnostic web".


I'm not sure that it can be called "the Web" if it doesn't have directly accessible pages linked by hypertext, instead having a single page/app for instance (even if it runs on the top of the HyperText Protocol...) We're getting into non-Web Internet territory here...


The web browser _is_ the "safe" sandbox. Your dream of competing OS vendors coming together to build a standard UI platform makes no sense whatsoever. The current state of web apps is mostly a factor of developers being willing to do any amount of work whatsoever to avoid having to learn how to do things differently than they already know (i.e., writing good native apps for multiple platforms, which would require learning multiple frameworks; so instead they create their own abstracted framework within a sandbox).


> It would be nice if operating systems would be exactly that, but for some reason "commercial" OS vendors are not able or willing to provide a safe "native" sandbox completely uncoupled from curated app stores.

Windows 10 comes with a VM-based application sandbox. But it's meant for power users and has to be explicitly enabled.

https://www.windowscentral.com/how-use-windows-sandbox-windo...
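For reference, a Windows Sandbox instance can be tuned with a small XML configuration file (a `.wsb` file). A minimal sketch, using the element names from Microsoft's documented schema (the host path here is a placeholder):

```xml
<!-- sandbox.wsb: double-clicking this file launches a configured Windows Sandbox -->
<Configuration>
  <!-- Cut the VM off from the network entirely -->
  <Networking>Disable</Networking>
  <MappedFolders>
    <MappedFolder>
      <!-- Expose one host folder read-only inside the sandbox -->
      <HostFolder>C:\Users\Public\Downloads</HostFolder>
      <ReadOnly>true</ReadOnly>
    </MappedFolder>
  </MappedFolders>
</Configuration>
```

Everything in the VM is discarded when the sandbox window closes, which is what makes it usable as a throwaway "native" sandbox.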


This. Absolutely this! We didn't get the cross-platform UI framework we needed; we got the one we deserved. SwiftUI looks amazing - but again, Apple-only. React looked promising, but mainly for desktop, and it's not without its own issues.

I think the W3C is a big part of why the web works so well cross-platform... even though the web was supposed to be "so much more". It's quite hard to imagine a similar kind of body for cross-platform UI frameworks.


If I remember correctly, Xamarin can be used to target all existing platforms, even with native look and feel. In practice I don’t think I’ve seen it in the wild. Maybe it’s being used by Microsoft for cross-platform applications? (e.g. Word, Excel, OneNote, etc.)


What makes React inherently un-mobile except the fact that all browsers have a touch timeout thus making UI seemingly slower?
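The "touch timeout" here is presumably the historical 300ms delay browsers inserted after a tap to wait for a possible double-tap zoom. Modern browsers drop that delay when a page opts out of double-tap zoom; a minimal sketch using the standard mechanisms (no framework required):

```html
<!-- Declaring a responsive viewport lets Chrome/Firefox skip the 300ms tap delay -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* touch-action: manipulation keeps panning and pinch-zoom but disables
     double-tap zoom, which removes the tap delay on these elements
     (this is also the route that works in Safari) */
  a, button { touch-action: manipulation; }
</style>
```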


Don’t know if it’s un-mobile per se. But it’s heavy, JS-wise, tends to promote SPAs, and can be quite tough to get to play well with a hybrid app. The React team seems to put their effort into React Native, and that might be the only framework I’d never ever ever “touch” again - pun intended :) We do use React for some mobile stuff; the issue seems to be that you still need a lot of other stuff, and your app/UX still feels a bit like a 2nd-class citizen.


Have you ever written anything in Qt?


I've recently been using it and I am very disappointed. Some tasks that should be totally trivial are incredibly hard to get right and require platform-specific solutions. Nothing unsolvable, but I was expecting a much more straightforward experience from a product that has been promoted as the cross-platform toolkit since the 90s.


I co-developed a Qt C++ app of ~40,000 lines. I think the amount of platform-specific code was at most a couple of pages (handling macOS' behavior to keep an app alive when all its windows are closed).

It is not perfect, but it feels a lot more native on every platform than virtually any web or electron app I have seen (including predictable keyboard shortcuts).


I don't have a large amount of experience but when I worked with Qt I was blown away with how friendly and cross-platform it all was. It's pretty close to write once, compile to anything.


I have not. I’ve looked at it quite a bit, and it looks interesting.


Apple planned to use web apps for iOS, but developers wanted bare-metal performance, and users also want bare-metal performance. So don't blame the platforms; blame users and developers. As a Linux user you can, however, restrict what an app can do using AppArmor.


Well, besides formatting, HTML is <a href="...">


This is the kind of thing that drove me away from frontend development a few years back. I explicitly ignored the learn-the-javascript-framework-du-jour attitudes.

On a project where I was doing frontend work, I just tried to think about the semantics of html elements and http requests, and reason my way through the tasks from first principles. I really enjoyed doing that actually, and don't regret it. But it didn't teach me angular which is what the frontend people at that company were using at the time, or probably any of the other ones.

I hope to venture into frontend land again, because I think it's a valuable skillset, but I hope the next time I do things will have settled into something a bit more sane. When I find a layer of a technology stack that seems to have been misused or misunderstood, my tendency has been to stop at that point and dive into a rabbit hole trying to understand what is going wrong and how to fix it.


So, frontend tooling is freeing developers from mundane tasks and cognitive overhead at the cost of not strictly following standards and recommendations. The tooling could be improved to follow standards more closely, but maybe those standards ought not to be followed? Maybe standards need to evolve with real-world use. DIV has broader applications than what a committee envisioned. Sites are running, business is accomplished, goals are met while not strictly adhering to antiquated standards.

If I make a tool with an intended use but the world uses it in another unanticipated way, and they're happy about doing so, then that is a very fortunate accident. I then shape the next iteration of my tool around real world use.


Someone I met recently who delegates work to web developers for a living had this to say about JavaScript frameworks: they are very seldom necessary for a job, and their main function is to create more work for more developers to get paid.

As an “old school” designer who writes HTML and CSS by hand, with JavaScript for progressive enhancement, I found myself agreeing, and it reminded me of this, from back in 2001: https://www.thenoodleincident.com/tutorials/box_lesson/why.h...

> The idea of HTML et al is a document markup language that would grow. Its architects saw we were going digital and sat back and took a long view. A very long view. They laid the foundations for a language that would work with all the conceivable technology of the time, and would be expandable to the unconceivable technology that would follow. So that documents would never be unretrievable due to age. Ever. A browser in 2050 would be able to read a 1994 document. And in 2094. And so on. They made a stand of cultural importance to the world.

The slow, less usable feeling of JavaScript-heavy websites — my current personal website included — has remained a constant, and I mourn the reduced usability of View Source as a map for learning, lost in the forest of DIVs and obfuscated JS in much of today’s World Wide Web.
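The "JavaScript for progressive enhancement" approach mentioned above can be sketched as: plain HTML that works on its own, with script layered on top. The endpoint and ids below are illustrative, not from the original comment:

```html
<!-- Works with no JavaScript at all: the browser submits the form normally -->
<form id="search" action="/search" method="get">
  <label for="q">Search</label>
  <input id="q" name="q" type="search" required>
  <button type="submit">Go</button>
</form>

<script>
  // Enhancement layer: if JS is available, intercept the submit and
  // fetch results in-page instead of doing a full navigation.
  document.getElementById('search').addEventListener('submit', function (e) {
    e.preventDefault();
    fetch(this.action + '?' + new URLSearchParams(new FormData(this)))
      .then(function (r) { return r.text(); })
      .then(function (html) { /* inject results into the page */ });
  });
</script>
```

If the script fails to load or errors out, the page degrades to the working HTML form rather than to a blank screen.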


I'd argue the opposite point: the problem with static websites is that they were bad at capturing state. That is what led to the bloat of jQuery messes in websites and inefficient parsing of the HTML tree. Now that developers have so many platforms to target in so many ways, having a consistent way for state to permeate through websites reduces work.

Don't get me wrong, I love crafting my website's 20kb of CSS over static files. Always sign me up for a static site, but for complex websites, where interactivity and state are needed, it's important to have these frameworks.

