Every negative thing said about the web has been true of every other platform so far. The criticism just seems to ignore how bad software has always been (on average).
"Web development is slowly reinventing the 1990's."
The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.
"Web apps are impossible to secure."
Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
"Buffers that don’t specify their length"
Consumers don't think about security the way an IT professional does. A programmer thinks of all the ways that a program could fuck up your computer; it's a large part of our job description. The average person is terrible at envisioning things that don't exist or contemplating the consequences of hypotheticals that haven't happened. Their litmus test for whether a platform is secure is "Have I been burned by software on this platform in the past?" If they have been burned enough times by the current incumbent, they start looking around for alternatives that haven't screwed them over yet. If they find anything that does what they need it to do and whose authors promise that it's more secure, they'll switch. Extra bonus points if it has added functionality like fitting in your pocket or letting you instantly talk with anyone on earth.
The depressing corollary of this is that security is not selected for by the market. The key attribute that customers select for is "has it screwed me yet?", which all new systems without obvious vulnerabilities can claim because the bad guys don't have time or incentive to write exploits for them yet. Somebody who actually builds a secure system will be spending resources securing it that they won't be spending evangelizing it; they'll lose out to systems that promise security (and usually address a few specific attacks on the previous incumbent). And so the tech industry will naturally oscillate on a ~20-year cycle with new platforms replacing old ones, gaining adoption on better convenience & security, attracting bad actors who take advantage of their vulnerabilities, becoming unusable because of the bad actors, and then eventually being replaced by fresh new platforms.
On the plus side, this is a full-employment theorem for tech entrepreneurs.
I'm not sure programmers are much better. There's a long history of security vulnerabilities being reinvented over and over. Like CSRF is simply an instance of an attack first named in the mid 80s ("confused deputies"). And why are buffer overflows still a thing? It's not like there's insufficient knowledge about how to mitigate them.
And blaming this on the market is a cheap attempt to dodge responsibility. If programmers paid more than lip service to responsibility, they'd push for safer languages.
If programmers paid more than lip service to responsibility, the whole dumb paradigm of "worse is better" would not exist in the first place. As it is, we let the market decide, and we even indoctrinate young engineers into thinking that business needs is what always matters the most, and everything else is a waste of time (er, "premature optimization").
I used to think like this but I've come to realize that there are two underlying tensions at play:
- How you think the world should work;
- How the world really works.
It turns out that good technical people tend to dwell a lot on the first line of thinking.
Good sales/marketing types on the other hand (are trained to) dwell on the second line of thinking, and they exploit this understanding to sell stuff. Their contributions in a company are, in general, easier to measure relative to an engineer's, since revenue can be directly attributed to specific sales efforts.
"Worse is better" is really just a pithy quote on how the world works and it's acceptance is crucial to building a viable business. Make of that what you will.
How many hacks, data breaches, and privacy violations does it take for consumers to start giving a shit?
Also, any programmer will tell you that just because an issue is tagged "security" doesn't mean it will make it into the sprint. Programmers rarely get to set priorities.
There's a quote by Douglas Adams that pops up in my mind whenever the subject comes up:
> Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.
This is the only explanation there can be for this. Every time there's a breach somewhere (of which there obviously are plenty), there's a big outrage. But those who should go "oh, could that happen to us, too?" choose to ignore it, usually with hand-waving explanations of how the other guys were obvious idiots and why the whole thing doesn't apply to them.
This obviously goes for consumers and producers.
In other words, it takes a better alternative to exist. Better can mean cheaper or faster or easier, a lot of things. That can be accelerated by the economic concept of "war" (i.e. any situation that makes alternatives a necessity).
The incentives for someone to break into a major retailer, credit card company, or credit bureau are much different from those for Widget Co.'s internal customer service web database. What I think the article is missing, even though it makes a lot of good points, is that if there's a huge paycheck at the end of it, there will always be someone trying to exploit your system no matter how well designed it is. And if they can't hack the code quickly, they'll learn to "hack" the people operating the code.
You are oversimplifying. Dunno in what programming area you work (or if it's software at all) but "we work with languages X and Y" is something you'll find in 100% of all job adverts.
Tech decisions are pushed as political decisions from people who can't discern a Lumia phone from an average Android. That's the real problem in many cases.
That there exist a lot of irresponsible programmers is a fact as well.
It used to be that RandomBusinessApp would hit this stuff; now most of it ends up in Java, so it might still crash, but usually it's mitigated better.
Most programmers want to do their job quickly and easily, and go home.
People really thought ActiveX was brilliant... until security became an issue. I can remember when the tide changed.
Anyway, fair points otherwise. Cheers.
Another advantage is that they are inherently available across OSes, usually across different browsers (but we know what it takes.)
Finally, they used to be much easier to develop.
tl;dr: larger audience, lower costs.
The true definition of a full stack developer in those days would make today's full stack developers faint.
You had to know how to set up hardware with an OS, your software, and databases, often having to run your gear in a datacentre yourself and figure out your own redundancy, all for the opportunity to code something to try out. Being equally competent in hardware, networking, administration, scaling, and developing a web app was kind of fun. Now those jobs are cut into many jobs.
ActiveX was what Flash tried to be: the promise of Java, one codebase running everywhere.
Seeing WebAssembly is exciting.
This happens in other areas besides applications as well. Programming languages, operating systems. This leads to an eternal re-invention of the wheel in different forms without ever really moving on.
I refer to these as "unstable industries": they all exhibit the same dynamic, that the consequences of success undermine the reasons for that success in the first place. So for example, the key factor that makes an editor or new devtool popular is that it lets you accomplish your task and then gets out of the way, but when you've developed a successful editor or devtool, lots of programmers want to help work on it, they all want to make their mark, and suddenly it gets in your way instead of out of your way. For a social network, the primary driver of success is that all the cool kids who you want to be like are on it, which makes everyone want to get on it, and suddenly the majority of people on it aren't cool. For a review site, the primary driver of success is that people are honest and sharing their experiences out of the goodness of their heart, which brings in readers, which makes the products being reviewed really want to game the reviews, which destroys the trustworthiness of the reviews.
All of these industries are cyclical, and you can make a lot of money - tens of billions of dollars - if you time your entry & exit at the right parts of the cycle. The problem is that actually figuring out that timing is non-trivial (and left as an exercise for the reader), and then you have to contend with a large amount of work and similarly hungry competitors.
We started out with OS threads (I guess processes came first but whatever) and now we're trying to figure out what the next paradigm should be. It looks to me like it's Hoare (channels, etc) for systems programming and actors for distributed systems, both really really old ideas. To be fair there are other ideas (STM, futures, etc) that fill their own niches, but they either specialize on a smaller problem (futures) or they're still not quite ready for popular adoption (STM). If this is cyclical then I think we're pretty early in the first cycle.
Sure, the spotlight moves from one model to the other and back, but that's because the hype train cannot focus on many things at the same time, not because the ideas go out of style.
Only if it is open source. Seems like Sublime Text (just an example) has avoided this effect... perhaps evidence that open source is not the best model for every kind of software?
There's a flip side to everything. In this case, if you "fixed" this problem, it would imply a steady-state world where nothing ever changed, nothing was ever replaced, and nobody could ever take action to fix the things bugging them. To me, this is the ultimate in dystopias. It's like the world in The Giver or Tuck Everlasting, far more oppressive than the knowledge that everything we'll ever build will eventually turn to dust.
Or we could get rid of humans and let machines rule the earth? Actually, that wouldn't work either, these dynamics are inherent in any system with multiple independent actors and a drive toward making things better. If robots did manage to replace humans (ignoring the fact that this is already most peoples' worst nightmare), then the robots would simply find that all their institutions were impermanent and subject to collapse as well.
Remember that in some areas, the web is far, far more advanced than software development was in the 90s. It's not unheard of for web companies to push a new version every day, without their customers even noticing. At my very first job in 2000, I did InstallShield packaging and final integration testing. InstallShield had a very high likelihood of screwing up other programs on the system (when was the last time Google stopped working because Hacker News screwed up the latest update?), because all it does is write to various file paths, most of which were shared amongst programs and had no ACLs. So I'd go and stick the final binary on one of a dozen VMs (virtualization was itself a huge leap forward) where we could test that everything still worked in a given configuration, and try installing over a few other applications that did similar things to make sure we weren't breaking anything else.
We never did ship - we ran out of money first - but typical release cycles in that era were around 6 months (you still see this in Ubuntu releases, and that was a huge improvement on programs that came before it).
And this was still post-Internet, where you could distribute stuff on a webserver. Go back another decade and you'd be working with a publisher, cutting a master floppy disk, printing up manuals, and distributing to retail stores. You'd have one chance to get it right, and if you didn't, you went out of business.
The thing is, many of the things that made the web such a win in distribution & ubiquity are exactly the same things that this article is complaining about. Move to a binary protocol and you can't do "view source" or open a saved HTML file in a text editor to learn what the author did; programming becomes a high priesthood again. Length-prefix all elements instead of using closing tags and you can't paste in a snippet of HTML without the aid of a compiler; no more formatted text on forums, no more analytics or tracking, no more like buttons, no more ad networks (actually, I can see the appeal now ;-)). Require a compiler to author & distribute a web page and you can't get the critical mass of long-tail content that made the web popular in the first place.
You can see the appeal of all of these suggestions now, in a world where things have gotten complicated enough that only the high priesthood of JS developers can understand it anyway, and we're overrun with ads and trackers and like buttons that everyone has gotten tired of anyway, and a few big companies control most of the web anyway. But we wouldn't have gotten to that point without the content & apps created by people who got started by "view source" on a webpage.
My concern, as readers who have seen some of my other HN comments may guess, is that the next time someone starts over, they'll neglect accessibility (in the sense of working with screen readers and the like), and people with disabilities will be barred from accessing some important things. "How hard can it be?", the brave new platform developer might think. "I just have to render some widgets on the screen. No bloat!" It's hard enough to make consistent progress in this area; it would help if there were less churn.
Edit: I guess what I (very selfishly) wish for is steady state on UI design and implementation so accessibility can be perfected. I know that's not fair to everyone else though. Other things need improving too.
I disagree with that. Using binary formats to exchange data between programs doesn't preclude using textual formats at the human/machine boundary. Yes, "view source" needs to be more intelligent than just displaying raw bytes, but that is already the case with today's textual formats. Everything is minified and obfuscated, so the browser dev tools already have to include a "prettify" option. Moving to a binary protocol would turn that into "decompile" and make it mandatory, but it effectively already is.
Requiring a compiler to author and distribute a web page is no different than requiring a web server or a CGI framework or the JS-to-JS transpiler du jour. It adds another step in the pipeline that needs to be automated away for casual users, but that's manageable. Even if the web world moves to binary formats (as WebAssembly seems to indicate), your one-click hosting provider can still let you work with plain HTML/CSS/JS and abstract the rest; just like it abstracts DNS/HTTP/caching/whatever.
This will be a legal problem. At least in my jurisdiction, transforming source code (which is what prettifying is) is not subject to legal restrictions, but decompiling binary machine code into readable source code is forbidden by copyright law. (For the same reason, I'm concerned about WASM.)
That one single goal we all share and agree on, and know exactly how to get to so progress can be steady and incremental and continuous?
You strive for excellence
You keep improving
Like Jiro did with sushi
And then the product dies with you
Indeed. And then we made sure all interesting data (email, business data, code (github/gerrit etc)) was made available to the Web browser - so pwning the computer became irrelevant.
It's indeed like the 90s - from object-oriented office formats, via macros and executable documents, to macro viruses - and total security failure. Now we have networked executable documents with no uniform address-level acl/auth/authz framework (as one in theory could have on an intranet-wide filesystem).
So, yeah, I kind of agree with the author - we're in a bad place. I used to worry about this 10 years ago; by now I've sort of gotten used to the idea that we run the world on duct tape and hand-written signs that say: "Keep out - private property. Beware of the leopard."
Unfortunately, this is not entirely true. There were bugs in image processing, PDF processing (some browsers would load it without user prompting), Flash, video decoders, etc. IIRC even in JS engines, though those are more rare. Of course, you could go text-only, but then you couldn't properly access about 99% of modern websites.
But downloading an EXE is basically allowing arbitrary code execution on your machine no matter what. So _even with the security bugs_, webapps are basically safer than installing a native app on desktop, at least in its current state.
I see your point though. There are still a lot of entry points we need to be careful about
> The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.
Yes, this reminds me of: "Wasn't all this done years ago at Xerox PARC? (No one remembers what was really done at PARC, but everyone else will assume you remember something they don't.)" 
> "Buffers that don’t specify their length"
Most injection attacks are due to this; if HTML used length-prefixed tags rather than open/close tags, most injection attacks would go away immediately.
That's not really the problem. The problem is that there is no distinction between data and control, so everything comes to you in one binary stream. If the control aspect were out-of-band, the problem would really go away.
Length prefixes will just turn into one more thing to overwrite or intercept and change. That's much harder to do when you can't get at the control channel but just at the data channel. Many old school protocols worked like this.
This is the important takeaway here. Changing the encoding simply swaps out one set of vulnerabilities and attacks for another. Separating control flow and data is the actual silver bullet for this category of attacks.
Unfortunately, there’s rarely ever a totally clear logical separation between the two. Anything you want to bucket into “control”, someone else is going to want the client to be able to manipulate as data.
Granted, if you made that control channel stateful, you'd make a lot of problems go away. But you could do that with a combined control/data stream too.
What am I missing? How would an out-of-band control channel make things easier?
That said, I think many issues with the web could be solved by implementing new protocols as opposed to shoehorning everything into HTTP just to avoid a firewall...
So <html>abc</html> would go as
<html><datum 1></html> where datum 1 would refer to the first datum in the data stream, being 'abc', and no matter what trickery you'd pull to try to put another tag or executable bit or other such nonsense in the datum, it would never be interpreted. This blocks any and all attacks based on being able to trick the server or the eventual recipient browser of the two streams into doing something active with the datum; it can only be passive data by definition.
For comparison take DTMF, which is in-band signalling and so easily spoofed (and with a 'bluebox', additional tones may be generated that unlock interesting capabilities in systems on the line), and compare it with GSM, which does all its signalling out-of-band and so is much harder to spoof.
The web is basically like DTMF: if you can enter data into a form, and that data is spit back out again in some web page to be rendered by the browser later on, you have a vector to inject something malicious, and it will take a very well-thought-out sanitization process to get rid of all the possibilities in which you might do that.
If the web were more like GSM, you could sit there and inject data into the data channel until the cows come home, but it would never ever lead to a security issue.
No amount of extra encoding and checks will ever close these holes completely as long as the data stays 'in band' with the control information.
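For a sense of what that buys you, here's a minimal Python sketch of a two-stream receiver (a made-up format, purely illustrative): the control stream carries structure plus numbered datum references, and a datum can only ever become a text leaf, never markup.

    # Control channel: structure only. Data channel: opaque payloads.
    control = [("open", "html"), ("datum", 0), ("close", None)]
    data = ['</html><script>alert("boo")</script>']  # hostile payload stays inert

    def build_tree(control, data):
        root = ("root", [])
        stack = [root]
        for kind, arg in control:
            if kind == "open":
                node = (arg, [])
                stack[-1][1].append(node)
                stack.append(node)
            elif kind == "close":
                stack.pop()
            elif kind == "datum":
                # Attached as a text leaf by construction; the payload is
                # never scanned for tags, scripts, or anything else.
                stack[-1][1].append(("#text", data[arg]))
        return root

    print(build_tree(control, data))
    # ('root', [('html', [('#text', '</html><script>alert("boo")</script>')])])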
I could easily see making <script> and <link> resources required to be separately requested (like images are now -- ignoring data/base64 resources), but we're back to redefining HTML.
I'm not arguing against that...
It's really hard to have these types of debates though, because everyone focuses on different problems of the HTTP/HTML webapp request/response cycle. Like you said, adding separate control/data channels would help, but that doesn't solve SQL injection attacks (which is a whole other class, but that's not really an HTTP/HTML issue, it's a backend issue, and I don't see how you'd avoid that with a simple protocol change). Simply making HTTP stateful could potentially solve a different class of session hijacking, etc...
There are so many attack vectors that I think it does make sense to think about what a replacement for HTTP/HTML would look like. Most of these problems arise from trying to re-engineer a document format (HTML) to support interactive webapps. We should think about how to do this better... (without recreating ActiveX -- shudder).
This has been implemented in HTTP (not HTML); you can enable the requirement right now by serving your pages with an appropriate Content-Security-Policy header.
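For example, a baseline policy like the one below restricts scripts to the page's own origin and blocks inline script; this is one common starting point, and the right directives depend on the app:

    Content-Security-Policy: default-src 'self'; script-src 'self'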
(html "This is not (malicious \"boo\")")
(4:html29:This is not (malicious "boo"))
(html "user content")
user content := " (script "something malicious")"
(html "" (script "something malicious"))
Or, run your data through stored procedures instead. It took me a while to figure out why stored procedures were so much more secure than regular queries. I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.
Perhaps the most naive example: https://pastebin.com/acQqhDvy
I think they're more useful for organization and abstraction than security. Then again, a well organized and smartly abstracted system can lead to better security!
But I think bind parameters are probably a better example of security.
Binding effectively separates the data from the logic. So you define two separate types of things, and then safely join those things together by binding them. It doesn't matter too much whether that happens in the application making a call to the database or in the database in a stored procedure. Obviously this same concept can be applied at many different points along the application stack. The analogous concept in the UI is templating. You define a template and then safely inject data into that template.
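As a minimal sketch of binding, using Python's stdlib sqlite3 (the table is hypothetical): the ? placeholder keeps the payload on the data side of the boundary, so it can never alter the query.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")  # hypothetical schema

    evil = "Robert'); DROP TABLE users; --"
    # Bound as data: the driver ships the query and the value separately,
    # so the payload is stored verbatim instead of being executed.
    conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))
    print(conn.execute("SELECT name FROM users").fetchall())
    # [("Robert'); DROP TABLE users; --",)]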
This isn't well defined. Take this pseudocode stored procedure (OK, it's a Python function; the query helpers are hypothetical):

    def stored_procedure(user_input):
        if user_input == 1:
            return run_report_query()   # fixed, whitelisted query
        elif user_input == 2:
            return run_summary_query()  # fixed, whitelisted query
        return "Go away."
The security gain from a stored procedure, on this analysis, is not that it won't run user input as code. It will! The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.
The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this -- the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.
This could also be achieved with a well-written microservice/package that developers go through, without depending on a DBA.
String-escaping SQL? How is that still a thing in 2017? The problem has been solved for two decades.
* At least with .NET/Entity Framework/LINQ, you mock out your DbContext and test your queries with an in-memory List<>.
> harder to unit test
Disagree. I've implemented unit tests that connect to the normal staging instance of our database, clone the relevant parts of the schema into a throw-away namespace as temporary tables, and run the tests in that fresh namespace. About 100 lines of Perl.
That was five years ago. These days, it's even easier to do this correctly since containers allow you to quickly spin up a fresh Postgres etc. in the unit test runner.
It also need not be correct. If you're only ever doing "SELECT * FROM $table WHERE id = ?", you're fine, but a lot of real-world queries will use RDBMS-specific syntax. For example, off the top of my head, the function "greatest()" in Postgres is called "max()" in SQLite. What is it called in your mock?
Mocking out tables with in-memory lists adds a huge amount of extra code that's specific to the test (the part that parses and executes SQL on the lists). C# has this part built in via LINQ, but most other languages don't.
By the way, I see no practical difference between "in-memory lists" and SQLite, which is what I'm currently using for tests of RDBMS-using components, except for the fact that SQLite is much more well tested than $random_SQL_mocking_library (except, maybe, LINQ).
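For illustration, a fresh-database-per-test setup along those lines can be tiny; a sketch with Python's stdlib and a hypothetical schema:

    import sqlite3
    import unittest

    class UserQueryTest(unittest.TestCase):
        def setUp(self):
            # A fresh in-memory database per test: isolated, fast, real SQL.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

        def test_lookup_by_id(self):
            self.conn.execute("INSERT INTO users VALUES (1, 'alice')")
            row = self.conn.execute(
                "SELECT name FROM users WHERE id = ?", (1,)).fetchone()
            self.assertEqual(row[0], "alice")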
The way LINQ works, with queries actually compiled to expression trees at compile time and the provider translating them to the destination at runtime (database-specific SQL, Mongo queries, or C#/IL), does make this type of testing possible.
If this was the case, it would be near-impossible to write HTML by hand. And if you're writing HTML with a tool (React, HAML etc.), the tool could be doing HTML escaping correctly instead. This isn't an issue with HTML, it's an issue with human error.
All security issues are due to human error. Those are solved by building better tools.
> If this was the case, it would be near-impossible to write HTML by hand.
If, besides the text form, there were a well-defined length-prefixed binary representation, we could simply compile HTML to binary-HTML, which would immediately make the web not only safer, but also much more efficient (it's scary if you think just how much parsing and reparsing goes on when displaying a web page).
My point is that there's nothing wrong with HTML. HTML isn't a tool, it's a format for storing and transmitting hypertext. If you're using React or HAML or any of the other HTML-generating tools, you're effectively immune from XSS. I'm putting forth that developers aren't using effective tools (shame on every templating engine that doesn't escape by default), and that calling the web as a platform bad is a bit nonsensical. It's like saying "folks are writing asm by hand and their code has security issues, therefore x86_64 is insecure".
However, no such tool exists. I think there's a deeper issue here: the sheer number of ways you can generate XSS alone, even ignoring the other exploit types, is far beyond what any tool is capable of stopping. Look at one of the XSS holes found by Homakov that I linked to from my article:
Show me the tool that would have avoided that type of exploit, without already knowing about it and having some incredibly specific hardcoded static analysis rule.
When I argue that the web is unsafe by design, it's because cases like that aren't rare, they're common. To paraphrase Veekun, scratch the surface of web security and you'll find yourself in a bottomless downward spiral, uncovering more and more horrifying trivia.
I think you're missing another two obvious explanations:
1. Lack of education when picking a tool (copy paste from bad SO answers is a frequent source of bad code).
2. Developers don't care. If it works, why bother wrapping your head the rest of the way around to understand why it works or whether it's secure?
> By itself, it is not an XSS. But if the backend is/was running Ruby on Rails (presumably some old version by now) then it could turn into an XSS due to a combination of features that all look superficially harmless.
Sure, ERB before RoR essentially had security turned off by default (as I noted). And this issue could happen with any other non-web system, turning into any other kind of vulnerability. This isn't a web problem, it's a system security problem. Bad inputs in a "native" app could lead to security issues in the output of apps on other devices. Badly implemented binary data decoders in a desktop application could do far worse than a XSS in the browser.
This problem is misattributed as a "web problem" because there are far more complete systems on the web than there are on nearly any other platform. It's like the tired argument that Mac is more secure than Windows, but Windows has historically had an overwhelmingly outsized market share, making OS X issues far less valuable to attackers.
> When I argue that the web is unsafe by design, it's because cases like that aren't rare, they're common.
I don't disagree that these issues are common, but I disagree that the web is unsafe by design. The web is a platform. If everyone wrote their Python APIs without a framework, I can guarantee they would be littered with security holes. If everyone wrote their own text renderer in C++, just displaying strings on the screen would be a dangerous task.
There are good tools that make it really hard to fuck up on the web. Seriously, try to accidentally have an XSS vulnerability in an isorendered React app with Apollo. The problem is folks who want to jQuery-jockey their way across the finish line and don't understand that they are making terrible mistakes.
How many developers do you think might have written a web server in their time, or will do in the next 10 years? And how many will pass URL components straight through to glibc for resolution, as is the obvious way to do it, and create an exploitable SSRF vulnerability on their network? How many developers will have even heard of this type of problem?
New ways to exploit weird edge cases and obscure frameworks crop up constantly - it is a full time job even to keep up with it all. At some point you can't blame people walking through a minefield because they keep getting blown up. The problem is the mines.
> this issue could happen with any other non-web system, turning into any other kind of vulnerability. This isn't a web problem, it's a system security problem.
That's just not the case, sorry. Have you ever actually written desktop apps that use binary protocols? It's a web problem:
• It relies on the over-complex and loose parsing rules for URLs
• It relies on unexpected behaviour in one of the most popular web libraries
• It relies on bizarre and unexpected behaviour in XmlHttpRequests
• It relies on the fact that web apps routinely import code from third party servers to run in their own security context.
I have been programming for 25 years and I have never seen an exploit like that before in managed desktop apps using binary protocols to a backend.
> Seriously, try to accidentally have an XSS vulnerability in an isorendered React app with Apollo.
An isorendered React app with Apollo? I think that may be the most web thing I've heard all week ;)
I think I'll take the bet:
That article shows the patterns I cover in my article:
• Buffers can get terminated early, even in a theoretically "XSS-proof" framework.
• JSON can get interpreted as code
• Even experienced web developers can't get it right
If you've never written a desktop app before, I'd suggest grabbing IntelliJ or NetBeans and trying it out. TornadoFX is a good framework to try.
How so? If you allow the user to send arbitrary data, and your handling of that data is where the problem lies, it isn't going to matter whether the client sends a length-prefixed piece of data. You still have to sanitize that data.
HTML, and whether it uses closing tags or not, is pretty much irrelevant to the way injection attacks work, as far as I can tell. Maybe I'm missing something...do you have an example or a reference to how this could solve injection attacks?
It would be interesting to see if this idea could work in practice.
I guess it would have to be protobufs over TLS, and abuse port 443, to get through firewalls from hell.
I feel like this is conflating two different problems and potential solutions.
I'm not saying injection attacks aren't real. I'm saying that whether HTML uses closing tags or not is orthogonal to the solution. But, again, maybe I'm missing something obvious here. I just don't see how what you're suggesting can be done without types and I don't see how types require prefixing data size in order to work.
No it wouldn't. It wouldn't fix SQL injection, and it also wouldn't fix the path bug the OP linked.
The problem is not length, it is context-unaware strings. The problem is our obsession with primitive types pervading our codebases.
Injection in general is simply a trust problem. If you can trust all inputs fully (hint: you can't, because nobody can), then you will never have an injection attack.
If you are exposing code to an untrusted, hostile environment (which is pretty much the web), no language that does anything useful will protect you against not caring about security.
Even if you absolutely need to inject a string in a sql query, sanitizing it is trivial. In .net / MS SQL, a simple x = x.Replace("'","''") does the trick. For any other common data type, strong typing should be sufficient to prevent any injection.
Obviously nobody is going to be typing length prefixes manually, so our tools are going to do it for us.
Now we're back where we started where you accidentally inline user content as HTML, except now HTML has the added cruft of someone's HN comment solution.
But like you, I'm not totally convinced. I think this idea would make it easier for people trying to do the right thing to get it right; but for the blissfully ignorant? Might not help at all. Either way, it needs a more fleshed-out spec.
From the XKCD:
Robert'); DROP TABLE Students; --
SQL solves this already with parameterized queries, and many HTML libraries also solve this in various ways, but if it were instead:
format("SOME_FN(%d:%s)", len(user_name), user_name)
Length prefixes are one way of working this, but only scratch the surface of the issue. As others have pointed out, it's also the fact that the control elements are inline with the data.
Oh thank God. I'm going to forward this to my wife.
Ha ha. I'll get my coat.
Or were senders always going to send true values for length and data?
Really, you can't trust any sender, so the data should be validated anyway.
There have been known attacks where a sender says here's 400 bytes, the receiver stupidly trusted that length specifier, and the sender sends more (or fewer) crafted bytes and BOOM!
Known-good data start and end specifiers, which HTML has, seem a good answer when dealing with untrusted senders (read: everyone).
Yeah, this is why everybody clicks on the comments link first.
I'm not intending to dismiss him outright; he may have an interesting follow-up. I guess I'm just much more optimistic about the web than he seems to be, and more critical of everything that's come before than he seems to be. I think Mike is about the same age as me, and probably has a similarly long history in tech, so I can't really pull the "hard-earned wisdom and experience" card in this conversation. I think I just disagree with him on this, and that's not a big deal.
One of us might be right. (But, I think betting against the web is crazy.)
If the article is right that it is close to impossible to hire a Web developer who understands all Web security issues and knows how to mitigate them, it does not come as a surprise that there is fierce criticism of the article. It basically says you are doing a hopeless job and your employer's business model is flawed.
I'm not a Web developer, but I find the article very convincing. From the headlines I follow, Web programming changes very quickly and the frameworks change all the time. Meaning that smart people are not happy with what is available and keep writing new stuff. Yet I don't think security has been the primary driver for any new framework. They are still parsing text. So let's see whether the author has any fundamentally different approach in his next post (if anybody remembers to read it).
Disclaimer: I work in embedded and our company advertises to be very secure. I know that our security sucks.
I have myself developed GUI applications using the author's beloved C++ and Qt, and I can admit it's a far better designed and more convenient experience compared to the web, but it's hardly possible to achieve the same amount of flexibility in UI/UX design that is available on the Web. I think the fact that things are changing so fast, that standards are badly designed (at least initially), and that there are so many inconsistencies is all only because the web is a fast-moving platform that requires the consensus of many players to happen and move forward. Also, the amount of commercial interest and the number of developers working on the web are incomparable to other platforms, hence the fast-moving nature.
If you take advantage of that flexibility to create a UX that's very different from the standard widgets, it's likely to be inaccessible to blind users with screen readers. Check out this rant on HN from a blind friend of mine (a few paragraphs in for the part that's most relevant to this thread):
As far as I know, the most accessible cross-platform UI toolkit for the desktop is SWT. It uses native widgets for most things, and actually *gasp* implements the host platforms' accessibility APIs for the custom widgets. But, I can hear it now, somebody will say they hate SWT-based applications because they reek of Windows 95. Oh well, fashion trumps all, I guess.
But even Google knew not to depend on the universality of web apps on mobile - they have native apps for both Android and iOS. Aren’t we already at a tipping point where most web access is done on mobile devices?
Edit: Ok, maybe I could have predicted that lines like "HTML 5 is a plague on our industry" would ruffle some feathers. I guess I like a little snark in my criticism.
Fwiw, long live the web. It's imperfect, but it's open. I'll take chaotic freedom to tight control any day.
FWIW, I'd take tight control if it was in pursuit of humanitarian values, such as accessibility for people with disabilities, rather than a company's bottom line. The chaotic freedom of the Web isn't very good for accessibility. Yes, yes, accessibility is possible, but in practice, very often it doesn't happen. See this rant on HN from a blind friend of mine (yes, the same one I posted elsewhere on the thread, but it drives the point of this comment home):
In some ways we've traded speed for productivity.
This tweet is an interesting visual that makes the same point: https://twitter.com/TheoVanGrind/status/888850519564984322
Let's not forget we've drastically increased security by writing applications in safer languages.
Oh, and newer applications tend to support a far wider variety of devices types, displays, inputs, etc.
Developers should definitely be investing a lot more effort into improving the status quo, but it's unfair to claim stuff is slower while ignoring the improvements.
I claimed no such thing. You're arguing against a statement I never made. Isn't that what's called a straw man argument?
I can't comment on most of the Office suite, but Excel evolved quite a bit since 95. Tables, PowerBI, Apps for Office, etc... If your needs are basic enough then even VisiCalc will do the job, but new features do make an impact for more demanding users.
So, the reasoning is that the UI is fundamentally the same as (or worse than, if not done right) native UI from the 90s, yet it hasn't had a massive speed increase, which seems wasteful.
But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either, yet it doesn't feel any faster.
UI is only a small part of an app, a well designed app will have most of the work performed outside of the UI thread and it shouldn't feel any slower than a native implementation. My thoughts are rendering speed isn't the issue but application design.
Sure, and Office in the 90s didn't feel any faster than the word processing I was doing on an Apple II+ in middle school. This is because the people buying (and building) software care about other things than processor efficiency. If it's generally fast enough for their normal use, they won't switch to a competitor.
The notion of "wasteful" here is in terms of something like RAM usage or processor instructions. But the correct measure is user time, including the number of user hours of labor needed to buy the device. The original Apple II cost 564 hours of minimum wage labor, and you were up over 1000 hours if you wanted a floppy drive and a decent amount of RAM. Today, a low-end netbook costs 28 hours of minimum wage labor.
Suppose you managed to put on that netbook something with the efficiency of Apple Writer or Office 4.0. Would anything be better? No, because the spare cycles and RAM would go unused. They would be just as wasted. No significant number of user hours would be saved. Or, alternatively, the in-theory cheaper computer they could buy would save them very few working hours.
As long as the user experience is as good, then the hardware notion of "wasteful" is a theoretical, aesthetic value, not a practical one.
You are also ignoring that a user may want to run a variety of apps without closing any of them or having them swapped out, and pretending the hit on performance, resources, and battery life isn't cumulative.
A user can run a few things even on the low-end netbook. Tabs are cheap. And if they hit the limits of their machine, they can either pay in a reasonable number of user-minutes to actively manage resources or a modest number labor-hours to get something beefier.
I personally would like to see things better optimized. After all, I started programming on a computer with 4K of RAM. But I recognize that there is very little economic incentive to do so.
Isn't this backwards?
Try doing the math here. How much cheaper would a netbook get if every single developer coordinated to reduce RAM and CPU usage? $5? Maybe $10? Looking at market prices, old RAM and CPUs are cheap. They consume basically the same physical resources as new RAM and CPUs, so price competition for not-the-best hardware is fierce.
Now ask those people if they'd pay $5 or $10 more for assorted new software features. Any features they can think of. And keep in mind that in that price range, people are paying $10 more to pick the color of their computer.
So sure, it offends me a little, because I like optimizing the things I pay attention to, like RAM usage. But if instead I optimize for the sorts of things users care about, especially as reflected by what they'll actually pay for, it becomes pretty clear: users don't care about the things I do.
So then the moral question becomes for me: who am I to impose my aesthetic choices on the people I'm trying to serve?
This is especially true as people are promoting everyone moving to a platform that is substantially worse.
How about getting more performance and battery life out of the same machine, which affects more than netbook users?
You may have noticed that we are in the technology industry. That means the final measure of our work is economic. The final judges of our work are our customers.
I wouldn't mind a true low-power laptop which only needed a charge twice a month.
What you propose is interesting nonetheless. What is the most battery life that can reasonably be packed into a device that is modest but still useful?
Evolution of a UI isn't as important as evolution of the features the UI exposes. As for whether it feels any faster, that depends on what you're doing. To give an example, Excel functions can be calculated using multiple CPU cores, which AFAIK wasn't a feature of Excel in the 1990s. You'll only see that speedup if you're working with a large enough volume of formulas. Measuring speed by UI speed alone doesn't get you very far.
All that being said, you won't find me disagreeing with the fact that desktop apps are bloated (web apps even more so). I've experienced responsive desktop apps running on a 7.14MHz CPU. The fact that we've thrown away most of the hardware improvements since the 1980s should be clear to anyone paying attention.
And my point is that web apps have a lot of features that didn't exist back then, and because of feature additions Office and other native applications don't exactly feel snappy either.
That was the general point, but I was responding to a side comment that I disagreed with.
> "because of feature additions"
Adding features does not require slowing an application down. The reason modern apps (desktop and web) are slow is to do with inefficient use of computing resources, which has very little to do with available features.
> UI is only a small part of an app, a well designed app will have most of the work performed outside of the UI thread and it shouldn't feel any slower than a native implementation. My thoughts are rendering speed isn't the issue but application design.
at the start. :) So, we're in agreement.
Or, how much speedup would you estimate, if we convert all GoogleDocs functionalities into Word97? I'd estimate 1000 times. :) Or perhaps, the computation power for drawing a cursor alone will far exceed the whole Word97.
Yes, you have webworkers for multi threaded development. They're basically independent applications which run on different threads and you pass messages (which are simply objects) between them. The browsers themselves are also moving their layout and rendering engines to be multithreaded.
A well designed app would do very little on the UI thread and would pass messages between the UI thread and the webworkers, it would also spin up webworkers on demand to offload work. It's not as easy as some environments to develop in, but it's also fairly straight forward once you make the effort to do it.
If I was designing react for instance I'd have all the virtual dom / diffing stuff being handled by a webworker and then would only pass the updates through to the UI when computation is completed.
> Or, how much speedup would you estimate, if we convert all GoogleDocs functionalities into Word97? I'd estimate 1000 times. :) Or perhaps, the computation power for drawing a cursor alone will far exceed the whole Word97.
Whatever the speedup would be, the users would likely not notice or will only notice a slight improvement.
And yes, drawing the cursor as a 1px-wide div is computationally intensive. I guess you're referring to that article posted on HN a while back, that VS Code used 13% of the CPU just to render the cursor? :) Doing stuff outside of contenteditable is not ideal for writing applications, as you lose a lot of system settings (like keyboard mappings, cursor blink speed, etc.) that the browser automatically translates to the built-in cursor.
Yes, I'm actually referring to this: the programming model. Workers are great if you can divide and conquer the problem and offload (exactly what you have mentioned). But the messaging payload would be high under some circumstances, when you have to repeatedly copy a lot of data to start a worker. I don't have hands-on experience with web workers, but I think it is unlikely the messaging overhead can be solved without introducing channels/threads. Workers are more like processes, and currently they don't have copy-on-write. Of course we may see improvements over time, but this is to gradually reinvent all the possible wheels from an operating system, in order to be as performant as an OS.
> A well designed app would do very little on the UI thread
I partially agree. It may do little, but in turn the consequence may be huge. This is because the DOM is not a zero-cost abstraction of a UI. It does not understand what the programmer really wants to do if, say, he/she is constantly ramping the transparency of a 1px div. Too much happens before the cursor blink is reflected onto a framebuffer, compared to a "native" GUI program. I think it would be very helpful if the DOM could be declarative as in XAML, where you can really say <ComplexGUIElement ... /> without translating it eventually into barebone bits. Developers are paying too much (the consequence) to customize this representation.
> Whatever the speedup would be, the users would likely not notice or will only notice a slight improvement.
There won't be a faster-than-light word processor but I really want it to:
1. Start immediately (say 10ms instead of 1000ms) when I call it up
2. Respond immediately when I interact (say 1ms instead of 100ms)
3. Reduce visual distractions until we get full 120fps. Don't do animations if we don't have 120fps.
4. Have the above requirements always be satisfiable by upgrading to a better computer.
Sorry, but this is absolutely untrue. The Ribbon UI introduced in Office 2007 was a massive change functionally and visually. You went from a static toolbar that would just show and hide buttons to live categories which not only resize but change their options and layout as you customize or resize the window. There are now dropdowns, input fields built in, live previews in the document as you hover over tools and options, and more.
Same for the new Backstage UI introduced in Office 2013 for saving files, viewing recents, and other file and option operations. You have full screen animations and interactions.
Hell, Microsoft even made the text cursor fade in and out instead of blinking, which needs more processing power.
Could Microsoft have optimized it more? Yes. But they definitely have added tons to it since the 90s and even mid-00's to justify why it's slower.
All these points are no different from how web tech is evolving UI, so they should be discounted the same way that web technology is.
There are lots of things they could do. Linking data between spreadsheets or between excel and powerpoint sucks (a significant part of the user base needs to prepare decks and reports that contain lots of charts and numeric tables).
They could learn from Apple's approach with numbers where a worksheet is a canvas on which you can place multiple tables or charts or diagrams, which makes a lot more sense than the single grid per worksheet approach (think having to display two tables one above the other, you are forced to align columns of different widths, and how does the top table overflow?).
Users who need to script or create UDFs are stuck with a VB6 editor that hasn't seen any updates in 20 years and an antiquated language.
I could continue the list for a while. These are basic core features. There might be 1000 people in the world who use Power BI, and only because their IT dept set it up for them. But there are millions of users whose lives would be made easier with the suggestions I made above.
You can do this with Excel also. When was the last time you used Excel?
> "There might be 1000 people in the world who use power BI, and only because their IT dept set it up for them."
The Power BI features in Excel come ready to use out of the box. Clearly you've never used them, but they're by far the best new features in modern Excel. Any power user of Excel that isn't exploring them is missing out.
How do you do that then?
Tables may be fine in Excel for data but useless for any custom logic, which is what I use Excel for the most. I am not aware that tables overflow with a scrollbar like Apple's approach allows. If you need to add more rows to the top table, the bottom table goes off screen. If the top table contains a very wide column, the bottom table needs to have the same column width. These are all inconveniences that Apple's approach solves (and it wouldn't be very hard to implement in Excel while preserving backward compatibility). I don't see how Excel tables solve any of that.
Believe what you want.
> "I am not aware that tables overflow with a scrollbar like Apple's approach allows."
If scrollbars matter to you then you can use Power View, which is one of the Power BI features available in Excel. To get a better idea of how it works, take a look at this short video:
Numerous similar apps depending on what online platform you prefer.
VisiCalc is the first spreadsheet program:
The point I'm making by bringing up VisiCalc is, if your needs are basic enough, any spreadsheet program will do the job, even the first one. You'll only understand why the more modern desktop spreadsheet programs are more advanced if you have a reason to use the newer features.
This is what gets lost on most people.
The power users create some "nifty" spreadsheet that runs some "important" piece of a business. That "nifty" spreadsheet now requires Microsoft Excel and forces everybody in the company to have a copy if they want access to it.
I don't see how this is an argument in favor of the web. If anything, it reinforces the accusation TFA made against it even more.
If "The 90s were slowly reinventing UNIX" then why would be recreating the 90s today a good thing?
If the 90s "slowly reinvented UNIX", then the correct thing to do would be for the web today to either be a fully modern 2017-worthy technology, or at least take its starting point from where the 90s ENDED, not re-invent the 90s.
Since when has an inexperienced mob of people ever done the correct thing on the first try?
And, yet, the mob has continued the very fine legacy of those 90s (and 80s and 70s) software developers in pushing software into more places it's never been before. Somehow, it's working, despite the relative ignorance and stupidity of the average developer (myself included) in their understanding of history.
I think I'm being misinterpreted as saying the web is great because it has no flaws. Which is not my intention. The web has many ugly flaws. The web is great because of what it does despite those flaws. And, also, a lot of those flaws come down to inexperience, which we can't cure with technology. It seems likely it can only be cured by making the same dumb mistakes a few times until it becomes collective wisdom that it was a dumb mistake...the kind that gets beaten out of programmers very early during their learning process.
I guess I'm just more optimistic about the web-as-platform than most. I see all its flaws, I just don't think they should result in a death sentence.
But, if you show me something better, I'll gladly participate.
Now it's slow, burns your battery, it's full of ads/tracking and anti-patterns like infinite scroll or SPAs and view source is useless.
For me, a site like HN or amazon (with some reservations) is the pinnacle of what the web is able to offer.
Except web standards are not created by an "inexperienced mob of people" but by large multinationals, multiple CS PhDs, and seasoned developers.
And if we consider every generation of new developers an "inexperienced mob of people", then we have absolutely no claim to ever being called an industry and engineers.
>And, yet, the mob has continued the very fine legacy of those 90s (and 80s and 70s) software developers in pushing software into more places it's never been before. Somehow, it's working
Working in what? Mobile apps, counting in the millions, have actually "pushed software into more places it's never been before", and most of those are usually native, or done with non-web technologies (of course web stacks encroach there too). For most people, those mobile apps on their smartphones are how they interact with the internet most of the time, not the www, even if they have a laptop at home or at work. For younger people even more so.
>But, if you show me something better, I'll gladly participate.
Better things come from people feeling the need to create them. They don't appear on their own, and people migrate to them. Else people can be stuck with the same BS for decades, centuries or millennia (consider dynasties ruling for centuries before the people of some country attempt to bring them down in favor of democracy).
It is likely to run on over a billion devices, with no installation required. Can a non-web app or native app be better than this?
Complex things are often complex because the work that we do as humans is, well, complicated.
A journey map painstakingly built by an epic designer and smart person at large may design the ultimate document template that addresses every need that you are aware of. Then I come along and want something else.
When the answer is that everything is wrong, the question is usually wrong.
As for the work Google Docs does, come on, it's a glorified Markdown editor; it loses in any kind of comparison with Windows 95-era Word.
The technology to do RTC is not particularly resource intensive on the client side. Nor is it web specific: the native Android versions of Google Docs don't use the web but they do support RTC.
RTC is enabled by an algorithm called "operational transform". It's a very clever algorithm that is rather tricky to implement properly, but it doesn't involve loading huge datasets or solving vast numbers of equations. It's ultimately still just about manipulating text. You could have implemented the client side part of it on Windows 95 without trouble, I'd think. At least I can't see any obvious problems with doing so, assuming a decent Windows 95 machine like one with 8 or 16 MB of RAM.
OT does, however, require the entire app to be built around the concept. You can't easily retrofit it to an existing editor.
The reason Word 95 didn't have Docs-style realtime editing is simply that networks back then were rare, slow, and crappy, and word processor designers didn't know about the OT algorithm because it was still being researched in academia.
The real question is - if we had a better client side platform on laptops and desktops, one that supported some of the best features of the web without the rest, would Docs RTC still be possible? Surely yes!
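To make the idea concrete, here is a minimal sketch of the transform step at the heart of OT. This is my own simplified illustration, insert-only operations on plain strings; real implementations also handle deletes, attributes, and more careful tie-breaking:

    # Toy OT: transform one insert against a concurrent insert so that
    # applying the two operations in either order converges.
    def transform_insert(a_pos, b_pos, b_len, prefer_a):
        """Shift position a_pos to account for a concurrent insert of
        b_len characters at b_pos."""
        if a_pos < b_pos or (a_pos == b_pos and prefer_a):
            return a_pos          # a lands before b; unchanged
        return a_pos + b_len      # a lands after b; shift right

    doc = "kitten"
    a = (0, "my ")     # user A inserts "my " at position 0
    b = (6, "s")       # user B concurrently inserts "s" at position 6

    # Apply A first, then B transformed against A:
    doc1 = doc[:a[0]] + a[1] + doc[a[0]:]
    b_pos = transform_insert(b[0], a[0], len(a[1]), prefer_a=False)
    doc1 = doc1[:b_pos] + b[1] + doc1[b_pos:]

    # Apply B first, then A transformed against B:
    doc2 = doc[:b[0]] + b[1] + doc[b[0]:]
    a_pos = transform_insert(a[0], b[0], len(b[1]), prefer_a=True)
    doc2 = doc2[:a_pos] + a[1] + doc2[a_pos:]

    assert doc1 == doc2 == "my kittens"   # both orders converge

Note how little computation is involved: it's index arithmetic and string splicing, which is why the client side would have been feasible on 90s hardware.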
LaunchPlan2017Q4Final4Draft1Beta.doc with Track Changes on.
There are severe shortcomings in all platforms that have aged. Why does power management in Linux suck so hard? Why can't we have networked filesystems by default (NFS is quite bad, btw)? Until somewhat recently (~7 years), audio on Linux was a disaster: "Linux was never designed to do low-latency audio, or even handle multiple audio streams (anyone remember upmixing in PulseAudio?)". What the hell are UNIX sockets? Is there no modern way for desktop applications to talk to each other? (There have been repeated attempts to move D-Bus into the kernel.) Why doesn't it have a native display engine? (X11?)
Today, it's more fashionable to criticize the web, since a majority of the industry's programmers endure it. Sure, there are some "simple" things that are just "not possible" with the web (everyone's pet peeve: centering). Yes, you lose functionality of a desktop application, but that's the whole point of a new platform: make what people really need easy, at the cost of other functionality. For an example, see how Emacs has been turned into a web app, in the form of Atom. You don't have to write hundreds of lines of arcane elisp, but you also don't get many features. Atom is a distillation of the editor features that people really want.
I don't understand the criticism of transpiling everything to JS; you do, after all, compile all desktop applications to x86 assembly anyway. x86 assembly is another awful standard that has evolved into ugliness (ARM offers some hope). Every platform was well designed to start with, and evolved into ugliness as it aged. We already have a rethink of part of the system: wasm looks quite promising, and you'll soon be able to write your Idris to run in a web browser.
Once upon a time, this was a solved problem.
The author is using "buffer" in a different sense than you are. You're thinking of a malloc'd buffer. The author is using "buffer" more abstractly, to refer to a data segment, such as a JSON or HTML string, or a string of encoded form data. His point is that the latter type of "buffer" has no declared length, and needs to be parsed in order to determine where it ends, and that as a result it is subject to problems that one can term "buffer overrun" by analogy with the traditional C scenario in which one obtains a pointer to memory one should not have access to.
You misunderstood the author's point. Things like SQL injection are really equivalent to buffer overflow attacks -- data creeping into the code because of poor bounds checking.
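A toy illustration of the analogy, using my own hypothetical framing formats (not from the article):

    import struct

    # Length-prefixed framing: the boundary is explicit, no scanning needed.
    def encode_frame(data: bytes) -> bytes:
        return struct.pack(">I", len(data)) + data     # 4-byte length, then payload

    def decode_frame(buf: bytes) -> bytes:
        (n,) = struct.unpack(">I", buf[:4])
        return buf[4:4 + n]

    assert decode_frame(encode_frame(b"a|b|c")) == b"a|b|c"  # delimiters are inert

    # Delimiter framing: the parser must scan for boundaries, so data
    # containing the delimiter corrupts the structure - the textual
    # cousin of a buffer overrun.
    name = "bob|admin"                # attacker-controlled field
    record = name + "|user"           # intended format: "<name>|<role>"
    print(record.split("|"))          # ['bob', 'admin', 'user'] - role smuggled in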
The fix for SQL injection is to work with binary APIs and protocols more. Parameterised queries are the smallest step to that world, where the user-supplied data rides alongside the query itself in separated length-checked buffers (well, assuming you're not writing buggy C - let's presume modern bounds checking languages here). They aren't combined back into text, instead the database engine itself knows how to combine them when it converts the SQL to its own internal binary in-memory representation, as IR objects.
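A minimal sketch of that difference with Python's standard sqlite3 module (the table and the hostile string are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    name = "nobody' OR '1'='1"   # hostile input

    # Injectable: splicing data into the code channel as text.
    #   conn.execute("SELECT role FROM users WHERE name = '%s'" % name)

    # Parameterised: the value travels in its own slot and is never
    # re-parsed as SQL by the engine.
    rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
    print(rows.fetchall())       # [] - the hostile string matches no name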
Another fix is to move entirely to the world of type safe, bounds checked APIs via an ORM. But then you pay the cost of the impedance mismatch between the object and relational realms, which isn't great. I will provide a solution for this in part II.
Most if not all webapp security problems come from attacks on servers, not clients...
It's just one of those assertions that cast a dark shadow over the whole article. But "Flux is Windows 1.0" is my favorite.
Many programs in the 90s, especially of the simple CRUD type, were written in VisualBasic and other RAD tools, as they were known at the time, and later Java.
> Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems.
It's not buffer overrun in the "undefined behavior" sense, but rather problems relating to the need to parse text data, which can be tricky and susceptible to injection attacks.
And, we complained endlessly about how slow and bloated those programs were. So it goes.
Java apps, on the other hand, were slow. Ironically, today we have so many languages producing slow code that Java is considered fast.
Seriously, the reactive frameworks (any, really: React/VueJS/Preact/...), used in tandem with a separate state container (Redux, Vuex, ...), are a much better thought-out approach to application programming than anything in the Cocoa/Swift world.
Back then the compilers sucked. They would accept complete crap code and it would still work. They were like browsers are today. (This is from my experience going through one old MUD codebase.)
Today the song is different. Not only will compilers warn you about many things, there are even tools for static (and dynamic) analysis. So the argument that C (and even the more complex C++) is inherently insecure holds much less weight (just run old code through a static analyzer, or a normal compiler for that matter).
That said there's only one way to write a "secure program", and that is formal verification.
People that talk with a serious tone should back up their claims, at least that's my opinion.
Static analysis helps, but it can't catch everything. I work on a modern C++ codebase, and we still face all of these issues.
Things that are written in C these days are usually written in C for performance reasons. FFmpeg would not have even close to the performance it has if it were written in a memory-safe language instead of C and assembly. I doubt that a magical compiler (and/or language) will appear in my lifetime that can compile high-level code into performant machine code, especially when it comes to memory management. (Note that C also has advantages other than performance.)
JS doesn't even have a proper specification, let alone a bug-free interpreter/compiler.
EDIT: AFAIK verifying memory access is part of formal verification, where memory is also modeled mathematically.
So is C the problem, or is it modern CPU architecture? C has stuck around for so long because of how close it is to assembly language. There will always be a need for a language that is one layer above assembly, and currently assembly is incredibly hard to secure.
C is close to PDP-11 and 8/16-bit computer assembly; it has hardly any direct mapping to modern CPUs.
It is possible, in theory, to write a secure C/C++ application; however, it is not even possible in theory(!) to write a secure web application.
You know that most of today's OSes are written in C or C++?
Also, many higher-level languages are themselves written in C or C++.
Writing secure applications is hard and needs a lot of discipline and knowledge that most developers simply do not have.
Better tools can and need to help here, as do better languages. But it is still possible to write pretty secure and efficient software in modern C++. It is not easy, but it is possible.
What are you basing this on? You can't put Ada, Erlang, Haskell, FORTRAN, etc in the same bucket as C or C++.
And yet, we found good ways to eliminate the most common sources of these problems by using new languages. The web, on the other hand, is an amalgam of several different technologies, and creating a new language won't make it more secure.
You might argue either way, but a straightforward C program can be correct if it is well formulated, whereas a straightforward web app cannot be correct unless every attack vector is fully mitigated.
>C is not impossible to secure
Expert compiler writers and computer scientists disagree with this assertion. History seems to be on their side.
Writing "secure" C requires meticulous attention to detail at every level, intimate knowledge of undefined behavior _and_ of compiler optimization, along with the exact options passed to the compiler. It requires comprehensive reasoning about signed integer behavior and massive amounts of boilerplate to check for potential overflow. It also requires extensive data-flow analysis to prove the provenance of all values (as Heartbleed taught us) because a single mistake in calculating a length leads to memory corruption.
To put it another way: No one can write fully secure C code. It has never been done to date. All non-trivial programs written in C contain exploitable security vulnerabilities. The combinatorial explosion of complexity makes it impossible both to formally verify and to permit human reasoning about the global behavior for all likely inputs, let alone unlikely ones.
I'd say OpenSSH (since SSH2) has a better track record than most webapps, as unfair a comparison as that is. In terms of local robustness, there's seL4, which is also a bit unfair (since it took about a decade for a team of geniuses to prove enough properties to make it probably not very buggy).
I don't disagree with your use of OpenSSH as an example.
Instead of thinking of it as buffers, you just have to encode/decode for the proper environment. Such repetitive stuff is easily implemented in stack layers.
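For instance, a minimal sketch using Python's standard-library escapers (the hostile string is made up) - one encoder per output environment, applied at the layer boundary:

    import html, json, urllib.parse

    user_input = '<script>alert("pwned")</script>'   # hostile input

    html_safe = html.escape(user_input)          # for an HTML context
    json_safe = json.dumps(user_input)           # for a JSON context
    url_safe  = urllib.parse.quote(user_input)   # for a URL context

    print(html_safe)   # &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;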
Memory-unsafe programs on the desktop should go the same way as the HTML layout model.
Check out Yoga. It's a small layout engine based on flexbox and the CSS box model. It doesn't cover all use cases, but it's pretty powerful for its size.
It's important to remember that CSS and the DOM were initially created and developed with certain kinds of documents in mind. Both are certainly quirky and missing a lot of features, but I wouldn't say they're as bad as many people make them out to be. Based on my experience with native desktop toolkits, they're all quirky in one way or another. One of the biggest issues with modern CSS is that it doesn't have sensible defaults for web apps.
Could you provide an example of your preferred approach to handling layout and styles, and talk a bit about why you consider it superior?
What key features do you consider missing from CSS and the web?
Also, CSS lacks properties for controlling wrapping limits and non-linear image scaling. And for some reason I always have to optimize on either width or height; I can't control both perfectly.
I'm unclear on what you mean by wrapping limits and non-linear image scaling. Could you provide an example of what you'd like to achieve?
As for having to optimize for width or height: have you looked into display: grid? I believe it may help enable the kind of layout you're interested in achieving.
Of course, there are books and guides to help people, but how would someone figure out which guides are worth it? There are a lot of highly rated books on the topic of web development and if you don't already know what you need, it can be daunting.
But yes, flexbox is great.
The same idea may be applied to an operating system's ability to allow a user to operate on their machine.
Edit: It would be useful to consider why the need for a universal interface to the internet was originally sought out.
I think, here, we might be looking at it through the wrong lens. I'm unable to find the right words to say this; let me just say the statement feels ungrateful. The web is the largest and fastest-growing ecosystem of software we have right now (consider community size, or the number of projects on GitHub in JavaScript, CSS, and other web technologies).
You're comparing what is with what should've been. By that measure, any human activity will fall short of not only your expectations but anybody's.
You'd think that getting sane layout control would be easy, but apparently it's not. Getting a lot of humans to agree on a fast-growing technology is hard, it seems.
PS: I'm not saying "nothing could've been better, be happy with what you have", not at all. I'm just saying this seems like complaining, and a better approach is to try and make it better.
I needn't have written up a long tirade for such a simple statement. I see that this is the same sentiment espoused by several others in this thread, and thought I'd try to provide a different perspective to look at this from.
Flexbox is essentially an import of those concepts to CSS. There are no new ideas there.
But now flip it around and try to make a beautiful, responsive document in Swing or GTK. The layout managers that make them so great for laying out UIs won't help you much there. They can do it, they have layout managers that operate somewhat like a CSS box flow, but it won't be as natural or as easy.
So it's worth considering if it's easier to evolve HTML towards sane layout management for app-like things, or GUI toolkits towards sane layout management for document-like things.
Microsoft stuff was going for fixed screen size/resolution, fixed layout, and using a quite limited set of controls.
Web browsers try to be accommodating by default - any screen size (including mobile), zoom built in, and significantly more powerful control primitives that allow enormous flexibility in the way to design things.
If you're building forms applications that only need to work on a PC, the old way was certainly easier, and in fact, Microsoft has WebForms (regular ASP.NET - not MVC or API) that is pretty similar (and doesn't horribly break down so long as you color within the lines, so to speak).
Try to imagine your VB6 app being able to scale down to a window the size of a phone screen, and how the WYSIWYG editor for that would even work - I imagine it would be fair to describe it as "hot garbage" also.
The web works across everything with little to no extra effort, whereas a native app built with a WYSIWYG UI builder is going to be constrained to certain hardware and take extra effort to handle display variations.
You can certainly handle different window sizes with traditional UI layout managers. The only thing they don't do much of is totally changing the entire UI layout based on window size, and that's only because it's so rare to have a single app that's actually identical between tiny and huge screens.
I say this because it took me zero effort to use due to how intuitive it was to get started...
MS tools are still here for those of us doing native Windows development.
Also the Apple and Google GUI tooling for their mobile OSes are quite good.
Much of the complexity of web design is not in the tools; it's in the fact that users don't expect any standard whatsoever; they just expect their UIs to be as slick and custom-designed as magazines. If every website were written using the same standard, predefined set of widgets and components, the complexity would disappear.
Amazon is also king of providing value in their markets, and their markets are also apathetic toward flashy design. I don't need animations when I am provisioning an AWS instance nor when I am buying goods at the lowest possible price I can find.
However if I were not me, and I were shopping for luxury or boutique goods, a site that looked like Amazon would not instill me with confidence.
My point is just that the web has diverse design and UX needs, and the current toolset caters to that. If someone managed to build a platform with those benefits and more, plus the web's market penetration, then I would be on board.
I will still argue that it is necessary if you want an alternative to the web, as the alternative has to be a better value proposition for the end user, not the developer.
But it will need to be the same experience that the client asked for. Coke is never going to ask you for a react website with webpack tooling and a lambda backend, they are going to come to you with some grand vision of an application that their marketing team imagined in the shower months ago and has been workshopped into a mess. You may or may not be able to deliver that with simple HTML and CSS.
I am also keen for the web to move toward some kind of stability in technology as well, the churn and wheel reinvention factory that we currently have is creating a bit of a mess but I don't think it's worth throwing the web away just yet.
Users expect every website to have a unique identity (unlike anything built with WinForms), that is what creates the complexity.
If you actually use something like bootstrap, your website will look unoriginal, but it will be dead easy to make.
yarn/TypeScript and (though some days I hate it... it has gotten better) webpack largely make it feel sane(r).
That said, getting to a point where I was comfortable with all three was insanely more complex and time-consuming than picking up Delphi 6 was in the early 2000s.
Shrugs, the beast is what it is until someone does something better.
This lack of support for anything other than hardcoded absolute layout is exactly what made it so simple and easy to use. It's the equivalent of doing document layout by padding with spaces - it works for simple cases, and it's very easy to teach people, but it's a mess for anything even remotely complicated.
That's largely a non-issue to me. If I need anything fancy, I'll draw it myself. The simple stuff ought to be simple.
> As a result, things break as soon as you try to make an easily resizable window
Au contraire! It is much easier to make a resizable window when you are in full control of how nested widgets are resized along with it. That being said, some automation is fine (e.g., how MFC resizes views in response to their parent frame being resized) as long as simplicity isn't lost in the process (I'm looking at you, CSS).
It's just that nobody wants to make a Win32 style app with absolute positioning on the Web. That's because responsive apps are superior to nonresizable, manually positioned UIs.
But it doesn't solve the problem with high DPI, changing fonts, and localized strings being sometimes significantly longer, requiring widgets to be resized to accommodate them.
When I select some languages (I am not a native English speaker, and my native language, Dutch, is not very high on the list of priorities for most companies) on the sites of some of the biggest companies in the world, you notice the design just wasn't made for them: everything from text wrapping and enlarging in ways that break the design, to text simply sticking outside its box.
For some localizations (Chinese, for one) you will have to redesign anyway, because 'our' designs (not sure how else to describe them) simply do not work/sell over there.
Most global companies have a local presence doing their local sites; I know some very big companies, even inside the EU, that have a site per country and have the HTML/CSS look 'the same-ish' to the user but completely different when you check the source, to accommodate local taste and language.
I like the dream of this working, as I am a programmer, but I don't see it in real life, and I find HTML/CSS just painful to work with; not difficult, but painful compared to most desktop GUI tech. Flexbox etc. are changing that a bit, but it still looks like people are shoehorning everything into this HTML5 stuff just because they desperately don't want to use/learn other things, instead of using the best tool for the job.
Disclaimer: I am old and have seen this before. I do create webapps and use React (new license makes it workable outside hobby projects), but I will gripe about it like the author of the blog post.
Think about the user changing the default UI font. OS X and Windows both make it difficult to impossible, and for this exact reason. On Linux, though, it's common and expected (which is probably why all the UI frameworks that target it have decent dynamic layout support).
But aside from font family, there's also the issue of font size. That one can be cranked up on high-DPI displays, or for accessibility purposes.
> I find html/css just painful to work with
Don't get me wrong, I'm certainly not praising HTML5 and CSS here. They're vastly overcomplicated for what they do, for app development. And layouts are a long solved problem in desktop UI frameworks - Qt, Tk, Swing, WPF are just a few examples. WPF in particular is a good example of an XML-based markup language specifically for UI, and it's light years ahead of HTML5 in terms of how easy it is to achieve common things, and how flexible things are overall.
If even half the time and energy invested into building "web apps" (including all the Electron-based stuff) went into an existing UI framework - let's say Qt and QML - we'd all be much better off; developers with far more convenient tools, and users with apps that look and feel native, work fast, and with smaller download sizes (because you aren't effectively shipping the whole damn browser with them).
This is why I had big hopes in XHTML and the XML components, but then we got HTML5 instead, yet another pile of hacks.
I find it fun to write Cocoa apps too, and I do on occasion for throwaway stuff that only I am going to use. But too many people (including me, at home!) simply don't use Macs. When I have to write a portable app, the choices basically come down to GTK+ (doesn't look native anywhere but GNOME on Linux), Qt (requires C++ plus moc and doesn't always look native either, for example on GNOME), or writing everything from scratch for every platform. While the last choice may be the "right" one from a purist's point of view, the extreme amount of work necessary to make duplicate Windows/Mac/Linux (often plus Android and iOS) versions makes it all but out of reach for anyone but big companies.
With the switch to Win32, the tools became VB, Delphi, Smalltalk and Visual C++ with MFC.
Like every Windows developer I also own the Petzold book, bought for Windows 3.0 development, and another good one from Sybex, probably the only book that ever explained how to properly use STRICT and message crackers, introduced with the Windows 3.1 SDK.
However I might have written about five applications in pure Win32 API instead of using one of the former language/frameworks, as requirement for university projects.
In general, I think many developers have only the bare-bones native experience, without making use of proper RAD tooling, or they know the UNIX way, which has always had pretty bad tooling for native GUIs versus Mac and Windows, or even OS/2.
It's what I imagine a reasonable HTML/CSS would look like.
At which point you can basically throw the designer away, since you'll be writing code to manage layout for all widgets anyway.
My day job is to implement a commercial ERP system that has never been and probably will never be localized.
All software I use on a daily basis is English-only, even when localized versions to my native language exist, because:
(0) The translations are absolutely horrible. Who in their right mind would think that they are actually “helpful”?
(1) Even if the translations weren't horrible, the extra complexity simply isn't worth it. (Admittedly, my tolerance for system complexity is rather low compared to most other users.)
So, from my point of view, when you talk about localization, you might as well introduce yourself as a visitor from a parallel universe (where localization is presumably useful).
Go download NetBeans and create a Swing UI in Matisse. You'll find these problems simply don't arise. You can drag/drop and end up with a flexible, responsive layout that can handle things like strings changing length due to localisation. You can do the same with Scene Builder for JavaFX, although it's not as slick as Matisse. Or even Glade, if you're more of a UNIX person. The latter two tools require you to understand box packing but allow for a relatively responsive layout.
The thing they don't do is let you totally change the layout depending on window size. But that's a fairly easy trick to pull off by just swapping out between different UI designs at runtime. There are widgets that can do this for you.
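For illustration, a toy sketch of that trick with Python's Tkinter (the 400-pixel breakpoint and three-pane layout are arbitrary, my own choices):

    import tkinter as tk

    root = tk.Tk()
    panes = [tk.Label(root, text="Pane %d" % i, relief="groove") for i in range(3)]
    current = None

    def relayout(event):
        # Children also generate <Configure>; only react to the toplevel.
        global current
        if event.widget is not root:
            return
        mode = "wide" if event.width > 400 else "narrow"
        if mode == current:
            return
        current = mode
        for i, pane in enumerate(panes):
            pane.grid_forget()
            if mode == "wide":
                pane.grid(row=0, column=i, sticky="nsew")   # side by side
            else:
                pane.grid(row=i, column=0, sticky="nsew")   # stacked

    root.bind("<Configure>", relayout)
    root.mainloop()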
But yes, these days, people do expect windows to be always resizable and that does add some complexity.
Data binding is better in that regard, but once you start doing complicated nested data bindings, it's rather tedious to do it in the designer (because you can't just bind to "A.B.C" - you have to set up a hierarchy of data sources).
Worse yet, you start hitting obscure bugs in the frameworks. Here's an example that I ran into in a real-world production WinForms app ages ago (side note: I wasn't an MSFT employee back then, so this was an external bug report): https://connect.microsoft.com/VisualStudio/feedback/details/...
Having said all that, the aforementioned app was written entirely in WinForms, using designer for all dialogs (of which it had several dozen - we used embedded controls heavily as well), with dynamic layouts and data binding throughout. And it did ship successfully. So it wasn't all that bad. Still, not the kind of experience I'd want to repeat, when I can have WPF and hand-written XAML.
Exactly. At least 90% of the functionality of my forms-based applications uses nothing more than the standard UI components Tk provided in the early '90s. Why the web of 2017 still cannot grasp this is unfathomable. To be perfectly honest, I've never seen any toolkit match the productivity of Tcl's Tk of more than two decades ago, and it's even better today.
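For comparison, here's roughly what a trivial resizable form looks like through Python's standard Tkinter binding to Tk (the field names are made up):

    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    root.title("Contact")
    root.columnconfigure(1, weight=1)          # let the entry column stretch

    for row, field in enumerate(("Name", "Email", "Phone")):
        ttk.Label(root, text=field).grid(row=row, column=0, sticky="e", padx=4, pady=2)
        ttk.Entry(root).grid(row=row, column=1, sticky="ew", padx=4, pady=2)

    ttk.Button(root, text="Save").grid(row=3, column=1, sticky="e", padx=4, pady=4)
    root.mainloop()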
Proprietary low-code tools built on top of the web are a better starting point.
Good lord I hate this buzzword bingo. How the hell are proprietary low-code tools better? In what world is that a sane response?
Meanwhile, I've struggled to get things looking good with GTK+ or Tcl/Tk, especially when the UI I'm trying to make is dynamic. The tooling has never seemed very conducive to "fit content"-style UIs.
That's where I still run into problems with CSS too. However, at some point, and not because I started using flexbox / grid, CSS did click for me and now it's mostly second nature to get the layout that I'm going for.
My feeling on this whole topic is that while as a web developer I have often thought "there must be a simpler way", every time I actually start to imagine what that would look like I end up re-imagining something similar to the web stack as it is now. There is a lot of inherent complexity to GUI-based networked client-server applications that need to be responsive, continuously integrated, database-backed, real-time, etc.
This is a very dangerous assumption. The interpreters you use have not been built with security in mind.
Go take a look at PHP changelogs for example.