Local operating systems, which this essay calls the gold standard of feature sets, are not standing still. If you think about how long it will take browsers to reach parity with the OS as it exists today, go back that same amount of time and see how far the OS has come since.
But let's suppose, for a moment, that browsers catch up. Let's just look at native development in general, since it has had access to all local OS features since before the web. Has it stood still? It's changing all the time. We're finding higher abstractions to work with. We're making it easier. I highly doubt Grid and Flexbox as they exist today will still be how web layout works in 20 years.
Google isn't even the first to scare the world. This essay could've been written in 2002...about Internet Explorer. Stop laughing. It held even more dominance than Chrome does today, and was closed source. Somehow we survived...which is good, because VRML sucked.
However, even with all that aside, people have been rolling their own browsers for years. People fork Chromium today. The dark web exists. Maybe in 20 years lots of people will be streaming everything from a private cloud supercomputer, a la Stadia. The internet is no more homogeneous than all the people of the world. Everyone is finding new ways to access it all the time. This is why open standards are more important than browsers: they mean anyone can build their own at any time based on the documentation, and it can all read the same documents.
So I wouldn't worry about this coming to pass in quite the way you think. Though running your browser in a VM isn't the worst idea.
I was reading a whitepaper from ~1997 on Microsoft's Distributed COM (DCOM) last night, and as I read it, I couldn't stop thinking: this is microservices. And it was really more than that -- at least on the surface, this was microservices + orchestration + autoscaling + serverless, as well as, via Internet Explorer, component-based UIs with graceful degradation. That paper seems like a window into an alternate reality in which everything that's currently hot in tech was done 20 years ago, except programming-language-agnostic, using saner (i.e. binary) protocols, offline-first, and with advanced security built in from the start. I don't know why this is not our "real" reality, but I finally understand why MS was in a dominant position back then, and why IE was a hot thing.
EDIT: for the curious, the mentioned whitepaper is: https://www.softwaretoolbox.com/dcom/DCOM%20Technical%20Over....
Honestly, this happens all the time. Not only do we seem to reinvent the wheel, we almost invariably reinvent it worse. Take 'dark mode', for instance. Everybody is delighted that they can switch to a different theme that they feel is easier on their eyes, apparently oblivious to the fact that we could do that in Windows 95 with significantly greater and more granular control at the OS level. This industry is overrun with complexity fetishists who keep piling on more and more abstractions and call it progress when they manage to catch up with the less-abstracted past.
There will always be a lag between the two, and it won’t just be ‘reinventing the wheel’.
I had a look at CORBA and it was horrible in many ways. Interestingly, it's something that works and is proven at large scale, but nobody would want it for small/middle-scale, non-enterprisy applications, and it might not even be a good idea at large scale depending on what properties are valued. In no way would I see Kubernetes with services as a worse version of it; we didn't choose it just out of ignorance or hype.
Same with "dark mode" on Win 95. Most programs couldn't even properly handle text sizes; it was an i18n nightmare that was just brushed under the carpet. Changing text and background colors was the same, with some apps somewhat working because they mostly relied on default widgets, and a mess of unusable apps on the side that just had text colors baked in. You had a better chance of having it work by plain inverting all colors through the accessibility settings.
To be fair, it's still partly a mess now, but we are in a way better situation than in Win95: dark mode is understood at the application level, and it actually makes sense now.
There's a deep conflict between the interests of users and software vendors, a conflict for control over presentation. It's blindingly visible on the web, and half the reason it's in the shitty state it is, but it also applies to desktop software. The problem is that, for a vendor, an application's UI and UX are marketing space, and they want total control over the "user's journey". For the user, UI/UX are things that stand between them and the work they want to get done, so they want them to be effort- and bullshit-free. For non-tech-savvy users this implies "consistent with all other software they use"; for power users, it means "able to be altered or removed" - but both of those viewpoints require the vendor not to have the final say about presentation.
In the context of dark mode, OS-level control over UI styling is a user-friendly feature. Unfortunately, it got defeated by vendors who want the UI to be their branding space, and now they have to begrudgingly add dark mode themselves.
EDIT: see also https://news.ycombinator.com/item?id=24918296 upthread, it seems to be a quite accurate description of what will happen if we let the vendor side win.
Most people don't want "dark mode" as a pure accessibility feature, in the sense of "whatever color scheme is fine as long as it's dark everywhere". That's the color-inversion feature in most OSes.
Instead, I/they want dark themes, but with pleasing colors, adjusted to the screen and the widgets, with sane defaults for content that is usually supposed to be white, with exceptions.
Yes, that’s a tall order, it’s context dependent, and we might actually want to decide app by app if the dev did a good job and/or if dark mode makes sense, and sometimes change the theme in the app because it’s otherwise not great.
We end up with a patchwork of apps following dark mode and others not, with different themes applied that might not be triggered by the system.
For instance I use youtube, reddit and editing apps in dark mode only, while the rest of the system is in light mode. As you say it could be handled by the system in a super granular way, but it would add that much more complexity, and I actually chose the exact themes to be used in most of these apps, so centralization wouldn’t help anyway.
Yes, in terms of raw flexibility exposed to the user, customizable color schemes look great on the surface. They work to some degree, but fail utterly for a wide range of cases in practice. Not having them is progress, after a fashion, because it means giving up on a flawed idea. I will admit, though, that it is not easy to see why it is flawed until you have tried to design custom controls that have to convey more state than can be expressed using the standard OS color palette.
I worked for a company whose software was/is based on CORBA and let me tell you, that thing really smells like microservices-before-it-was-cool.
You had a naming service, and each component would connect to the naming service on startup and resolve the endpoint for the components it needed.
And each piece of software had a component called ORB (object-request-broker) that was used to represent remote stuff (remote objects/endpoints). You could invoke method calls on this object (kinda/sorta, i'm making it simpler here) and it would forward the call in background, hiding the networking/serializing/deserializing details.
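The lookup-then-proxy pattern described above can be sketched in a few lines of Python. This is a toy, not real CORBA: the service name, endpoint, and `RemoteProxy` class are all invented for illustration, and the "transport" is faked in-process.

```python
# Toy sketch of the ORB pattern: resolve an endpoint from a naming
# service, then call methods on a local proxy that hides the
# serialization and forwarding. Names and endpoints are hypothetical.
import json

# Stand-in for the naming service: component name -> endpoint.
NAMING_SERVICE = {"billing": "tcp://10.0.0.5:9001"}

class RemoteProxy:
    """Represents a remote object; method calls are serialized and forwarded."""

    def __init__(self, name):
        # On startup, resolve the endpoint via the naming service.
        self.endpoint = NAMING_SERVICE[name]

    def __getattr__(self, method):
        # Any unknown attribute becomes a "remote" call.
        def call(*args):
            request = json.dumps({"method": method, "args": list(args)})
            return self._transport(request)
        return call

    def _transport(self, request):
        # A real ORB would send this over the wire and deserialize a
        # typed reply; here we fake the server side in-process.
        msg = json.loads(request)
        return f"{msg['method']} handled at {self.endpoint}"

billing = RemoteProxy("billing")
print(billing.charge(42))  # prints "charge handled at tcp://10.0.0.5:9001"
```

The point of the pattern is the last line: the caller just writes `billing.charge(42)` as if the object were local, which is exactly the "kinda/sorta" method invocation described above.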
That whole thing kinda reminds me of protocol buffers, etcd and microservices. By using a cluster manager (Veritas Cluster Manager) you had service groups, something that looked like a pod, in a way.
It's been a while, but I've started thinking that there's nothing really new. The problems we're facing are pretty much always the same, it's just that the solutions improve on different areas. Kubernetes for example helps a lot on the scaling problem and the standardization (both ops team and dev team have a common lingo, the kubernetes object model -- pods, deployments, services, requests&limits etc).
Just skimmed the article... Yup, DCOM was a competing alternative to CORBA.
If someone older than me would like to chime in and add some details, that would be lovely.
After fighting CORBA for a while, KDE said this is too complex/slow and wrote DCOM (I might remember this name wrong), which gave them the parts they wanted without the overhead. GNOME stuck with CORBA longer, but I get the sense that they half agreed and were just hoping they could find a solution without abandoning CORBA (GNOME also had more corporate backers even in those days, so there were probably non-programmer architects demanding they make it work). Eventually both agreed that they needed something more lightweight to solve the desktop's needs, and D-Bus came along.
Also, iirc, KDE's thing was named DCOP
If they had wanted interoperability, they could have implemented something more standards based as IBM did with SOM. But requiring Windows for all parts of distributed systems is very limiting.
Around 2000 or so, I was installing Visual Studio -- the pre-.NET version -- on a number of new developer workstations. I was not a Helpdesk or Desktop tech, but worked supporting servers -- they sent me over to do the installs because invariably "something wouldn't launch". Different IDEs were used depending on which language you were writing in, and the one for Visual Basic was a monster of an app. That program was 5-10 times larger than the largest installed application and came with 2 CD-ROMs' worth of documentation that needed to be installed locally. I remember I'd start all of the installs at the same time because they took several hours to complete.
 If it isn't obvious, this was a one-off job that I don't recall very much about. I didn't start using a Microsoft IDE until the first release of Visual Studio .NET (retroactively renamed "Visual Studio .NET 2002", I think).
 For the younger among us, that's about 1.5GB in a world where two years earlier 16GB of HDD was very expensive.
> saner (i.e. binary) protocols
.. which only had one canonical implementation on each end.
If I remember rightly, the protocol for accessing Exchange from Outlook was MAPI over either DCOM or DCE/RPC. Despite some reverse-engineering efforts, this is why there were no open-source drop-in replacements for Exchange, and clients accessing Exchange always did so over IMAP, thereby losing features.
Similarly, Samba speaks DCE/RPC. https://wiki.samba.org/index.php/DCERPC
COM sadly has a number of unfortunate characteristics, though: it's quite complex to learn, and it's not actually very secure (deserialization of complex types in C, anyone?)
But I think it's a wonderful technology.
I was using encapsulated components with internal state and defined interfaces more than two decades ago and it was old technology even then.
The web is a funny place: they grab something and claim it's revolutionary, but often forget to mention where it came from.
It's not a bad thing, but I have often joked that if you want to see what is considered state of the art in year X, go see what the academics were doing in year (X-20).
It's nothing like COM, but a vision of what modern browser engines could turn into (focusing on the properties you have mentioned).
We should not let "them" dictate dumb clients (and apps) to us that consume from the fog that "the cloud" is.
This is a corporate trend that will take a lot of freedom from us, and if we let them have their way, it will be pretty hard to get out of the nightmarish future we are heading toward.
For some reason that I fail to understand, it seems that most computer protocols and most computer file formats are designed under the unstated assumption that those using them will not have access to a computer.
How much did it, though? Strictly speaking, an OS is an abstraction on top of hardware that applications are written against. That's it. OSes of 2007 did a good job of abstracting the hardware away from applications. They also weren't as user-hostile as the modern ones, so one could argue that they were better. Operating systems had been feature-complete for quite a while. Those yearly updates serve no practical purpose.
The original author is arguing there will be a point where there's simply nothing new to add to the browser, because it will do everything the OS does. There are any number of ways to argue that's just not how it works.
An aside: as someone who comes from Android app development and now makes a hobby project that includes a web interface, making the browser do what I want it to feels like I'm building a UI in a Word document, complete with macros. The ability to have this free-flowing text that isn't contained in anything, this whole concept of "inline elements" still hurts my head. I'm too used to real UI frameworks where any text is only allowed to exist inside a TextView.
If you were on Windows it was tricky then as you had to download and configure your own tcp/ip stack. That was possible due to the ingenuity of Peter Tattam who sold software that let you do that. I met him back then and to some of us he was a true rockstar programmer.
Of course there's a wikipedia article on this, which counts 10k+ web sites at the end of that year:
For most applications you would dial up a remote system anyway so many used the web by way of plain terminal software. The Lynx web browser had existed for just over two years by then and it wasn't the first web client.
For example, see "The Original HTTP as defined in 1991": https://www.w3.org/Protocols/HTTP/AsImplemented.html
It may not have been ubiquitous as it is today, but the web was definitely not out of reach for normal people back in '94.
What transformed the web was the launch of Mosaic along with NCSA HTTPd -- mostly in the back half of 1993 -- and it took off like wildfire. It's hard to remember dates, but I definitely remember the launch of the Netscape browser -- which according to Wikipedia was late 1994 -- and at that point the web was already pretty popular.
Not only did people use the web, a large percentage of them had their own websites - it was ridiculously easy to do - just put an index.html in your ~/public_html folder.
Obviously this was all in the rarefied atmosphere of academia - the computers were Unix workstations, and most users were at university, largely PhD students at that.
I think it was Christmas break 1993 when a high school friend came to visit (he went to a different college) and started up Mosaic. I asked him what it was and then he laughed that I was still using Archie and FTP to find stuff.
I think it was xmosaic version 0.9 or something like that.
Your comment doesn’t seem accurate, it was starting to catch on pretty aggressively by 1994.
My first experience was sending an email with the url I wanted to access to a server that would send back the hypertext document a few minutes later.
Edit: Something like https://www.mcall.com/news/mc-xpm-1996-02-20-3077566-story.h...
In 2003, we got broadband internet at our house. Before then, when we only had dial-up, only a single computer in the house could use the internet. Broadband brought with it external modems you could connect to via wifi, rather than modems sitting inside your computer, and this meant you could use devices other than the primary family computer to browse the internet -- assuming you had more computers in your household than just the one.
The iPhone came out in 2007, and is generally credited with ushering in the smartphone revolution. The iPad came out in 2010, bringing touchscreen interaction with the computer to a somewhat larger form factor. From observing my 3-year-old nephew, his ability to meaningfully use a computer via keyboard and mouse is quite limited. And I'm not sure he's all that capable with the touchscreen either; he still hasn't understood that the reason he can't see me is that my phone doesn't have a camera, and he tries pushing stuff on the phone to make the camera turn on, inevitably ending with him pushing the "end call" button.
I can totally buy a 10-year-old in 2007 who had never used the internet, especially if they came from a poorer household that didn't have the ability to keep up with technology and lived in a school district that didn't have the ability to give all their kids laptops.
 This does also show my age, as I never experienced the age of external modems connected via serial cables.
The age of 13 seems to be a common thread with a lot of developers -- I know of 5 others who will point to that age as "when they started writing software". It was at age 13 that I started assembling and selling PCs (in the 90s) and began exploring software development, and before my 14th birthday I had rewritten a BBS application in Pascal.
Like you, I had been messing around with my (horrible) computer since it arrived in our house and I was presented with an `A>` with no clue how to proceed, but it wasn't until I was 13 that my Dad presented me with a plan and a book to fix that problem. I wanted a 386 so I could play Wing Commander (and see more than the 4 colors that my CGA display allowed -- loved the LSD trip that games like Kings Quest would put you through while trying to render decent graphics using practically nothing, but I was ready to be done with that).
I remember thinking my dad was insane -- he owned a business selling things to large automotive manufacturers ... nothing even adjacent to computers. And a computer was easily the most complicated consumer electronics device a person could own. But he had fixed our 8088 a few times, and, being generally the most "handy" person I've ever known, his response was a dismissive "they're all just a bunch of parts that you buy and put together". And while I'm pretty sure he helped me build/troubleshoot the first couple of them (we had some bad components early on from a vendor in California), I generally built/tested/assembled all of the few hundred that we sold myself. If you want a 13-year-old to behave more like a "Grown Up", well, when your Dad's friends are paying you thousands of dollars to build them a computer, you start to act like someone who's worthy of receiving that kind of trust.
The best part: we built the first several computers to pay for the one we wanted for ourselves. My dad owned his company. I'm willing to wager that he could have afforded to just buy the thing for me and probably wouldn't have felt all that guilty about spoiling his kid since I was using it 99% of the time to do things other than play games. But this served as a forcing function -- I got the exact computer with the exact specifications that I wanted without having to think about the cost of those things provided I could sell enough computers to purchase it. I did. And I got to buy a convertible for my first car with money I had earned.
I'm seeing it now, in my own kids, two of whom turn 13 in the next few months. They're working with Roblox's developer tools making new worlds, and I'm staring over their shoulders thinking: this application is easily as complicated to use as any of the design tools I've been exposed to, and these kids are just ripping through it like it's "not work" (if only they were getting paid!).
I've been looking for things to motivate them to step out of the "game world" and realize that the things they're doing could be done outside of Roblox -- they could make real applications, but much like I didn't believe my dad when he handed me that book, they haven't had that "ah ha" moment, yet.
 My friends all had Commodore 64s and such that played really cool (if slowly loading) games and cost a few hundred dollars. We had a $3,000 IBM 8088 clone with a 10MB hard drive and floppy drive that ... wasn't fun.
 I can't remember the title completely but it was something like "Build an 80486 PC and Save A Bundle"
 My 80486, if memory serves, cost just shy of $10,000. We went with what would be called "server hardware" these days: SCSI over (very new) IDE or (very old) MFM -- we went SCSI with a full height 360MB drive (40MB drives were huge), a very nice video card and a 18" CRT display (14" was 'normal' with anything above starting at twice the price of the best 14" display). And two modems... I ran a multi-node BBS on custom software originally based on Telegard.
But there will always be special purpose browsers. There is "TOR browser" today, and we will have "vintage web browsers" the same way we have CLI/TLI browsers still today (thinking of elinks and lynx), or the same way as there are still FTP clients and Gopher clients.
You can get one issued for free just by demonstrating that you actually control the domain.
So yeah, even today there are use cases for custom CAs.
It also doesn't really have the problem of "unapproved" sites. You are already installing a certificate on the client. It's no extra work to add another trusted CA at the same time.
No, I can’t agree with that because it hasn’t ever been true. For its first decade, it was the place where most relevant standards were proposed, but that still didn’t define the web; you had to look at what browsers did rather than what specs said, especially in such days as IE’s domination, because it varied quite substantially from the “standards”, and more importantly people depended on these variations and extensions to the “standards”.
But then W3C tried to take the web further in a direction that the actual people with power (browser manufacturers) didn’t like—more XML everywhere, and little to no activity on HTML itself—and so they split and formed WHATWG which has since taken over progressively more and more of the actual important standards (e.g. HTML, DOM, URL, Fetch, Encoding), so that W3C itself is increasingly irrelevant, though there are still important things that are done through W3C, e.g. the CSS Working Group (CSSWG); and if you’re willing to stretch it, you could count the Web Incubator Community Group (WICG), but that’s more hosted by W3C than part of W3C.
Basically the W3C overplayed its hand. It thought that by rewriting HTML as XML it could force Apple/Opera/Mozilla to rewrite their engines. In the end, the browser vendors got together and created a new body that would be responsible for HTML. They've skirmished for control, but there's no question WHATWG and the browser vendors are in the ascendancy. From 2007-2019 there were two competing HTML standards, but since the browser vendors were on WHATWG, they would just implement their own version, making it the de facto standard. So, in 2019 the W3C announced they'd just use the WHATWG HTML.
I expect that if the W3C pushes anything the vendors consider too arduous (e.g. a rewrite of CSS into constraints), WHATWG would create a CSS working group and the process would repeat.
This reminds me a lot of the power of the Supreme Court, specifically President Andrew Jackson's quote when the court made a ruling he didn't like: "[The Chief Justice] has made his decision; now let him enforce it." The stakeholders of WHATWG have an army (of browser coders), whereas the W3C needs to rule by consent of the ruled, like the Supreme Court does.
Yes I think that's the impression they try to create.
There will be three browsers: The "Safe Browser" which will be used by 85% of people, the "Smart Browser" which will be used by 5% of people, and the "Rebel Browser" which will be used by 10% of people, most of them in their teens and 20s. They will each have different skins (and t-shirts), but all be identical under the hood. The Rebel Browser will allow you to consent to accessing porn, and when the Smart Browser reports that it is the Smart Browser to sites its users visit, they will only be served articles, ads and search results that make them feel smarter than everyone else. The Safe Browser will have an "Important Business" mode that will allow you to consent to accessing porn, but it will examine your face first to ostensibly determine if you're a serious adult. The Rebel browser will also silently examine your face, and for the same reason - to track your porn preferences.
The day it will be feature-complete is when sites get an API for bricking your computer, which is routed through an algorithmic federal judge.
IMHO the web started to go down this path in the late 2000s. Recent developments (manifestv3, DoH, youtube-dl, etc.) are only further indications of the continuing trend towards increasing corporate control and dumbing-down of the users (so they become less likely to oppose.)
Average users value performance, cost, and battery life. The cost efficiency and capabilities of cloud compute will always outpace those of consumer mobile devices. So remote compute with zero user control is inevitable.
A Stadia-like experience for everything is what the overwhelming majority of consumers want; they just don't know it yet.
I visit a lot of the same pages I did 10 years ago, but now they take as long as they would have taken 20 years ago to render.
I don't think this is a question of what users value, it's a question of what large software companies value. Non-technical users absolutely value their privacy and freedom, even if they don't put it in explicitly ideological terms: they want to pirate copyrighted material, they are annoyed by paywalls and ads, they are creeped out by how much social networks know about them.
The web (browser) is the only cross-platform operating system, even if some standards say otherwise. I'm sure we'll see docker-style virtual web environments, cross-machine abstractions to simulate datacenters for distributed computing (GPU and such), PWA-style webapps bootable from USB sticks and functioning as operating systems, various crazy extensions for upcoming Neuralink-style tech, virtual-reality frameworks, holographic displays, and so on and so forth.
Physics is never complete at any point.
The real questions are:
- For how long do we have the energy to keep not understanding things?
- Are the understandings leading to real improvements, or just more total energy consumption (for every energy reduction there is an equal or greater increase in total energy consumption)?
I'm committed to changing to a Raspberry Pi 4 as my main computer, with Raspbian (now called Raspberry Pi OS -- is that an improvement, though?) and Chromium, before the year is over.
I wish new standards would consider all the effects before releasing, so we can stop reimplementing the wheel and actually build a bike at some point.
Reinventing the wheel for the browser has more or less stalled at this point, and it's sad because I'm all for that: http://move.rupy.se/file/wheel.jpg
Software really worked OK back in the C64 days too; the only real improvement since then is 3D, and my position is that OpenGL (ES) 3 with VAOs is good enough.
But you are welcome to use Vulkan, Metal and DirectX 12 if you want!
No, it's right the way it is. Physics is a science. Physics is the understanding. The universe is terrain and sciences are maps -- though some are more localized and specialized than others.
'Physics' refers both to 'our knowledge/study of the physical' and 'the physical' in itself.
I'm really struggling to think of a combination of definitions for "cross-platform" and "operating system" that can make this statement true.
Reminded me of Microsoft saying they would no longer develop IE6 because browsers were done (i.e. Netscape had gone bust).
I can easily see a world where Chrome/ium becomes too bloated to work well, and a "new web" of thin, light, browsers with drastically reduced features but increased speed appears. They'll work with "fat web" servers but throw away half the features (so we'll end up designing two versions of each website).
Eventually most of the functionality of the old "fat web" will migrate across, obeying the 90/10 rule (the 10% of features that provide 90% of the value). With 90% of the value, plus increased speed, everyone will move to "thin web" browsers. Chrome/ium will be left where ActiveX is now: some non-bleeding-edge organisations will use it because it has features that didn't make it to the "thin web" and replacing it is a pain, but it's obsolete and no one really uses it.
So why has this been upvoted? Why is this on HN at all? It's ignorant nonsense. This should be obvious to anyone who's spent more than a few hours researching these topics. Please stop taking know-nothing randos seriously.
But that's not what people (including companies making SaaS, which are also stakeholders here), want from the web.
They want an app delivery mechanism mediated through a web browser.
So web browsers will be complete at the same time that OS-level frameworks, UIs, and libs are complete, i.e. when computing is "complete".
That is, never.
- HTML is worked on by WHATWG
- CSS is worked on by CSSWG
And IE wasn't killed by features moving too fast; Microsoft made the Edge browser and still does. Mozilla hasn't been killed by Google or competition; it's been kept alive in large part by Google (in some years, I think, 90% or more of its total income came from Google), so they get hundreds of millions. The problem at Mozilla is bad management over the last decade.
I'm not sure how well this vision represents reality as it already is today.
The CSS WG is a W3C WG, and they're very much published as W3C specifications.
No, they don't. Edge nowadays is just a Blink shell.
IE was 2 browser families ago from Microsoft.
Edit: admittedly, "just" is probably underselling it a bit, but it's still a far cry from a completely new engine.
Parent said: IE wasn't killed because the web moves too fast -- and used the presence of Edge (or EdgeHTML) as the argument.
But both IE and EdgeHTML were killed because Chrome moved too fast for MS to be worth it to maintain its own engine...
It sure is a problem, but I'd say it's false that Google's market dominance isn't an issue. Remember when H.264 became the standard just because Google decided it, after what must've felt like a fake-out to Mozilla? Mozilla then had to start working on a non-proprietary implementation of what Google already had ready.
This was back when people consistently complained about Firefox having fallen behind, because a single YouTube tab could make the entire browser lag, and when Chrom(ium) wasn't as dominant as it is today.
As I see it, the web as originally conceived was "ready" around 2003, and then again around 2009 when CSS media queries became mainstream to adapt to mobile usage. For me, the excessive push for browser APIs as a desktop replacement since then, and the proliferation of frameworks, is a generational thing, driven not by technology but by big data, SaaS (the prospect of extracting recurring revenue from customers rather than one-time license fees), and a new generation of developers getting into the game by leaving the status quo behind, solely for their own economic benefit. Google has masterfully played this game, and let the idea of open web standards bend the minds of developers who thought they were clever. Moz partnering with them under WHATWG seemed like a good idea at the time, but over time it just made the concept of a web standard arbitrary -- "whatever Chrome does" -- leading to Moz's demise and that of every other "browser vendor".
It'll take some stance and political will to reclaim the web.
Should a trifecta of corporations control resolution, alternative roots would be only natural. Over a certain market-penetration threshold this could form a perfect moat. Everything that made AlterNIC and its ilk fail is irrelevant now.
Edit: here's a rendered markdown conversion that is easier on the eyes: https://shrib.com/?v=nc#CapeStarling-6v1L9J
But I agree in principle.
The only way we have to find things is to use companies that have managed to visit and index all the sites, not through some search-and-index interface but by processing the page that the human sees and then trying to rank it, making the web ultimately heavily centralized.
Then there is the authorship meta-layer that doesn't exist: nothing to interleave content from the same author, say by email or some other identifier. Instead we've centralized that into temporal structures that vanish when you look back, like in a dream, in a "let a thousand flowers bloom" method where all you ever end up with is just a bunch of silly flowers.
Then there is the permanence problem: the internet is like a river that's always moving. Someone else has to rely on generosity to, yet again, centralize the historical record, so that hopefully things can exist at least as long as bad ink on terrible paper. Otherwise a couple of generations of thought will end up in metal garbage heaps.
I could have written all this, word for word, in 1993, but instead it's 2020.
Honestly, I'd say 70% of the fundamental problems are still unsolved nearly 30 years later.
There are no hard dependencies between web sites and links fail all the time. The web didn't succeed in spite of this, but because of this.
The decision to use one-directional links was a good design choice in retrospect. It has also kept the web agile enough to keep evolving.
Is it even possible to change this? We did have (or still have?) backlinks in the blogging world (Trackback/Pingback), but those are active mechanisms which simply do not scale well, for physical reasons.
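To make the "active mechanism" point concrete: a Pingback is an XML-RPC POST that the linking site's server must actively send to the linked site's advertised endpoint (found via its X-Pingback header). Here's a sketch of just building that request body per the Pingback spec; the URLs are placeholders:

```javascript
// The source page contains the link; the target page is being linked to.
// Both URLs here are made-up examples.
const source = "https://example.com/my-post";
const target = "https://example.org/linked-article";

// The Pingback spec defines a single XML-RPC method, pingback.ping,
// taking two string params: sourceURI, then targetURI.
const body = `<?xml version="1.0"?>
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>${source}</string></value></param>
    <param><value><string>${target}</string></value></param>
  </params>
</methodCall>`;

console.log(body);
```

Every link then implies an extra outbound HTTP request at publish time, plus verification work on the receiving end, which is exactly the scaling cost passive one-directional links avoid.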
> things don't flow based on concept but based on this clunky chunk called a page
The base is a document, which describes the content and interface. "Page" is just the naming for some. Some others are called "App". What else is there?
> We still only have one functional semantic for cross linking the clunky chunks called the hyperlink.
We do have other formats than HTML. HTML had different approaches to include more stuff. They did not work, for reasons. In the end, all dreams shatter on reality.
Is there any web page, rendered in a web browser, where both the page itself and the standard behind it give the smoothness of, say, opening a PDF, or even better, a native app? Does the Apple Store web page for selling the iPhone feel as smooth as the Apple Store app on iOS?
Jank (micro-pauses) is still everywhere, in both the browser and the web page itself. And these aren't web apps, just simple web pages.
I'd say web browsers (and the web standards) are far from complete.
I do agree about the "janks".
The question is what blocks the progress of making it complete; what kind of problem is it? I believe it's not a technical problem. Maybe incentives?
The problem is that so much of technology is standing on the shoulders of giants. If we re-write, how much of the giants do we decide to lop off?
For example, could you get Google, Apple, Mozilla, to all agree where to draw the line? Should we still use HTTP? Should everything use QUIC? An entirely new standard? What about DNS? But let's say they do agree.
Then what, they create entirely new browsers (I'll call Chrome2, Safari2, etc). Maybe they don't even use the DOM or HTML at all, but an entirely new setup. A side note – if the web was re-built today it probably wouldn't have "View Source." As you said, I imagine it'd lean into WASM and "isomorphic" code, which could just be WebGL/canvas/some new paradigm/etc.
So then what... everyone says abandon my original website and download a new browser to access the new one? Because that's what throwing out backwards compatibility looks like. And that seems like a non-starter to most people.
Browsers are too big to fail; too big to be thrown out and re-written. Our hope is incremental, not revolutionary. Things like WASM in browsers let us pursue new paradigms while also being backwards compatible.
Lastly, I'd say the incentives for a new web would be worse than an old web. The original web was written by tinkerers and tech startups like Netscape/Mozilla. The new web would be written by multibillion dollar companies who like and know how to build moats. It would have microtransactions (not necessarily bad by itself) and I worry would look more like an app store.
Flutter was started by Chrome engineers asking the question "what would the web look like if we could strip out all the things that makes it slow without worrying about breaking compatibility?"
The usefulness of the browser is that it's a cross-platform platform. You can run the same web apps on Windows, Mac, Linux, even mobile devices.
Turning the browser itself into an OS-level platform will just create another platform that will need to compete with all existing platforms while having nothing unique to offer – after all, the same web apps run just fine on existing platforms, by their very design.
What does ChromeOS have that MacOS or Linux do not have and can't have?
You could argue that someone could build a competitor to iOS and Android using ChromeOS, but why wouldn't the same someone build a competitor using... Android? It's open source too.
Because if they take Android source without Google's blessing, they would be left without Google maps, docs, and gmail. But Google can just as easily exclude mobile browsers from accessing their web services. In today's world it would be perfectly fine for Google to only offer their services through native apps on mobile, and it's up to them which platforms they choose to publish those apps on.
Oh and then there's Widevine. You need Google's blessing to decrypt that, and even if you get it, every content provider that uses Widevine DRM can block access to their content from your browser for any reason (you can't bypass that).
How far will your rebel browser platform go without being able to play e.g. youtube? Currently they only DRM certain types of videos, but what would stop them from flipping the switch and applying DRM to every video that is monetized on youtube, and blocking your browser on the basis that it allows adblockers that effectively prevent funding of those videos?
The notion that Google is somehow – anyhow – in a weak position is utterly ridiculous. Their moat is stronger than ever.
A local market is more than enough to keep one busy and profitable.
See how many companies in Africa, South America, or Eastern European countries actually care to buy Apple hardware to test Safari.
> If you refer to hundreds of things or people, you are emphasizing that there are very many of them.
Happy Firefox user here since Netscape Navigator days.
I am a happy Firefox user, but I don't fool myself: without Safari, Web === ChromeOS.
Agreed. I always thought that Chrome and ChromeOS were created so that Google could ensure that a platform always existed for their real products (which are offered over the web).
Web standards still have some glaring gaps that I'm not sure the current approach can tame. I propose "the web" be split into three standards:
2. Documents (existing HTML may be good enough with minor tweaks)
3. Productivity: CRUD and Data oriented interactive GUI
By splitting, each standard can FOCUS on doing what it does best rather than the watered-down, one-size-fits-all approach we have now. A given browser may support all 3, somewhat like how Java applets and Flash could run inside of HTML documents. (They may be plugins and/or independent browsers.)
But unlike applets and Flash, the standards wouldn't try to be full virtual OSes. Dump as much on the server as you can to keep client standards lean, and reduce the need for scripting by including common behaviors.
For example, a common GUI idiom is for a button to activate a window or form. Rather than rely on scripting, that action could be built into the GUI markup:
  <button label="Click to open My Window">
    <setFocus target="anotherWindow" modal="false"/>
  </button>
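For what it's worth, standard HTML has since grown a declarative version of this very idiom: the `popover` attribute plus a button's `popovertarget` attribute open an overlay with no scripting at all. A sketch (the ids and contents are made up):

```html
<!-- The button opens the popover declaratively; no JavaScript needed. -->
<button popovertarget="anotherWindow">Click to open My Window</button>

<!-- "popover" defaults to auto (light-dismiss); popover="manual" opts out. -->
<div id="anotherWindow" popover>Window contents here</div>
```

So "reduce the need for scripting by including common behaviors" is a direction the existing HTML standard has, slowly, been moving in anyway.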
We're already seeing this: consumers prefer native apps on their phones for virtually everything: shopping, socialising, reading news.
Surely the future us will laugh at web browsers as some ancient tech that used these weird markup languages to layout content that's monetised with ads into pages that you'd navigate back and forth in order to emulate an application?
New web browser engines are now impossible to implement due to the sheer size of specs. Surely we are nearing the end game?
Am I really that old?
Something that was stated, but not directly, is what's driven a lot of the latest additions to the browser: Cross-platform development.
The operating system provided the abstractions around various hardware implementations in the past, which meant being able to (possibly) run your software, if re-compiled, on different hardware platforms (in reality, some OSes weren't available except on a single platform, and others were frequently not straightforward to get compiling on different platforms).
The browser is providing abstractions to the same hardware around various OS implementations. Add in WebAssembly, and you can pick from a number of languages -- including those that were traditionally used to write apps not designed to run in browsers.
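The WebAssembly side of that abstraction is a genuinely small API. Here's a minimal sketch: the bytes are a hand-assembled module exporting one function, `add(a, b)` on 32-bit ints (a compiler for Rust, C, Go, etc. would normally emit these for you), instantiated with the standard WebAssembly JS API that works in every major browser and in Node.js:

```javascript
// Raw bytes of a tiny WebAssembly module, written out by hand for illustration.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // body: local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate synchronously (fine for a module this small).
const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(add(2, 3)); // → 5
```

The point is that the browser only sees these portable bytes; which source language produced them is invisible to it, which is exactly the OS-style abstraction role described above.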
But to the quote: I was cleaning out my basement, which quickly became a "treasure hunt" of sorts since I've been storing things that I took from my parents' house when I moved out.
My favorite find was a CD-ROM that was basically a CD-based search engine of the Internet. It was "Designed for Windows 95" and was probably a free disc given to me by a vendor rep when I worked at CompUSA in my teens. I'm tempted to see how many of these actual sites still exist (I'm sure none of the deep links do). Incredibly, these things probably sold very well back then. It even came with a browser (neither Mosaic nor Netscape; no clue whose it was).
But operating systems continue to innovate/improve/change at roughly the same rate as browsers these days, no?
So it seems like a total logical fallacy that the browser would ever be complete. People always come up with new ideas and better solutions for things users need, new hardware arises that needs software support and input, etc.
I know the essay is about more topics than just documents vs apps. Still that shows to me that "The principle of least power" doesn't easily apply to technology choices of web users.
GitHub was probably more convenient for the author than uploading a txt file somewhere.
> We can logically follow this to a future of a "split Web". There's going to be the "Google Era" Web that will live for a long time, and another iteration of the Web where us technologists will try our best to create something more to our liking.
All human endeavors tend toward Towers of Babel. Alternates shall arise and topple Google.
Honestly I'm not sure what I'd miss if I had to go back to Fx4.
The idea of "browser as OS" or "OS as browser" is not that new. That move started with the "Active Desktop" concept on Windows 95 (https://en.wikipedia.org/wiki/Active_Desktop).
And speaking about the browser... its main goal is to serve the following tasks:
1. To provide safe browsing experience.
2. To present content to the user.
Note that the order of these two tasks matters.
The initial Active Desktop effort died because the order of these two tasks was wrong in their first attempt.
From that point of view, "browser as OS" just increases the attack surface. And so more and more isolation layers will be added into browsers, making them less effective and performant than the alternatives.
Consider WebAssembly, for example. That was a desperate move to provide an option to run some computation-heavy code inside browsers at a speed at least comparable to native code. But it will always be slower than native solutions. Java vs. native code: we've been there, seen that.
To be short: a native OS and native solutions will always be more performant than ones emulated in the browser, just because of the order of those tasks.
In this sense the browser will never be complete. It may become asymptotically close to the OS but will never reach that point.
And note the price: along the way, it will become asymptotically close to the size of an OS.
As soon as you have a better GPU, you will have better native games that use the hardware to its maximum extent. They will work orders of magnitude better than what a WebGL emulation layer will offer you. So high-end games will always be native.
There will be no web browsers in the future!
What comes after web browsers? Only exciting times...!
"When will car technology be complete?"
The answer is that they won't be complete in the short term, and they will be made obsolete in the long term.
In parallel, we've replaced sane controls with frustrating and dangerous touchscreens, we've added countless types of comfort features, we've improved security, we've computerized engine controls, we've DRMed the car - ostensibly to prevent theft, but actually to route more money from aftermarket towards the manufacturers. Etc.
Cars do suffer feature creep, and not all of the features are beneficial to their owners. Just like with the web.
Our expectations about cars have definitely changed over time. I don't think it's even legal to sell new cars without seatbelts, something that did not exist when cars started. Even low end cars in the US have added features compared to low end cars of the past, such as power windows, power steering, power mirrors, and so on.
It's true that adding features is slower in cars, but that is because cars are partially hardware. It takes much longer to add features to hardware, because making copies is not free. But that does not mean they are unchanged.
The author states this in a way that makes it sound like it's possible, but that betrays an assumption that operating systems aren't moving forwards and innovating as well. The web will always lag behind the new features that come to OSs, and therefore browsers will always need to be changing in order to give web users access to those features.
At some point we will cut out the store-publishing step for certain classes of apps running in safe runtimes (Wasm with some native bindings and safe data management).
Then what is the difference? You might say compatibility across platforms. But compatibility really is broken in browsers today. So....
(I also have to point out that WebApps still mostly suck - and are they even used yet on mobile/wearables/VR ?)
Simple as that.