Hacker News

nearly all of those APIs are also considered 'harmful' by Mozilla[1]. Some have even been disabled after implementation because of this[2]. These were developed by Google for Chrome OS, and besides the privacy issues, they substantially increase attack surface for security vulnerabilities.

[1]: https://mozilla.github.io/standards-positions/

[2]: https://developer.mozilla.org/en-US/docs/Web/API/Battery_Sta...




Mozilla also killed WebSQL because the existing implementation was too mature...

I don't know what they're driven by, but it's not pragmatism.


There is too much opinion in your statement.

Mozilla opposed it, rightfully so, in that it would dictate that SQLite be the implementation used everywhere. Mandating the inclusion of SQLite is not a spec.

As much as I like SQLite and looked forward to it being in 2/3 of browsers, Mozilla made the right call. The web should be implementable entirely by the specification.

Google likes to define the spec as the identity function of the implementation. Popeye specs, "I yam what I yam and dats all that I yam".


WebSQL would have been a spec, could have been a living spec too. Start out with SQLite in all the major browsers, and then gradually have them diverge. Blink and Webkit started the same way. Independent implementation does not mean "implementation of uncommon history".

But somehow "paving the cowpaths" doesn't apply to tech that they don't find attractive.

Similarly, and that is actually a statement loaded with opinion, I've seen way too many self-proclaimed "spec hackers" at Mozilla. People who relish the joy of writing out ideas, I mean who doesn't love building castles in the sky, but who completely ditch the implementation. It doesn't matter if you have the most beautiful spec in the world if the implementations are shoddy, or if it specifies the wrong thing.

Web specs are the modern hacker's "waterfall" design process. Sure, everybody talks a lot, and there are many pretty documents that come out of it. But once you start implementing the stuff, you start to realise that all your assumptions were wrong, and now you've made a mess.

I think specs actually produce less diverse implementations. Because they are so easy to write compared to code, and because writing them doesn't give you immediate feedback on when you've reached a good minimal feature set, it's almost inevitable that you end up with way more stuff than you actually need. There is a reason that there are essentially only two multi-trillion-dollar companies that can keep up with that mess. And Mozilla would have died long ago if Google weren't keeping them alive to avoid antitrust investigations.

In all fairness, Living Specs try to acknowledge this, but somehow we still collectively pretend that they are more than mere documentation; that by calling them a "specification" instead of "documentation" they somehow make the web run.

Specs don't run the web. Code does.


> WebSQL would have been a spec, could have been a living spec too. Start out with SQLite in all the major browsers, and then gradually have them diverge. Blink and Webkit started the same way. Independent implementation does not mean "implementation of uncommon history".

You need to think about the barriers to implementation: if everyone ships SQLite, developers will inevitably write code which depends on that exact implementation, and anyone shipping something new will need to copy it, including unintentional behavior and bugs, to work with existing sites. That is extremely expensive and might lock in something we're going to regret later if someone finds a behavior which wasn't intended for this context and has security or performance issues.
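To make that concrete: SQLite's "type affinity" is exactly the kind of quirk sites would come to depend on. A quick sketch using Python's stdlib sqlite3 bindings (purely illustrative, nothing here is from any WebSQL draft):

```python
import sqlite3

# SQLite's type affinity quietly accepts a value that violates the
# declared column type; most other SQL engines would reject this INSERT.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")
con.execute("INSERT INTO t VALUES ('not a number')")  # no error in SQLite
print(con.execute("SELECT n, typeof(n) FROM t").fetchone())
# ('not a number', 'text')
```

Any site that comes to rely on inserts like that succeeding forces every later engine to reproduce the behavior.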

Anyone working on the web should be especially sensitive to this since we came close to having the specs for all web technologies be “whatever IE6 does”.


How is that different from what we have now?

Living specs don't give any guarantees, yet they still "pave the cow paths" while keeping ridiculous bugs and behaviours for backwards compatibility, and breaking existing specs for convenience. It all depends on the person dealing with the problem.

Nobody expects compatibility with existing specs, why should they for WebSQL? Especially when it's a living standard.

If those things were true, we would all be using the same browser by now and never see new standards, and Blink and WebKit would never have diverged from one another.

Open source quarrels basically guarantee a steady supply of competing forks.


Also SQLite wasn't designed to run untrusted SQL code. It's an embeddable SQL engine, not a web SQL engine.
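Right, and to even approach safety you'd have to bolt sandboxing on from the outside. SQLite does expose an authorizer hook for vetoing statements before they run; here's a rough sketch via Python's stdlib sqlite3 bindings (the deny-list is illustrative and nowhere near complete):

```python
import sqlite3

def deny_dangerous(action, arg1, arg2, dbname, source):
    # Veto ATTACH (filesystem access) and PRAGMA (engine config) outright;
    # a real sandbox would need a much stricter allow-list than this.
    if action in (sqlite3.SQLITE_ATTACH, sqlite3.SQLITE_PRAGMA):
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

con = sqlite3.connect(":memory:")
con.set_authorizer(deny_dangerous)

con.execute("SELECT 1")                 # plain queries still work
try:
    con.execute("PRAGMA journal_mode")  # vetoed at prepare time
except sqlite3.DatabaseError as e:
    print("blocked:", e)
```

And even with the hook in place, resource exhaustion (huge joins, unbounded memory) is still a separate problem the embedder has to solve.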


That doesn't make it any less of a solid foundation for a web SQL engine.

It's like getting gifted a car and complaining that you'd rather have winter tires, so you start building one from raw metal.


> Start out with SQLite in all the major browsers, and then gradually have them diverge. Blink and Webkit started the same way. Independent implementation does not mean "implementation of uncommon history".

The point is that they would clearly never, ever diverge. Any sqlite quirks (of which there are plenty) would be enshrined into the backwards-compatibility requirements of any browser that used it. Plus building a database isn't simple - so why not just use sqlite? Setting out to fork or rewrite sqlite is not a task that makes any sense.
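For a taste of those quirks, here's one that standard SQL would reject outright, sketched with Python's stdlib sqlite3 bindings (just an illustration of the kind of behavior that would get enshrined):

```python
import sqlite3

# Arithmetic on non-numeric text doesn't error in SQLite: the string is
# coerced to a number using whatever numeric prefix it has (else 0).
con = sqlite3.connect(":memory:")
print(con.execute("SELECT 'abc' + 1").fetchone())   # (1,)
print(con.execute("SELECT '3abc' + 1").fetchone())  # (4,)
```

Once sites depend on answers like these, every future engine has to give the same ones forever.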


The exact same argument could have been made for Blink and WebKit, which didn't turn out to be true.


How would you migrate to a different SQL implementation? It would have to be 100% SQLite compatible in the early days because that's what all websites would expect. It makes migration nigh impossible.

That said, as long as the SQL implementation they chose is free and open source, I'm not sure this is such a bad thing. I mean, we are also stuck with JavaScript in the browser and that hasn't been a total disaster. The whole point of standardization is to choose one particular solution and have everybody use it.


I think your second argument also applies to the first, no? Any technology that is implemented in a major browser, be it JS, WebKit or SQLite, creates incentives to port it to other platforms. Web developers don't expect 100% compatibility; they are so used to things behaving differently and being broken across browsers that it's actually surprising when something just works from time to time.

If anybody was expecting 100% compatibility all the time, we wouldn't get any new standards and would all be using Chromium.


There is a lot to unpack in your post, but I get the gist.

You are free to use SQLite on Wasm, in your browser, you break no one and no one breaks you.

Wasm was designed well from a spec and community perspective. Google matured and Mozilla matured, and in the end all the browser vendors got together and designed something that lots of folks can implement without multimillion-dollar development efforts.

You know, I have written web apps that use SQLite and Lua running in the browser. They don't need to be included inside the browser, nor should browser vendors have to worry about them.


Well that's kind of a different argument. But one I can get well on board with.

We should kill JS and EVERY web spec except for WASM and WASI. Take the best parts of HTML and CSS and implement a virtual-DOM / immutable-data-driven document format for WASI.

Focus all our efforts on carving useful capabilities for WASI and end this web nightmare once and for all.

Not realistic, but a man can dream...


It is realistic and at some point, one of the browsers will be a shell that runs Wasm and browser updates will just be Wasm.


The Birth and Death of JavaScript


I was under the impression that the "by specification" idea was generally tossed out with HTML 5, where the specification started to describe the current implementation. And this was cheered by everybody. What has changed?


The specification describes what implementations should do to be interoperable. As opposed to what someone wishes implementations were doing but has no hope of convincing them to do, which was the major change with HTML 5.

But the fact that there are multiple implementations remains, and it remains a goal that one should be able to create a _new_ implementation by implementing the spec. Notably, this goal was not achievable with the pre-HTML-5 specification.

In the specific case of WebSQL, if someone were to actually create a specification for it that didn't boil down to "run this exact version of SQLite and pass things on to it", that would have allowed for the "possible to create an implementation from the spec" goal to be achieved. But no one ever stepped up to do that.


>because the existing implementation was too mature.

That's not what I gathered from their official response to the deprecation[1]. But the major problem with WebSQL for Mozilla seems to be this:

>We don’t think it is the right basis for an API exposed to general web content, not least of all because there isn’t a credible, widely accepted standard that subsets SQL in a useful way. Additionally, we don’t want changes to SQLite to affect the web later

edit: and once again: security might have been a deciding factor, too[2].

[1]: https://hacks.mozilla.org/2010/06/beyond-html5-database-apis...

[2]: https://news.ycombinator.com/item?id=18685296


Yet years later there is still no good solution for that space and IndexedDB is a total clusterfuck.

I'd be far more worried about the mess at the core of the web, CSS and rendering, than about exploitable bugs in SQLite. The fact that an RCE in SQLite is HN-worthy is indicative of that. Browsers have tons of RCEs that are fixed every year, but it happens silently because everybody is so numbed to it.

The quoted argument is a cop-out on their part. HTML is also a "Living Standard", a.k.a. "we just implement whatever we feel like and write it down once we feel it has stabilised a bit". They could have done the same for SQL, but NoSQL was in vogue at the time, so they pretended that SQL needed to somehow hold up to much higher standards than the usual mess they produce.

SQLite is probably one of the few pieces of software that is actually trustworthy, unlike the dumpster fires of C++ and feel-good essays that we call browsers.


Web standards are meant to have multiple independent implementations. That’s pretty much the entire reason that Google pays for Firefox at this point. “Everyone should just use SQLite” is a slippery slope to “everyone should just use Blink”.


Blink is exactly a good example of why starting out with SQLite would have been a good idea.

Blink is a fork of WebKit, an engine so much better than the alternatives that it almost overnight became the de facto standard.

Did WebKit ruin the web? Eventually Apple and Google disagreed and Blink was forked off of WebKit.

The same thing would have happened to SQLite as the foundation of a living WebSQL spec.

It's ironic that Mozilla pushed IndexedDB through, yet they were among those too lazy to provide their own implementation. Instead they simply dumped everything into SQLite, the same strategy Apple used. They left it to Google to provide the only differing implementation, based on LevelDB.

But hey, it's totally important to have multiple independent implementations...


There are at least three implementations of IndexedDB. Two are built on top of SQLite, but are different codebases aside from that, as far as I'm aware - I don't think Mozilla just merged in Webkit's IndexedDB implementation. Firefox's implementation came out well before Safari's, for a start.

What would be your opinion if everybody just decided to use Google's implementation of WebRTC wholesale, rather than building out their own systems? What if Mozilla decided to rebase on top of Blink, and give up on building its own rendering engine, tomorrow?


I wouldn't mind if they realised that their implementation was bad enough to be replaced; it would be a win for all. The implementations would start to diverge again anyway. The engineering time saved by piggybacking off a working implementation can then be reinvested into improving the fork massively, or into simplifying the design and specification.

Like I said, WebKit and Blink are the perfect example of this.

As for the first point: it doesn't really matter if they have their own glue code; all the important parts are shared.


Right, so the web was a wonderful place when IE6 was the only browser anyone developed for, and for the short period of time when Chrome was the only browser anyone developed for. This definitely didn't affect anyone's ability to choose a browser which met their needs, and definitely didn't result in half-baked and overly complex specifications being forced through the standards process by the only browser vendor with any power.

If Google got their way, we'd be shipping modified LLVM bitcode to clients ("PNacl"), and every browser would be shipping some random fork of LLVM stuck in the past forever. If Microsoft got their way, GMail would be an ActiveX plugin.

Gecko has massive improvements over Webkit/Blink, btw - WebRender is huge.


You're setting up a false dichotomy, and you're mistaking competition of code for competition of institutions.

Having different organisations with different goals is what prevents these scenarios.

Otherwise the WebKit/Blink thing wouldn't have been successful.

Mozilla could also have forked Blink and started replacing it with Rust, and you would've gotten the same improvements.

Mozilla could have taken SQLite as a foundation, started a living spec, and immediately begun translating the codebase to Rust. The effort would have been the same as for their half-assed IndexedDB stuff, but the result would have been much better.

It doesn't matter where a code base comes from; it matters where it goes. And when it comes to diversity of implementation, the repelling forces of different ideas, viewpoints and aesthetics that normally result in dreaded project forks work to the advantage of all in browsers.

Conway's law: the software of a project reflects its social structure.


"Rewrite it in Rust" is not the only difference between Gecko and WebKit/Blink (which are still similar enough that they might as well be one codebase), and believing so is showing your bias. WebRender, for example, is not simply "rewriting part of a renderer in Rust". There are significant differences between how Gecko and WebKit handle media under the hood, and both have pushed various specifications that would be easy to implement in one but not the other. Google are, admittedly, much better at being incredibly loud about "standards" they try to force through.

In theory, the purpose of a standard is to allow other people to implement it from the spec. The spec cannot be "just use this existing codebase". Otherwise we'd have one HTML parser that sits entirely undocumented, and the HTML spec would be "do whatever libhtml does"; we've seen that in the form of OOXML. The media streaming spec would be "just use this binary blob from Adobe, or you can't do video at all".

If I came along today and wanted to implement WebSQL, which is entirely specified as "do whatever SQLite does", from scratch... how exactly would I start? In theory, right now, with enough time and money, I could implement a JavaScript interpreter or HTML renderer or whatever else without ever referring to any other browser's source code or depending on anything: a clean-room implementation. Some companies still actually do that, because WebKit, Blink and Gecko don't meet their needs and wouldn't without a complete rearchitecture. Imagine if the JavaScript spec was "just do whatever V8 does", and we could never get things like QuickJS or Duktape.

When I, a web developer, come across something that looks like a bug in the One True Codebase, how do I know whether it's a bug or something someone forgot to document properly? What if that bug isn't present in another implementation? Do we have to be 100% bug compatible with some arbitrary version of SQLite/V8/Blink forever? Getting rid of most "quirks" was the best thing to happen to the web from a developer perspective in a very long time, IMO.

What about when someone comes along and suggests something that would work really well in the One True Codebase, GeBlinKit, but it turns out that nobody else with a different code design could reasonably implement it?


You're really bringing up false dichotomies all the time.

Nobody ever argued that it was about the programming languages or equal implementations, but about project stewardship and diverging code bases. They influence each other; it's not only about one or the other.

I don't know where you get your weird 100% bug-compatibility idea from; that's literally how nothing is handled anywhere. This is also orthogonal to specs: you can have specs that completely dictate implementations (like CORBA) or that are super loose in what they allow (ANSI C).

There are not only reference documents but also reference implementations; as projects grow, it's OK to diverge from them and find common ground in other documents like specs. Sometimes they cover reasonable behaviour so well that they can work as an alternative to a specification, like SQLite and https://sqljet.com/ . That doesn't mean that they'll never change; SQLite regularly has bugs discovered and fixed. If the SQLite devs don't even adhere to your assumed "aLL bUgS aND BEhAViOUrs aRE SAcrEd AnD MUsT Be KEpT InDeFInitElY" philosophy, why would anybody else?

As if there were some kind of weird, rigid, black-and-white process involved with these complex projects: either a good base implementation with 100% backwards compatibility and no spec ever, or waterfall spec development followed by implementations that asymptotically approach the spec.

Where there's a will there's a way; these projects and documents are all about people and the ways they collaborate and work. It's not as rigid as you make it out to be.

>When I come across a bug how do I know whether it's a bug or something someone forgot to document properly? What if that bug isn't present in another implementation?

You do what you currently do. You go to a place where the people that steward the project reside and you ask. Why and how do you think specs get revised? They contain ambiguities, bugs, and unspecified behaviour. Somebody stumbles upon it, and asks a question.

>What about when someone comes along and suggests something that would work really well but it turns out that nobody else with a different code design could reasonably implement it?

You'd do what you currently do. You talk about it, and in the end you might even write it down somewhere, in a spec, in an rfc, in a piece of documentation.

You seem to think that SQLite would stay the reference implementation forever, which is simply not true. It's a good starting point, yeah. But WebKit didn't stay the reference implementation either, nor did Netscape.

Don't be so rigid.



