> None of this precludes a changing of the guard in the future. Just ask the gcc guys about egcs and llvm…

By all means, ask the LLVM people about all the work they had to do to overcome the single-implementation status of gcc. clang must support gcc's command-line flags and behavior very carefully, and it still cannot build all open source projects, simply because so many open source projects - including the Linux kernel, btw! - have been designed with only gcc in mind.

LLVM managed to overcome that through a lot of effort. LLVM is funded by Apple, a massive multinational and one of the largest tech companies of all time. Not all new projects have that luxury. In an ideal world, you wouldn't need those kinds of resources to challenge an existing implementation.




LLVM was created by one person, just like Linux. Both projects would not be where they are today without the efforts and monetary contributions of many others, including corporations. These are examples of the system working, IMO. Giants can be felled, tools and infrastructure can be improved for all.


If by "working" you mean linux succeeding through huge amounts of funding by IBM and others and LLVM through huge amounts of funding by Apple and others. Yes, both were founded by one person, but that is highly irrelevant here.

Both of your examples clearly show that it takes huge resources to overcome a single implementation in a field. That is far from optimal: it means the barrier is so high that innovation is being stifled.

As another example, look at the single-implementation status of Microsoft Office. Despite huge investments and efforts by multiple parties in the industry, it remains essentially unassailable.

The best way to avoid that is to not have a single implementation, but rather to have standards, and to have good open source implementations of those standards.


> Both of your examples clearly show that it takes huge resources to overcome a single implementation in a field. That is far from optimal: it means the barrier is so high that innovation is being stifled.

I think it's more optimal than the alternatives tried so far. You're ignoring the "period of peace" between upheavals during which (almost) the whole world is working together to make something better for everyone. That more than makes up for the difficulty of dethroning (or forking) the king when needed.

Office is a closed-source product controlled by a single company, not analogous at all to WebKit.


I don't disagree that there is a benefit to a monoculture as well. It does avoid redundant effort.

But the cost is quite high.

10 implementations might be a lot of overhead. But a monoculture of 1 is too little. 2 or 3 might be an optimal number.


We already have two or three powerful entities working on WebKit, pulling it in whatever directions suit their needs. If they ever pull hard enough or far enough in different directions, it could tear (fork) and the cycle begins again. And anyone is free to learn from WebKit and create something better (as Apple learned from Gecko before adopting KHTML).

"Monoculture" is a loaded word. The differing priorities that might manifest in completely separate web rendering engines still have plenty of room to manifest when multiple big players are working on WebKit, with nothing stopping any of them from forking if the differences get too large.

(And anyway, Gecko does still exist, after all…)


With a very optimistic outlook like yours, there is nothing to worry about: Everything will work out, these are just cycles in the industry. What could go wrong?

But we already see problems today from WebKit's dominance on mobile. Non-WebKit browsers have trouble rendering the mobile web, which was designed with only WebKit in mind. It got so bad that Opera just gave up and adopted Chromium (not even just WebKit).
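One common way this plays out is user-agent sniffing that only looks for WebKit. A hypothetical sketch in TypeScript (the class names are made up, but the only-check-for-WebKit pattern is the real problem):

    // Hypothetical sketch: the page sniffs only for WebKit and serves a
    // degraded fallback to every other engine, regardless of what that
    // engine can actually render.
    const isWebKit = /AppleWebKit/.test(navigator.userAgent);

    if (isWebKit) {
      document.body.classList.add("full-mobile-experience");
    } else {
      // Firefox and IE on mobile land here, even when they support
      // every feature the page needs.
      document.body.classList.add("legacy-fallback");
    }

The sniffing itself is trivial; the problem is that it bakes one engine into the page.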

The remaining non-WebKit browsers, IE and Firefox, are left with an even bigger problem, and it is even harder for them to disrupt the WebKit mobile web. It would be harder still for a completely new engine.

So general arguments about cycles and all that might sound good, but we already see the damaging effects of WebKit monoculture (you argue it's a loaded word, but it fits).


I'd say we already see the benefits of having so many people work together on WebKit. Would we be better off if Google had to create and maintain its own desktop and mobile web rendering engine from scratch? If there's a monoculture problem in mobile web browsing, it's due to the disproportionately large percentage of it that's done using Mobile Safari on iOS. Single-vendor/closed-platform will always be a problem. WebKit is neither.

Yes, overuse of vendor prefixes in CSS and other browser-specific features is bad. But that's an authoring issue as much as it's a WebKit issue. Having -moz-*, -o-*, and -webkit-* prefixes plus JavaScript shims to hide the differences between browsers is a great argument for standards, but not a great argument for a larger variety of independently developed and maintained web rendering engines.
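To put the shim part concretely, the usual pattern looks something like this (a minimal sketch in TypeScript, using requestAnimationFrame as the example API; whether every vendor actually shipped each prefixed variant is beside the point):

    // Fall back through the vendor-prefixed names until one exists,
    // then paper over the engines that have none of them.
    const w = window as any;
    if (!w.requestAnimationFrame) {
      w.requestAnimationFrame =
        w.webkitRequestAnimationFrame ||
        w.mozRequestAnimationFrame ||
        w.oRequestAnimationFrame ||
        function (cb: FrameRequestCallback) {
          // Last-resort timer fallback, roughly 60 frames per second.
          return window.setTimeout(() => cb(Date.now()), 1000 / 60);
        };
    }

    window.requestAnimationFrame(() => {
      // Animation work that now behaves the same in every engine.
    });

Every page that ships this kind of glue is hand-writing compatibility for N engines, which is exactly the work a finished standard is supposed to make unnecessary.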


> I'd say we already see the benefits of having so many people work together on WebKit.

Of course, as I already agreed before. There are benefits to centralization.

It's a question of degree, not absolutes. As I said, 10 or 100 might be too many rendering engines, while 1 is too few. 2 or 3 seems, to me, to be optimal, but again this is a matter of degree, so others may prefer a few more or fewer.

> Yes, overuse of vendor prefixes in CSS and other browser-specific features is bad. But that's an authoring issue as much as it's a WebKit issue. Having -moz-*, -o-*, and -webkit-* prefixes plus JavaScript shims to hide the differences between browsers is a great argument for standards, but not a great argument for a larger variety of independently developed and maintained web rendering engines.

Agreed, this is not just a WebKit monoculture issue - plenty of other problems in that area as well, as you say.


In general, I agree with you. But in this specific case, we may not be able to extrapolate Linux's and LLVM's success to the web. Yes, you can create a toy kernel for yourself. You don't have to be POSIX-compliant from day one, or run OS/2 apps, or whatever. And you can create a more standards-compliant compiler that's not capable of compiling most programs that use gcc's esoteric features. They can have little niches for years, and slowly gain traction (from end-users and also developers).

But I don't think you can do the same in the browser space. If you want to create a new rendering engine, it absolutely, positively has to render 95+% of the most-visited websites from the early stages of development (before you "ship" a browser). Nobody would use a half-baked browser that's unable to render most websites. So you also have to support WebKit's bugs-turned-into-standards.

In other words, you don't compile 500 different programs in a single day - if LLVM can compile the one program that your company is developing faster and better, it's a good fit for you. But you visit hundreds of websites a day. If a new engine gets even 10% of them wrong, that'd be a showstopper.

So, your choices are to either fork WebKit, or create a new engine that "simulates" the mainstream WebKit browsers. Both result in WebKit becoming more and more of a de facto standard.



