Hacker News
WebKit: An Objective View (robertnyman.com)
57 points by rnyman on Feb 14, 2013 | 21 comments



Because of the news, I downloaded and tried out Opera again last night. It was the first browser I switched to from IE6, before Chrome came along and won me over with its minimalistic UI and amazing omnibar.

There is still one thing I miss from Opera to this day.

Pretty much instant go-back/go-forward times. Go to any static page, then another, then back and forward is nearly instant in Opera.

In Chrome, pages get reloaded and can sometimes take 1 to 5+ seconds if some server lagged out. Which is ridiculous if you just want to go back to where you _just_ were a millisecond ago, because something caught your attention after you had already clicked on some link. Why don't they keep these things cached?

So my question is: is that a browser-level feature or rendering engine-level one? Will Opera preserve it? Will it finally come to other browsers?

Note: Safari has recently added the two-finger swipe that reveals the previous/next pages. However, they're using cached images while the actual page gets loaded. Better than nothing, but I still prefer Opera's true blazing speed here.


Opera is pretty aggressive and intelligent with caching, which gives it a reputation as a memory hog in benchmarks, but it apparently scales back in resource-constrained environments. To my understanding, Opera caches not just the page source, but the parsed and loaded DOM tree of recent content. This makes for fast go-back times, as well as (sometimes) preserving the state of dynamic pages.

That "ought to be" part of the browser, not the renderer, since it's hanging on to content that the renderer is already done with. But I'm not familiar enough with either architecture to answer for certain.
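The caching behavior described above can be sketched as a small LRU cache of parsed page state. This is a hypothetical illustration, not Opera's actual implementation; the class and method names are invented:

```python
from collections import OrderedDict

class BackForwardCache:
    """Minimal sketch of a back/forward page cache: keep the parsed state
    of the most recently visited pages in memory, evicting the least
    recently used entry when over budget. Hypothetical illustration only."""

    def __init__(self, max_pages=5):
        self.max_pages = max_pages
        self._pages = OrderedDict()  # url -> parsed page state

    def store(self, url, parsed_state):
        # Re-inserting moves the entry to the most-recent position.
        self._pages.pop(url, None)
        self._pages[url] = parsed_state
        while len(self._pages) > self.max_pages:
            self._pages.popitem(last=False)  # evict least recently used

    def restore(self, url):
        # A hit means "go back" can reuse the parsed state instantly
        # instead of refetching and re-parsing the page.
        state = self._pages.pop(url, None)
        if state is not None:
            self._pages[url] = state  # mark as most recently used
        return state
```

The trade-off the comment mentions falls out of `max_pages`: a bigger budget means faster go-back at the cost of memory, and a resource-constrained device would shrink it.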


This. I've got a mobile device with 32GB of unused space, but Android wants to reload from the network? My laptop has 1TB, but Chrome wants to reload from the network when "restoring all tabs"? Look, Chrome, I'll tell you when it's safe to refresh, mkay?


If your browser started storing everything on your hard drive, your computer would likely slow to a crawl. The problem is RAM, not disk space.


The WebKit project consists of four components:

  - WebCore - the rendering engine
  - JavaScriptCore (aka SquirrelFish or Nitro) - JavaScript engine
  - WebKit - synchronous API
  - WebKit2 - asynchronous API
All WebKit-based browsers use the same rendering engine (that is, WebCore). Only some of them use the other WebKit components.

Google did not fork WebKit to implement Gamepad API and WebGL, those features are implemented in upstream WebKit project (check http://goo.gl/5M9gN). Safari does not support them probably because of its longer release cycle.

It's true that there is no single version of WebKit though. Browser vendors are free to take any SVN snapshot, replace any component, toggle on/off any feature and brand it with any version number they prefer.


>Safari does not support them probably because of its longer release cycle.

Safari doesn't support them simply to avoid having Mobile Safari apps compete with the App Store. Same reason there's no in-DOM audio playback, IMHO. iAds are allowed to use WebGL, but not ordinary web pages.


I believe at least in Android and iOS the browsers do share the same components.


Prefixes need to move away from vendor names and towards signature code names. If Gecko and WebKit both support the same signature, it should be one line of code to call that function in both browsers.

-bananaPageStyle: 2up, turn; -pineapplePageStyle: turn, 2;

I don't care which browser you have so long as it understands the syntax for the beta feature I want to use.

This also addresses the fear of a WebKit monoculture on mobile. If Firefox implements the same signature, it will behave correctly on WebKit-first pages without doing something silly like adopting the webkit prefix itself.
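The proposal above can be sketched as follows: a feature is keyed by its signature name rather than a vendor prefix, so any engine that implements the signature accepts the same declaration. All names are invented here, following the banana/pineapple example in the comment:

```python
# Hypothetical sketch of signature-based prefixes. Engine names and the
# "bananaPageStyle" signature are invented for illustration.
class Engine:
    def __init__(self, name, signatures):
        self.name = name
        self.signatures = set(signatures)  # beta features this engine ships

    def understands(self, declaration):
        # "-bananaPageStyle: 2up, turn" -> signature "bananaPageStyle"
        prop = declaration.split(":")[0].strip().lstrip("-")
        return prop in self.signatures

# Two engines that independently implemented the same signature:
gecko_like = Engine("gecko-like", {"bananaPageStyle"})
webkit_like = Engine("webkit-like", {"bananaPageStyle"})

declaration = "-bananaPageStyle: 2up, turn"
```

Under this scheme the author writes the declaration once, and both engines accept it; an engine that hasn't shipped the signature simply ignores it, with no per-vendor duplication.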


I'm somewhat confused. I'm not into the whole web development thing, but what is so terrible about all browsers sharing a common implementation of at least the basic features that pertain to the alignment, orientation, sizing etc. of elements on a page?

Browsers can still differentiate themselves through a faster JS interpreter or new fancy teleconferencing APIs all they want.


> I'm somewhat confused. I'm not into the whole web development thing, but what is so terrible about all browsers sharing a common implementation of at least the basic features that pertain to the alignment, orientation, sizing etc. of elements on a page?

If everyone shares the same underlying implementation, then standards are meaningless: The standards body can say whatever they want, but in practice all that will matter is what actually runs on the single implementation.

If there is a bug in that implementation, everyone will code around that bug. If there is more than one way to implement a feature in the standard, people will code for the way in the single implementation. And if there are extra non-standardized features in the single implementation, people will use those.

So the standard will be meaningless. Does that matter? In the short term perhaps not. But the only reason WebKit could achieve what it achieved was that there was a strong standard. IE's dominance was broken and multiple browsers existed, and WebKit could render the same pages by supporting the same standard.

It was hard to break IE's dominance, and if WebKit becomes the new single implementation, it will be similarly hard to make it possible for a new browser to show up.

Does that matter? Well, when there is a standard, you can write a better codebase that renders the same content. WebKit, like all other current browsers, is written in C++, a language that is less secure and less friendly to parallel processing than some alternatives. If someone wants to build a better browser than WebKit in another language, that will only be possible if there are standards for web content - otherwise, it will be a chase after WebKit-specific bugs and features. Bug-for-bug compatibility is extremely difficult to achieve, and becomes harder as the web includes more and more content. So it will be harder to break a WebKit dominance than an IE one.


> If everyone shares the same underlying implementation, then standards are meaningless: The standards body can say whatever they want, but in practice all that will matter is what actually runs on the single implementation.

How is that different from the case today? The web is moving much faster than the standards bodies, and their inaction has led to their dwindling importance. The W3C hasn't even ratified HTML5 yet (they're hoping to do so in 2014!).

Within the time frame of the W3C discussing HTML5, Google started Chrome and has shipped 24 versions. Not only that, they aren't waiting around for the W3C to vote and have shipped a ton of emerging tech: WebGL, WebRTC, NaCl, Web Speech, Web Intents, etc.

How is this bad? We're building stuff now instead of waiting around for a decade.


> WebGL, WebRTC, NaCL, Web Speech, Web Intents, etc etc etc.

The difference is that all these APIs, with the exception of Native Client (which is unlikely to ever take off beyond Chrome), either are standardized already or have standard drafts being worked on. Mozilla does not have to go hunting around in WebKit code to figure out how to implement them.


> have shipped a ton of emerging tech: WebGL, WebRTC, NaCL, Web Speech, Web Intents, etc etc

Except for NaCl, those are standardized technologies, and almost all have multiple implementations (generally at least Chrome and Firefox). It is easy for other browsers to implement them because there is a spec for each, and the multiple implementations of it have hashed out ambiguities and issues in the specs.

If there was just WebKit, that wouldn't be the case.


At least as far as I'm concerned, it's that that implementation is (a) written in a very unsafe language (C++) and (b) not particularly amenable to multicore CPUs without major rewriting.


Having a common implementation is extremely important, for instance a common implementation of HTML.

The problem arises with regards specifically to control and who has it. As others have mentioned if one entity has complete control then standards committees will be beholden to the controlling organization, effectively making them meaningless.


I don't think that even faster JS engines would improve overall performance of web apps significantly. DOM performance seems to be the bottleneck currently and another arms race in this area would be beneficial for the web.


Can't Opera open source the Presto engine? If they want to contribute some of their Presto ideas to WebKit, open sourcing it could mean others do that work for them - if they really were good ideas.


One of the Opera engineers has commented on this. They did talk about it, and concluded that it'd be about a year of work to usefully open source Presto. It has licensed third-party components that would need to be replaced, so they can't even just do a code dump.


They could just release an unusable copy without the third-party components, and leave the replacement to the community.


You're assuming those components are cleanly separated from the rest of the codebase. Especially if they were modified, this is not necessarily easy to determine.


Found the grammatical error "it's own." Immediately assumed author was uneducated and closed the tab.



