This is an article based on a comment(!) to a NYT article. The comment itself is just a repost of the original (and now discredited) Oehmen "why I'm not worried" blog post. So this BI "article" is a repost of a repost.
Yeah, that doesn't agree with http://aws.amazon.com/cloudfront/sla/, which says you get 25% back for any monthly downtime greater than 1%. If you dismiss the extreme cases (50% downtime still only earns a 25% credit), it's not bad.
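The arithmetic is simple enough to sketch (a hypothetical helper; the 25%-for->1%-downtime tier is taken from this comment's reading of the SLA, not verified against the current terms):

```python
def cloudfront_credit(monthly_availability: float) -> float:
    """Service credit as a fraction of the monthly bill, under the
    commenter's reading of the CloudFront SLA: a flat 25% back
    whenever downtime exceeds 1% (availability below 99%)."""
    return 0.25 if monthly_availability < 0.99 else 0.0

# The "extreme case": even 50% downtime earns the same flat credit.
print(cloudfront_credit(0.50))   # 0.25
print(cloudfront_credit(0.995))  # 0.0
```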
Complex VM? New UI metaphors? ...W3C? Sure, that spec oughta be ready in 20 years or so....
It's a vendor job, not a committee spec. They can take the arrows in the back, keep what works, toss what doesn't. I personally think that modern JS+JIT implementations are nearly fast enough: the DOM and its legacy behavior are becoming the real problem now.
The pain of developing web applications is largely dealing with cross-browser DOM and CSS issues. Swapping in a different scripting language may give you faster code execution, classic class-based inheritance and a packaging system, but as far as I can see it does nothing to help with layout issues, widget creation and other UI issues.
The pain of developing web apps today. Having faster execution makes a new class of web apps possible. (The other two things you mention are incidental next to having native code speed.) The longer you've been developing on the web, the worse your blinders are when it comes to what we could really do with native-code speeds.
OK, how exactly? Web apps now, and presumably in the future, are essentially engines focused on manipulating a DOM structure, which is then rendered (with any luck, correctly) by the browser, right? So unless you're talking about effectively ignoring the DOM and rendering applications solely via use of Canvas or SVG, what is the magic sauce that flavors this <i>new</i> class of web apps?
Image and video manipulation; alternate widget sets (which nowadays tend to require at least one of heavy image manipulation or OpenGL to really work correctly); 3D applications that actually work and aren't just static models rotating (because now you can afford to manipulate vertices with some intelligence); online games that combine these techniques; apps that grab video from your webcam and do something with it (Mozilla demonstrated that: http://arstechnica.com/open-source/news/2009/02/mozilla-demo... ). The dream of doing distributed computing by just visiting a web page could come true (like for X@Home). An OS in your browser would become not-a-joke. Native VNC instead of Flash. Native encryption run by the app instead of the browser, possibly permitting ssh-in-the-browser. Actual typesetting (surprisingly computationally expensive) could be implemented, allowing actual competition in the Office space.
You rather proved my point, unfortunately. Web apps are not about "DOM manipulation". They are about delivering no-install applications over the internet. They have historically been about DOM manipulation because that was all they could afford to do performantly (and that only barely at times). As that changes, so does the web. We're getting a ways along this path anyhow (http://code.google.com/p/quake2-gwt-port/ ), just with increasingly fast JS and other tech, but it's only going to grow more.
We're talking (or at least I thought we were) about replacing the scripting language in a browser, which visually renders the structure and styling of a DOM. That's how browsers work.
Replacing the scripting language of a browser does not magically make the applications you describe possible, any more than it's possible to draw high quality vector graphics on a 5250 green screen. You're presuming all sorts of hardware-accelerated graphics, network connectivity, font manipulation and many other fundamental sorts of hardware manipulation that just aren't possible within the confines of most of today's web browsers.
What you describe is indeed possible -- e.g. Silverlight and AIR -- but putting a new scripting VM in today's browsers is not going to get you there. You're not talking about web applications; you're talking about a new class of web browser.
The web browser is the web application platform. Did you follow the link to Quake 2? It runs in a browser you can download right now. And while the apps I mentioned do need some more support from the browser platform, that support is actually mostly there in some browsers. The only thing missing is native code speeds.
Are you keeping up with what web browsers are doing lately? They've turned a corner and are burning rubber now that we're increasingly less tied to Microsoft every day, and Microsoft's efforts to hold things back are no longer working. I'm hardly even hypothesizing, you can get demos of many of those things right now.
I'm not sure I get the point of this. Two of SQLite's strong points (among many others) are:
(a) Short dependency list
(b) Platform independent filesystem storage
...doesn't this negate both of those for what seems like little gain?
BerkeleyDB doesn't have many dependencies, uses only plain files as storage, and is itself highly-portable open source (with a quasi-copyleft condition).
So you don't give up much to get the claimed benefits of this combination -- only the ability to use public-domain SQLite in proprietary distributed software.
I've used Berkeley DB for hundreds of concurrent queries -- it's quite good in those situations; SQLite is not, because it just isn't designed for them.
Combining the two -- the easy interface that SQLite provides with the concurrent performance that BDB has -- definitely provides value.
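The single-writer behavior that makes plain SQLite struggle under heavy concurrency is easy to demonstrate (a minimal Python sketch using only the standard library; the table and file names are made up):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file.  isolation_level=None
# puts the Python driver in autocommit mode so we control
# transactions explicitly.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

# timeout=0: fail immediately instead of retrying for 5 seconds.
other = sqlite3.connect(path, isolation_level=None, timeout=0)

writer.execute("BEGIN IMMEDIATE")  # take the write lock
locked = None
try:
    other.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError as exc:
    locked = str(exc)
writer.execute("COMMIT")
print(locked)  # database is locked
```

Until the first transaction commits, every other writer just blocks or errors out -- which is exactly the situation BDB's finer-grained locking handles better.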
I do. I wrote a tool to help me understand what I could do on various storage tiers running as safely as possible. Here's one of my results from linode:
This is an awesome shootout! Be careful with BDB: its default tuning is geared more towards an embedded environment than a modern web app. That's not to say it can't go fast!
I took a look at your BDB demo and saw that you weren't creating the DBs in an environment, which meant every operation was straight to disk, and also meant that you weren't running with write-ahead logs (which will give you durability). I didn't look too closely to see if the other databases had caching enabled or not (or what their defaults were).
On my macbook air, configuring with caching (and no logging) yielded this from your benchmark:
Air:kvtest jamie$ ./bdb-test
Running test ``test test'' PASS
Running test ``write test'' Ran 284669 operations in 5s (56933 ops/s)
PASS
Of course, page caching makes all the difference :-)
I used this for a major company's site-edit auditing system. (No, they didn't want HTML snapshots of each revision. It had to be a screenshot of the browser...)
It works really well. The only quirk is that it needs a fake X server (for font loading), but Xvfb works just fine for that.
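If xvfb-run is available, the whole dance can be a one-liner (the URL and output filename here are placeholders):

```shell
# xvfb-run starts a throwaway Xvfb server, points DISPLAY at it,
# and tears it down when wkhtmltopdf exits.  -a picks a free display.
xvfb-run -a wkhtmltopdf http://example.com/ snapshot.pdf
```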
I did not know about this project at the time I started with pdfcrowd. But anyway, I just took my existing PDF library and integrated it with WebKit, which was not as hard as one might think.
First of all, I don't know how well wkhtmltopdf works, but there are many, many solutions to the HTML-to-PDF problem, and most of them suck. It's not surprising the creator decided to put together a library from scratch, it's the special sauce for his business.
Also, the "value add" comes from the fact that wkhtmltopdf is a library, and PDFcrowd is an API.
So much for investigative journalism.