SQLite works great as the database engine for most low to medium traffic websites (which is to say, most websites). The amount of web traffic that SQLite can handle depends on how heavily the website uses its database.
I am curious to know how many people here solely use SQLite to power the back-end of their web application(s), especially when the page states, "SQLite does not compete with client/server databases."
(Or is the page referring to content-management-system-type websites?)
The main limitation here is the number of clients. If you have a RESTful API, some ETL loaders, and several webservers running on EC2 all talking to your database, then you need a real client-server architecture.
However, if you're running your website on Apache, on a single webserver, then there's really only ONE client for your database, in which case SQLite works great, even if there's a heck of a lot of load.
SQLite is fundamentally a C library talking to a binary file format, so it's orders of magnitude faster than opening a network connection to a database server and then issuing SQL over it.
I've run medium-sized websites on an MVC framework talking to SQLite MANY times, and it works great.
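A sketch of what that single-client setup looks like in practice (the filename and schema here are invented for illustration; WAL is SQLite's write-ahead-log journal mode, which lets readers proceed while a write is in progress):

```python
import sqlite3

# One webserver, one database file, no network hop -- the "client" is
# just this process. Filename and table are hypothetical.
conn = sqlite3.connect("site.db")
conn.execute("PRAGMA journal_mode=WAL")  # readers don't block during writes
conn.execute("CREATE TABLE IF NOT EXISTS pages (slug TEXT PRIMARY KEY, body TEXT)")
conn.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", ("about", "Hello from SQLite"))
conn.commit()

# Serving a request is a local library call into C, not a round-trip
# to a separate database server process.
row = conn.execute("SELECT body FROM pages WHERE slug = ?", ("about",)).fetchone()
```

Every query is an in-process function call, which is where the "orders of magnitude" claim above comes from.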
At work, my department has a contentious relationship with IT, so we are often forced into unusual setups...
Well, we've tried distributing applications that needed to share a DB backend. We tried an MS Access backend first, but it stopped working once about 5 people were using it. Then we migrated to SQLite; it held up well with close to 50 people, but then its over-restrictive locking became a problem. Luckily, by that time we had gotten hold of a Postgres server.
Low to medium traffic websites don't usually have much more than 10 worker threads, so yes, I've used a loosely similar setup, and it worked. It'll depend on how much time is spent on DB access vs. local processing, and how fast the DB is accessed by those servers, so YMMV.
It's worth noting that you can split your data among multiple database files (effectively on-disk sharding) to alleviate contention... in-memory record caching and mostly read scenarios will also reduce contention.
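One way to sketch that on-disk split is SQLite's ATTACH DATABASE, which opens a second file under the same connection, so write-heavy data can live in its own file away from the mostly-read data (filenames and tables here are made up):

```python
import sqlite3

# Mostly-read content in one file, churny session data in another,
# so writers on one file don't contend with readers on the other.
conn = sqlite3.connect("content.db")
conn.execute("ATTACH DATABASE 'sessions.db' AS sessions")
conn.execute("CREATE TABLE IF NOT EXISTS articles (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS sessions.active (token TEXT PRIMARY KEY, user TEXT)")

conn.execute("INSERT OR REPLACE INTO sessions.active VALUES ('abc123', 'alice')")
conn.commit()

# Queries address the attached file through its schema-qualified name.
user = conn.execute("SELECT user FROM sessions.active WHERE token='abc123'").fetchone()[0]
```

Each attached file has its own lock, which is what makes this work as poor-man's sharding.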
There's a LOT you can do with SQLite... simple read caching and reduced writes alone will buy you a lot of performance. For a highly interactive web application, I wouldn't expect a single system backed by SQLite to handle more than 50 users, as you mention... with an SSD, you may get a few more.
If you aren't having to do many writes, it will fly for thousands of users... when you have to do a lot of writes, it will slow to a crawl. I've seen distributed Access-based database apps handle several hundred simultaneous users before.
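The read-caching idea is simple enough to sketch with the standard library (table, counter, and request counts are invented for illustration):

```python
import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")
conn.execute("INSERT INTO counters VALUES ('home', 0)")

calls = 0  # how many queries actually reach the database

@lru_cache(maxsize=1024)
def get_hits(name):
    # Only a cache miss touches SQLite; repeat reads are served from memory.
    global calls
    calls += 1
    return conn.execute("SELECT hits FROM counters WHERE name=?", (name,)).fetchone()[0]

# Simulate 1000 read-heavy requests for the same page.
for _ in range(1000):
    get_hits("home")
```

A thousand requests, one actual query: this is why read-heavy sites fly on SQLite. (The flip side: any write now has to invalidate the cache, which is where write-heavy apps get painful.)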
Yeah, I use it for fair-traffic CMS sites and even webapps. The trick is exactly the same as any other RDBMS: cache everything as much as possible.
Django makes this really easy (it's the default). It's a shame other projects aren't built on flexible ORMs. I'd love to be able to deploy WordPress and Drupal sites without dicking around creating databases.
How is this any different from any other web application?
This is a pretty good explanation, and I'd also add that SPAs move away from the traditional server-structure URL path mentality, which wasn't very user-friendly and was never really needed. Now URLs are kept mostly for sharing or bookmarking.
i.e. www.domain.com/user/profile/update.html can just be handled in index.html
By "traditional server-structure URL path mentality", do you mean the mapping of URLs directly to static files on the filesystem? If so, I'd argue that's a separate issue from SPA vs. MPA. After all, most MPA frameworks, like Django and Rails, break that equivalence. URLs map to controllers, rather than particular templates.
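A toy sketch of that controller-style routing: a URL pattern maps to a handler function, not to a file on disk. The patterns and handler names are invented here; Django and Rails do essentially this with more machinery.

```python
import re

# A hypothetical controller: it renders a response, it isn't a file
# sitting at /user/42/profile/update on the filesystem.
def update_profile(user_id):
    return f"rendered profile-update form for user {user_id}"

# Route table: compiled URL patterns mapped to controller functions.
ROUTES = [
    (re.compile(r"^/user/(\d+)/profile/update$"), update_profile),
]

def dispatch(path):
    for pattern, handler in ROUTES:
        m = pattern.match(path)
        if m:
            return handler(*m.groups())  # captured segments become arguments
    return "404"

result = dispatch("/user/42/profile/update")
```

The URL is just an addressable name for an action, which is why the file-mapping equivalence broke long before SPAs showed up.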
I meant the mentality, which started with URL static file mapping and continued with web routing. Not all URL path routing is bad, but with SPAs you get more focus on what the customer uses URLs for (i.e. URL paths for sharing a picture, etc.) rather than parsing page functionality out across different paths.
It's my opinion this URL simplicity makes a better user experience, but that's just my opinion.
Traditionally, web applications used sessions to maintain state while the user navigated the interface, with full page reloads for every screen (typically visible as a blank white page as the next screen loads from scratch).
Clicks on page elements that represent some user interaction with the app should usually not trigger a whole-page reload, but instead fire some On-Click action resulting in (maybe) some data being sent to the server and (maybe) some new content being added or updated on the screen without changing the URL of the loaded page.
The client might long-poll or maintain a websocket so that new information from the server can be received without a user action. These are the types of patterns you don't normally see in "any other web app" that differentiate a "Web 2.0" SPA from "old-school Web 1.0" applications.
Push solutions like long-polling and websockets are a separate concern. It's perfectly possible to use those technologies in a web app that's primarily still multi-page, such as a stock ticker on an otherwise static page.
I would draw the distinction in navigation. Are links primarily triggering built-in browser navigation (MPA) or are they handled by long-running in-browser code, backed by asynchronous requests for additional server data (SPA)?
SPA looks more like a mobile app. As you click the links the page doesn't refresh, content just gets replaced dynamically.
A traditional web page honors web standards a bit more, following the classic structure of pages linked with hyperlinks. Want to read about the product? Click a link. Want to refresh the current price? Click the refresh button.
I have been developing with MongoDB for about the past year. (Briefly, I have developed, and continue to support, an ASP.NET MVC C# Web Application that's used by a few thousand people daily. For my personal projects that use MongoDB, I develop with PHP and Python.)
Overall, I enjoy working with MongoDB, because it (generally) maps directly to your object - there is no need for an additional layer (such as an Object Relational Mapper (ORM)).
However, you have to be more careful with your data structure. For example, having sub-arrays in sub-arrays is probably not a good idea.
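To illustrate with plain Python dicts (this is the same structure you'd hand to pymongo's insert_one; the field names are invented):

```python
# A MongoDB document maps directly onto native structures -- no ORM layer.
user = {
    "name": "alice",
    "posts": [  # one level of embedding: easy to query and update
        {
            "title": "hello",
            "comments": [  # an array inside an array: this is the trouble spot
                {"author": "bob", "text": "hi"},
            ],
        },
    ],
}

# Reading deeply nested data is fine in application code...
comment = user["posts"][0]["comments"][0]

# ...but updating "bob's comment on the 'hello' post" server-side means
# addressing a position inside a position, and MongoDB's classic positional
# operator ($) only resolves one array level per update -- which is why
# sub-arrays in sub-arrays are usually a bad idea.
```

Keeping documents one embedding level deep (or promoting comments to their own collection) sidesteps the whole problem.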
I will be happy to share more, so feel free to look up my profile.
I ran a website and operations for a company that did over 1mm a year in revenue. Everything was done off of a single machine that was the webserver, database server and media server. Had 10k customers all of whom were active daily. Postgres and Python/Django and I never saw load averages much over 10%. Unless you have a substantial fraction of a million daily users you probably don't need mongo.
If we don't have a working build by 3pm tomorrow, we're all fired...
This is a bothersome and worrisome comment that probably has been repeated (in some variation) too often. How is this the fault of the developer(s)?
I appreciate that founders need to fulfill their promises, especially when they have the backing of investors, but these types of statements seem to indicate a failure in communication and planning (more than execution).
You make an excellent point. I tend to think of it as "What would you do if you knew you were going to be dead in 6 months" but that tends to skew toward the 'fun' side of things rather than the 'meaningful' side.