"SQLite is not designed to replace Oracle. It is designed to replace fopen()." (sqlite.org)
249 points by ma2rten on July 29, 2011 | 57 comments



Even though SQLite isn't designed to replace Oracle, I'm certain there are plenty of companies that have purchased expensive Oracle licenses for projects where SQLite would have worked perfectly well. I've often used it for web-based custom CMSes of one kind or another where many people would have used MySQL, and the client would never know the difference. I'm sure many people have used Oracle for web-based internal tools like this with less than a million rows of data.


Regardless of how much fun lamenting the assumed technical incompetence of enterprise software development can be, I strongly doubt that this has ever happened. Oracle is very expensive, and SQLite is very limited outside its clearly delineated use cases.

Oracle versus PostgreSQL or possibly MySQL -- maybe. But don't think Oracle sells snake oil; their reporting tools, for instance, are top-notch and way beyond what the open source world can provide.


I disagree with you completely.

I've worked places where there is a rule that "The Corporate Database is Oracle and Oracle Only". Any mention of database _at_all_ will mean someone will ensure that The Corporate Database is used. It doesn't matter if Postgres or SQLite would be a cheaper/better choice.


I'm seeing this too; the choice is company-wide, the licenses are bought company-wide. And yes, it would save money in many departmental apps; no one seems to care.


Believe me, I wasn't trying to have fun at someone's expense. I think factors other than incompetence can account for this, like just simple ignorance.

I was more speaking to how easy it is to underestimate SQLite's flexibility and power and not realize how wide its use case actually is.

Of course for many use cases Oracle's tools are the best choice, but SQLite can handle quite a serious dataset in some use cases. A million records sounds like a lot. I can easily imagine crews used to working with Oracle using it for projects far below its ideal capacity when they can buy licenses on a big corporate account.


I worked with a thirty-million-row dataset for my thesis (after MongoDB famously choked and ate half my data more than once) with SQLite and it didn't bat an eyelid. It was fast as hell, too.


Size isn't the problem with SQLite databases. It's that it's unbearably slow with even fairly trivial joins and most of the other operations where it'd make sense to actually have an SQL database. SQLite is fine for tabular data, but then, so are a bazillion other things. Basically it's the perfect SQL database for the times that you shouldn't be using an SQL database.


I don't quite agree with that. One advantage I see for using SQLite is that you CAN use it as an SQL database for administrative purposes: You can quickly import/export the data from or into another SQL database, you can quickly generate an ad hoc report, etc.

So to me, the SQLite use case is often where the PRIMARY purpose of a database could be served nearly as well with fopen(), but where it's likely that there will be SECONDARY uses for which having a convenient universal access method would be useful.

I've even considered writing an external table plugin for SQLite to facilitate diagnostics and reporting of a complex ad-hoc file format I designed so I could get those benefits with a custom data structure; unfortunately, doing that seems somewhat complex in SQLite.


> Basically it's the perfect SQL database for the times that you shouldn't be using an SQL database.

Well, that comment resonates with me. I feel the same but can't support it well with arguments. Care to share what you would use instead? For example, to replace fopen() with, as the OP describes.


I first used SQLite because I thought (in C++), "Man, instead of writing crazy data structures, I wish I could just use SQL on a spot in memory." SQLite came up in my Google search, and ta-da!

Of course, this was back in 2003 or so, and that project never turned into anything more than a fancy wrapper over SQLite and Lua.

Still, SQLite == awesome, when it is appropriate.
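The "SQL on a spot in memory" idea the parent describes is a one-liner in most bindings today. A minimal Python sketch (the table, index, and data are made up for illustration):

```python
import sqlite3

# An in-memory database instead of hand-rolled lookup structures.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE points (name TEXT, x REAL, y REAL)")
db.execute("CREATE INDEX idx_x ON points (x)")
db.executemany("INSERT INTO points VALUES (?, ?, ?)",
               [("a", 1.0, 2.0), ("b", 5.0, 1.0), ("c", 3.0, 4.0)])

# A range query that would otherwise need a sorted container or manual scan:
nearby = db.execute(
    "SELECT name FROM points WHERE x BETWEEN 2 AND 6 ORDER BY x").fetchall()
```

Everything lives in the process's memory and vanishes when the connection closes, which is exactly the "fancy data structure" use case.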


SQLite is a gem; when we develop web sites for clients, we first try to use SQLite, and if not, we use MySQL or Postgres; we only use MSSQL or Oracle when the customer already has a legacy database. What about you?


We use SQLite exactly for that. At some point we were generating 50,000+ shader (.hlsl) files for 3 different platforms (so many due to various techsets and techniques).

Loading them back from the tools/game took a long time, so they were put in a small SQLite db, with a schema only a little more complex than "CREATE TABLE files (key, value)".

This sped us up significantly. We might use this idea for more and more of the small files lying around.
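A minimal sketch of that key/value-file idea in Python (the schema and shader contents here are made up; the commenter says their real schema was a bit more complex):

```python
import sqlite3

db = sqlite3.connect(":memory:")  # a real tool would use an on-disk file
db.execute("CREATE TABLE files (key TEXT PRIMARY KEY, value BLOB)")

# Illustrative stand-ins for generated shader files:
shaders = {
    "ps_main.hlsl": b"float4 main() : SV_Target { return 1; }",
    "vs_main.hlsl": b"float4 main(float4 p : POSITION) : SV_Position { return p; }",
}
db.executemany("INSERT INTO files VALUES (?, ?)",
               [(k, v) for k, v in shaders.items()])

# One open() plus one indexed lookup replaces thousands of small-file reads:
blob = db.execute("SELECT value FROM files WHERE key = ?",
                  ("ps_main.hlsl",)).fetchone()[0]
```

The win comes from opening one file instead of tens of thousands; the primary-key index makes each lookup cheap.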


I use SQLite exclusively in my projects; I love it. Easy to use, easy to back up (1 file), doesn't rely on a database server running, no security concern with a username/password for the database engine (although make sure your database file isn't web accessible).

For any small/medium site it's great.
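The one-file backup point can even be done online: Python's sqlite3 module exposes SQLite's backup API (Python 3.7+), which copies a live database consistently. A sketch with made-up table names:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(tmp, "site.db"))
src.execute("CREATE TABLE pages (slug TEXT, body TEXT)")
src.execute("INSERT INTO pages VALUES ('home', 'hello')")
src.commit()

# Copy the whole database into another file; no server to coordinate with.
dst = sqlite3.connect(os.path.join(tmp, "site-backup.db"))
src.backup(dst)

rows = dst.execute("SELECT slug, body FROM pages").fetchall()
```

For a database that isn't being written to, simply copying the file works too.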


Agreed. And the database can grow surprisingly large (multi GB with many millions of rows) and it still performs quite well.


This surprises me. Though I've used it with great success for several small projects, my only experience with SQLite on anything remotely large was a Rails app with maybe a million records in the DB. Over time, performance became very slow and switching to MySQL made a world of difference.

At the time, I attributed it to SQLite, but now I'm wondering if it was Rails or (more likely) my inexperience at optimizing performance of the app at that time.

It was about three years ago, though, so performance in SQLite (and Rails, too, for that matter) has likely improved a lot since then.


In SQLite, autocommit is on, so if you weren't explicitly setting up your transactions, you might have been committing after each update. Also, the default disk cache is 2MB, so if you think your DB should have more memory than that, raise it with default_cache_size.
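Both tunings are a couple of lines in Python's sqlite3 module. This sketch uses the per-connection cache_size pragma rather than the persistent default_cache_size mentioned above; the table name and sizes are illustrative:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA cache_size = 10000")  # in pages; raise the page cache
conn.execute("CREATE TABLE items (n INTEGER)")
conn.commit()

# One explicit transaction (one fsync) for the whole batch,
# instead of one implicit commit per INSERT:
with conn:  # commits on success, rolls back on exception
    conn.executemany("INSERT INTO items VALUES (?)",
                     ((i,) for i in range(1000)))

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
```

Batching writes like this is usually the single biggest speedup for insert-heavy SQLite workloads, since each commit forces a disk sync.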


Ah, I wasn't, so that combined with pilif's comments regarding the file being locked for writes would make a lot of sense as to why things were slowing down.

Thanks for the info.


Write operations to a SQLite file lock the whole file. It's possible that your rails app had enough users doing enough writes so that the processes had to constantly wait.

As long as you are the only user, performance should be constant regardless of file size (minus fragmentation issues).
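The whole-file write lock is easy to observe from Python: while one connection holds a write transaction, a second connection's write attempt fails immediately (names below are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "locked.db")

writer = sqlite3.connect(path, isolation_level=None)  # manual transactions
writer.execute("CREATE TABLE t (n INTEGER)")
writer.execute("BEGIN IMMEDIATE")           # take the write lock now
writer.execute("INSERT INTO t VALUES (1)")

other = sqlite3.connect(path, timeout=0)    # fail instead of waiting
try:
    other.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:            # "database is locked"
    blocked = True

writer.execute("COMMIT")                    # release the lock
```

In a real app you'd set a nonzero timeout so waiters retry briefly instead of erroring out, but under sustained concurrent writes the processes still serialize on that one lock.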


It depends a lot on the complexity of queries.

Implementing a B-tree is not easy, but it's no black magic either (I did it for my diploma thesis). Same for a hash join. And these two things are really all you need to get reasonable performance for simple selects and joins on tables of almost any size.

But if your queries get more complex, the query execution plan starts to make a huge difference - and a query optimizer is black magic, as anyone who's wrestled with Oracle's can attest. I doubt SQLite can compete in that area.
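SQLite does at least let you inspect what its (simpler) planner decided, via EXPLAIN QUERY PLAN. A sketch with made-up tables; the exact wording of the plan rows varies between SQLite versions:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# Each returned row describes one step of the plan (a scan or an index search):
plan = db.execute("""
    EXPLAIN QUERY PLAN
    SELECT customers.name, COUNT(*)
    FROM orders JOIN customers ON orders.customer_id = customers.id
    GROUP BY customers.id
""").fetchall()
for row in plan:
    print(row)
```

Checking whether a join uses an index or falls back to a full scan is often all the plan-reading a SQLite app needs.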


Wow, more projects should have a page similar to this one. Upvoted because it’s good to keep in mind when thinking of SQLite and when documenting your own project.


By the use of "an", I learned that I have been pronouncing SQLite wrong. The maintainers clearly prefer the spelling out of SQL in the title.

I've always been on the "sequel" side of pronouncing SQL (hey, it's no worse than "scuzzy" for SCSI), which morphed SQLite into "Sequelite". I never realized how bizarre that sounded (almost more like a material than a database).


Just in case anyone cares (because I did the research on this a few weeks ago, including watching its creator say it numerous times): it's S-Q-L-Lite. Yeah, that seems like a duplication of the L, but there you have it.


I've always pronounced it that way in my head, which has led to a few typos searching for "SQLlite" or coding up a "SQLliteConnection". If I'd just learned to pronounce it wrong, I would have been typing it right all along.


I and most people I know call it "sequelite". That works for me...


Esquelite sounds pretty good too...


So does that become S-Q-L-ite or S-Q-Lite, you think?


Good point. I was going for SQL-Lite. But having been wrong before (and hearing enough people say "lie-nux" [and to be clear, I am not mocking; we read things and form our own pronunciation because we often don't hear them said]), I decided to google it.

This isn't necessarily authoritative or well researched, but it appears that S-Q-L-Lite is correct: http://blog.cleverly.com/permalinks/247.html


I say "S-Q-Lite".


As an aside, to follow Linus' pronunciation, it is Lee-nucks, from the Finnish way of pronouncing "Linus." Americans are used to a different pronunciation of the same name and thus generally adopted a different way of saying "Linux."

Language is fascinating!


Linus is a native speaker of Swedish, not Finnish.


The pronunciation of 'Linus' is pretty much the same in Swedish and Finnish, though.


Surely the U differs. To Sweden-Swedish ears at least, Linus in Finnish sounds like Linos.


The Internets confirms this. I didn't know that; I learned something today!


I say S-Q-L-ite.


From what I can understand, that's actually a difference between Commonwealth and American English.

I'm Australian and always spell out acronyms when speaking them - saying something like 'sequel' or 'scuzzy' just sounds weird to me.


I suppose it depends on your definition of "simplicity". My go-to simplified data storage container is Berkeley DB. (Little did I realize Oracle recently shipped a Berkeley DB with SQLite inside it... heh)


Can anyone say when Oracle is justified? I have yet to encounter a problem that Postgres or MySQL didn't support, but alas, I have not worked on everything.


Support. You can probably find some solution to your issue with Postgres online. Oracle can send an Oracle-certified engineer who can solve your problem.

Consistency. Your enterprise depends on terabytes of hyper-valuable information (Walmart with their sales data); can you guarantee that you won't end up with corruption issues? Or that the next version will work with your system too?

That said as long as you make less than 20 mil/year, Oracle isn't likely to be the best solution.

Just a quick question: which engine do you use for your MySQL system?


InnoDB. I understand your support viewpoint, but both of those DBs are open source, thereby leaving support open. From what I understand, support comes into play when you can sue the other party for not providing the service expected, but there is commercial support for MySQL and Postgres, so that still leaves me in the dark. And I have never experienced corruption issues with either, though my DB experience is limited and I don't claim to be an expert.


I had experienced db corruption with MySQL years ago, when the disk became full. To be fair, I think it was MyISAM.


Can anyone say when Oracle is justified?

When the ERP application you are installing has in its specs:

- Supported database: Oracle

I am sure a crack Postgres guy can bash and file that sucker into working. But the vendor will not support the result, and in some places, in some situations, that really counts for a great deal.


Mostly echoing tomjen3 in this, but:

Oracle isn't selling databases. They're selling database support. Similar to how IBM operates - they make good stuff, but you're really not paying for the hardware. If you were, they'd be outrageously expensive compared to something built more simply - see Backblaze, for an example: http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-h... .


When you need "nobody ever got fired for buying IBM" levels of nontechnical legitimacy?

(I too wonder what technical marvels it can perform which justify the cost).


When you need multiple spatially distributed active/active write nodes (multiple masters...) you pretty much need Oracle RAC. When confronted with the cost, you end up designing something where writes are pushed to a caching layer, which deals with an active/passive setup, with enough buffer to switch passive to active if something goes wrong.


DB2 and Teradata boast comparable degrees of horizontal, multi-site scalability.


And DB2 costs pretty much the same as Oracle... :p


Less, usually, depending on the platform. IBM mostly uses DB2 to sell its fancier hardware.

But occasionally you gotta spend money to make money.


I love the example that this provides with respect to the conflation of "Storage" with "FileSystem". While data may indeed be stored on a filesystem somewhere, the habitual thinking that appends "to a file" to the phrase "I need to save these data" seems unintentionally limiting to me.

Instead of trying to answer questions like "what should our file format look like?", it seems more interesting and valuable to first ask "what storage is most appropriate?" The answer may be REST, a database, a file, /dev/null, or any abstraction over one or more of these. You may then find that questions about file format or structure simply disappear; it is the role of the chosen storage mechanism to efficiently store and recover the data. Use the hard work that the team that built that solution invested, and apply your own hard work to solving your own problems.

With cloud and mobile being added to the set of common target platforms (however ill-defined), I hear/read people asking "how do I read/write a file" increasingly often, when on those platforms (and many others) the concept of a file (at the application layer) may not be useful at all...


Can anyone provide more info when the site says "...the file locking logic of many network filesystems implementation contains bugs (on both Unix and Windows)"?


They generally relate to NFS - e.g., Linux fcntl() used to lock NFS files only locally. http://en.wikipedia.org/wiki/File_locking#Problems

Windows CIFS shares are fine and always have been AFAIK.


Laughed a lot! :D


... because the only use for file I/O in applications is structured tables of data? What about audio/video, documents, graphs, log-structured data, object caches, unstructured flat files, etc? Don't get me wrong, SQLite is awesome, but this is a classic example of the relational database world's pervasive attitude that they are the superior solution for any data storage problem.


Wow. Not only did you manage to completely miss the point of that statement, but you cast it as a hasty strawman to boot. I'm quite impressed.

The choice of fopen() and Oracle was meant to convey simplicity: SQLite solves problems on the simple end of the difficulty spectrum -- problems you would normally solve with fopen(). It is not for problems at the complex end, like Oracle. It's a simplified expression; reading it literally is foolish.


Damn, you are so right bro.


replace some instances of fopen(), not every instance of fopen()


Acorn (an image editor) uses SQLite as its file format.


It's a deliberately simplified view of the problem.



