Oracle versus PostgreSQL, or possibly MySQL -- maybe. But don't think Oracle is selling snake oil; their reporting tools, for instance, are top-notch and well beyond what the open-source world can provide.
I've worked places where there is a rule that "The Corporate Database is Oracle and Oracle Only". Any mention of a database _at_all_ means someone will ensure that The Corporate Database is used. It doesn't matter if Postgres or SQLite would be a cheaper/better choice.
I was more speaking to how easy it is to underestimate SQLite's flexibility and power and not realize how wide its use case actually is.
Of course, for many use cases Oracle's tools are the best choice, but SQLite can handle quite serious datasets in some use cases. A million records sounds like a lot, but it isn't for SQLite. I can easily imagine crews used to working with Oracle using it for projects far below its ideal capacity when they can buy licenses on a big corporate account.
So to me, the SQLite use case is often where the PRIMARY purpose of a database could be served nearly as well with fopen(), but where it's likely that there will be SECONDARY uses for which having a convenient universal access method would be useful.
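To illustrate that point, here's a sketch using Python's sqlite3 module (the schema and data are invented for the example): the PRIMARY use, appending records, could just as well be done with fopen(), but the SECONDARY ad-hoc query comes for free.

```python
import sqlite3

# Hypothetical example: event records that could have been appended to a
# flat file with fopen(), stored in SQLite instead.
conn = sqlite3.connect(":memory:")  # a real app would use a file path
conn.execute("CREATE TABLE events (ts INTEGER, level TEXT, message TEXT)")
rows = [
    (1000, "INFO", "started"),
    (1005, "ERROR", "disk full"),
    (1010, "INFO", "retrying"),
]
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# The SECONDARY use: an ad-hoc query that would require custom parsing
# code if the data lived in a plain text file.
errors = conn.execute(
    "SELECT COUNT(*) FROM events WHERE level = 'ERROR'"
).fetchone()[0]
print(errors)  # 1
```

The point is that the write path is no harder than fprintf(), while every future diagnostic question becomes a one-line query instead of a parsing script.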
I've even considered writing a virtual-table extension for SQLite to facilitate diagnostics and reporting on a complex ad-hoc file format I designed, so I could get those benefits with a custom data structure; unfortunately, doing that in SQLite seems somewhat complex.
Well, that comment resonates with me. I feel the same but can't support it well with arguments. Care to share what you would use instead? For example, what would you use in place of fopen(), as the OP describes?
Of course, this was back in 2003 or so, and that project never turned into anything more than a fancy wrapper over SQLite and Lua.
Still, SQLite == awesome, when it is appropriate.
Loading them back from the tools/game took a long time, so they were put into a small SQLite db -- with a schema only a little more complex than "CREATE TABLE files (key, value)".
This sped us up significantly. We may use this idea for more of the small files lying around.
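A minimal sketch of that kind of blob store, assuming Python's sqlite3 module (the table layout and key names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # on disk this would be a single .db file
conn.execute("CREATE TABLE files (key TEXT PRIMARY KEY, value BLOB)")

def put(key, data):
    # Upsert: replace the blob if the key already exists.
    conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (key, data))

def get(key):
    row = conn.execute(
        "SELECT value FROM files WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None

put("textures/grass.png", b"\x89PNG...fake bytes")
put("config/levels.txt", b"level1\nlevel2\n")
print(get("config/levels.txt"))
```

One .db file replaces thousands of tiny files, which is often where the speedup comes from: one open/close and sequential I/O instead of per-file filesystem overhead.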
For any small/medium site it's great.
At the time, I attributed it to SQLite, but now I'm wondering if it was Rails or (more likely) my inexperience at optimizing performance of the app at that time.
It was about three years ago, though, so performance in SQLite (and Rails, for that matter) has likely improved a lot since then.
Thanks for the info.
As long as you are the only user, performance should be constant regardless of file size (minus fragmentation issues).
Implementing a B-tree is not easy, but it's no black magic either (I did it for my diploma thesis). Same for a hash join. And these two things are really all you need for reasonable performance on simple selects and joins over tables of almost any size.
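For illustration, a minimal in-memory hash join can be sketched in a few lines of Python (build a hash table on one input, probe it with the other; names and data are invented for the example):

```python
from collections import defaultdict

def hash_join(left, right, left_key, right_key):
    """Minimal hash join: build a hash table on the left input,
    then probe it with each row of the right input."""
    table = defaultdict(list)
    for row in left:                      # build phase
        table[row[left_key]].append(row)
    out = []
    for row in right:                     # probe phase
        for match in table.get(row[right_key], []):
            out.append({**match, **row})  # merge matching rows
    return out

users = [{"uid": 1, "name": "ada"}, {"uid": 2, "name": "bob"}]
orders = [{"uid": 1, "item": "disk"}, {"uid": 1, "item": "ram"}]
print(hash_join(users, orders, "uid", "uid"))
```

A real engine would build on the smaller input and spill to disk when the hash table doesn't fit in memory, but the core idea really is this simple.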
But if your queries get more complex, the query execution plan starts to make a huge difference - and a query optimizer is black magic, as anyone who's wrestled with Oracle's can attest. I doubt SQLite can compete in that area.
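That said, SQLite will at least show you the plan its (much simpler) optimizer picked. A quick sketch using Python's sqlite3 module (schema invented for the example; the exact detail text varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.execute("CREATE INDEX idx_a ON t(a)")

# EXPLAIN QUERY PLAN reveals the chosen access path; here the equality
# predicate on the indexed column should lead to an index search.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT b FROM t WHERE a = 5"
).fetchall()
for row in plan:
    print(row)  # detail text varies by SQLite version
```

Nothing like Oracle's hint system or cost statistics, but enough to check that a query isn't doing a full table scan.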
I've always been on the "sequel" side of pronouncing SQL (hey, it's no worse than "scuzzy" for SCSI), which morphed SQLite into "Sequelite". I never realized how bizarre that sounded (almost more like a material than a database).
This isn't necessarily authoritative or well researched, but it appears that S-Q-L-Lite is correct: http://blog.cleverly.com/permalinks/247.html
Language is fascinating!
I'm Australian and always spell out acronyms when speaking them - saying something like 'sequel' or 'scuzzy' just sounds weird to me.
Consistency. Your enterprise depends on terabytes of hyper-valuable information (Walmart with their sales data); can you guarantee that you won't end up with corruption issues? Or that the next version will work with your system too?
That said, as long as you make less than $20 million/year, Oracle isn't likely to be the best solution.
Just a quick question: which engine do you use for your MySQL system?
When the ERP application you are installing has in its specs
- Supported database: Oracle
I am sure a crack Postgres guy can bash and file that sucker to work. But the vendor will not support the result, and some places, some situations, that really counts for a great deal.
Oracle isn't selling databases. They're selling database support. Similar to how IBM operates - they make good stuff, but you're really not paying for the hardware. If you were, they'd be outrageously expensive compared to something built more simply - see Backblaze, for an example: http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-h... .
(I too wonder what technical marvels it can perform which justify the cost).
But occasionally you gotta spend money to make money.
Instead of trying to answer questions like "what should our file format look like?", it seems more interesting and valuable to first ask "what storage is most appropriate?" The answer may be REST, a database, a file, /dev/null, or an abstraction over one or more of these. You may then find that questions about file format or structure simply disappear: it is the role of the chosen storage mechanism to efficiently store and recover the data. Use the hard work invested by the team that built that solution, and apply your own hard work to solving your own problems.
With cloud and mobile being added to the set of common target platforms (however ill-defined), I hear/read people asking "how do I read/write a file?" increasingly often, when on those platforms (and many others) the concept of a file (at the application layer) may not be useful at all...
Windows CIFS shares are fine and always have been AFAIK.
The choice of fopen() and Oracle was meant to convey simplicity: SQLite solves problems on the simple end of the difficulty spectrum -- problems you would normally solve with fopen(). It is not for problems at the complex end, like Oracle. It's a simplified expression; reading it literally is foolish.