Why Use Postgres (craigkerstiens.com)
258 points by craigkerstiens 724 days ago | comments


famousactress 724 days ago | link

Left out the fact that schema-changing operations are transactional (a big win over MySQL).

[Edit: In migrations, specifically. Also! No mention of the MySQL dual-license JDBC driver malarkey... Not that it's a success of Postgres's as much as it's a failure of MySQL's...]

I'd never heard of window functions (not an Oracle user, and I've been on MySQL for a few years... left Postgres for replication and haven't made it back quite yet), but I desperately want them. The idea of effectively aggregating over groups within a GROUP BY (which is how I understand these window doohickeys) is something I run up against a few times a year and get really frustrated by.
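That is essentially what they do. A minimal sketch using Python's bundled sqlite3 (SQLite 3.25+ supports window functions too; table and data are invented): each row keeps its identity while also seeing its partition's aggregate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('east', 100), ('east', 300), ('west', 200), ('west', 600);
""")

# A window function aggregates over a partition without collapsing rows:
# every row still comes back, annotated with its group's total and rank.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total,
           RANK()      OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()

for r in rows:
    print(r)
# ('east', 300, 400, 1)
# ('east', 100, 400, 2)
# ('west', 600, 800, 1)
# ('west', 200, 800, 2)
```

A plain GROUP BY would have collapsed this to one row per region; the window version keeps all four rows.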

I think the only killer feature I'm left wanting from some RDBMS at this point is managed-sequences... The idea that I have a related list of things that I want to keep a sequence column for, and would like the database to understand the concept and help me maintain it.

-----

davyjones 724 days ago | link

> I'd never heard of Window Functions

Go ahead and give them a shot. Once you get used to them, you cannot live without them.

Another killer feature in my opinion is the recursive query: http://www.postgresql.org/docs/current/static/queries-with.h...

-----

jaggederest 724 days ago | link

And, to go along with the recursive queries, there's GEQO, the genetic query optimizer, for when your queries join so many tables that even planning them exhaustively becomes intractable.

-----

jiggy2011 723 days ago | link

I wish I had known of these sooner; with MySQL I was jumping through all kinds of hoops to organize hierarchical data in my DB without generating insane numbers of queries.
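For illustration, here's a recursive CTE fetching a whole subtree in one query, sketched with sqlite3 and an invented category table (SQLite supports WITH RECURSIVE as well):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO category VALUES
        (1, NULL, 'root'),
        (2, 1,    'books'),
        (3, 2,    'fiction'),
        (4, 2,    'nonfiction'),
        (5, 1,    'music');
""")

# One query walks the whole subtree under 'books':
# no loop of one-query-per-level from the application.
rows = conn.execute("""
    WITH RECURSIVE subtree(id, name, depth) AS (
        SELECT id, name, 0 FROM category WHERE id = 2
        UNION ALL
        SELECT c.id, c.name, s.depth + 1
        FROM category c JOIN subtree s ON c.parent_id = s.id
    )
    SELECT name, depth FROM subtree ORDER BY depth, name
""").fetchall()

print(rows)  # [('books', 0), ('fiction', 1), ('nonfiction', 1)]
```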

-----

empthought 724 days ago | link

What's the trouble with PostgreSQL's sequence support? (http://www.postgresql.org/docs/9.1/interactive/sql-createseq...)

-----

famousactress 724 days ago | link

You got me really excited, but I don't think this is what I want. Imagine a table "Chapter" with a foreign key to "Book". It's not uncommon to have a 'sequence' or 'rank' column in the Chapter table defining its order within a book. Sequences shouldn't overlap (accomplishable with a unique constraint on sequence and book_id), but they also shouldn't have gaps, and it'd be nice if inserting a Chapter at a sequence had predictable behavior (like incrementing all chapters in that book with sequence >= its own).
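That insert-and-shift behavior can be approximated in application SQL. A sketch with sqlite3 and a hypothetical schema: shift everything at or past the target position up by one, then insert, all in one transaction. In Postgres you could also declare UNIQUE (book_id, seq) DEFERRABLE INITIALLY DEFERRED so the shift doesn't trip the constraint mid-update; this sketch omits the constraint because SQLite checks it per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chapter (
        book_id INTEGER NOT NULL,
        seq     INTEGER NOT NULL,
        title   TEXT
    )
""")

def insert_chapter(conn, book_id, seq, title):
    """Insert a chapter at `seq`, shifting later chapters up by one."""
    with conn:  # one transaction: shift and insert together, or not at all
        conn.execute(
            "UPDATE chapter SET seq = seq + 1 WHERE book_id = ? AND seq >= ?",
            (book_id, seq),
        )
        conn.execute(
            "INSERT INTO chapter (book_id, seq, title) VALUES (?, ?, ?)",
            (book_id, seq, title),
        )

insert_chapter(conn, 1, 1, "Intro")
insert_chapter(conn, 1, 2, "Middle")
insert_chapter(conn, 1, 2, "Interlude")  # shifts "Middle" to seq 3
insert_chapter(conn, 1, 1, "Preface")    # shifts everything up

rows = conn.execute(
    "SELECT seq, title FROM chapter WHERE book_id = 1 ORDER BY seq"
).fetchall()
print(rows)  # [(1, 'Preface'), (2, 'Intro'), (3, 'Interlude'), (4, 'Middle')]
```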

I find it a pretty typical and profound source of frustration, and since the whole idea of an RDBMS is to model relationships, and relationships so often have an integral sequence/rank, it's always seemed to me like something that ought to have been considered 'in scope' during the design of SQL.

-----

empthought 724 days ago | link

OK, that's definitely a matter for a custom trigger on the table.

Another option would be to keep the chapter order in an array column on the Book table, since arguably the arrangement of chapters belongs to the book rather than the chapter.

PostgreSQL array documentation is here: http://www.postgresql.org/docs/9.1/static/arrays.html
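SQLite has no array type, so as a stand-in this sketch keeps the chapter order as a JSON-encoded list on the book row; in Postgres it would be a real int[] column. All names are invented.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE book    (id INTEGER PRIMARY KEY, chapter_order TEXT);
    CREATE TABLE chapter (id INTEGER PRIMARY KEY, book_id INTEGER, title TEXT);
    INSERT INTO chapter VALUES (10, 1, 'Middle'), (11, 1, 'Intro'), (12, 1, 'End');
    INSERT INTO book VALUES (1, '[11, 10, 12]');
""")

# Ordering lives on the book: reordering is one UPDATE of one row,
# with no renumbering of chapter rows at all.
order = json.loads(conn.execute(
    "SELECT chapter_order FROM book WHERE id = 1").fetchone()[0])
titles = {cid: t for cid, t in conn.execute(
    "SELECT id, title FROM chapter WHERE book_id = 1")}
print([titles[cid] for cid in order])  # ['Intro', 'Middle', 'End']
```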

-----

famousactress 724 days ago | link

> arguably the arrangement of chapters belongs to the book rather than the chapter

That's a really excellent point, and array columns look like a really cool solution! I suppose the only bummer is that it introduces a way to get things out of sync (I could delete a Chapter row, but forget to remove it from the array in Book), but in general this is a much better approach than the status-quo solution.

-----

empthought 724 days ago | link

I think you should be able to write a delete rule on Chapter that will ensure that no array in Book contains that chapter, whether through cascading the delete or through raising an error.

-----

ams6110 724 days ago | link

If you're ever thinking of writing a trigger for any purpose, think again. And again. They are used improperly far more often than not. Things should not happen as "magic" side effects in a database, which is what triggers do. Generally better to be explicit with code in a function or stored procedure.

-----

empthought 724 days ago | link

In general I disagree, but in this specific case you are right that a procedural interface to manipulating books/chapters is preferable.

Essentially if your API is single-table DML, then triggers are preferable. If multiple tables need to be manipulated at the same time to complete the full operation, then there should be a procedural API and triggers are likely unnecessary.
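A sketch of the single-table-DML case (hypothetical tables, sqlite3): callers only ever issue DML against one table, and the trigger guarantees the side effect no matter which code path issued the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE account_log (account_id INTEGER, old_balance INTEGER,
                              new_balance INTEGER);

    -- Callers only ever touch `account`; the audit row appears as a
    -- guaranteed side effect of the single-table UPDATE.
    CREATE TRIGGER account_audit AFTER UPDATE OF balance ON account
    BEGIN
        INSERT INTO account_log VALUES (OLD.id, OLD.balance, NEW.balance);
    END;

    INSERT INTO account VALUES (1, 100);
    UPDATE account SET balance = 150 WHERE id = 1;
""")

log = conn.execute("SELECT * FROM account_log").fetchall()
print(log)  # [(1, 100, 150)]
```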

-----

imperialWicket 724 days ago | link

You should investigate exclusion constraints. Another scenario where they come in handy is a calendar interface where you need to ensure that no two entries for the same record overlap in time.

-----

vog 724 days ago | link

Indeed, exclusion constraints are really neat! FTR, here's their description in the PostgreSQL manual:

http://www.postgresql.org/docs/9.1/static/sql-createtable.ht...

-----

famousactress 724 days ago | link

Oh wow. These are pretty wacky/interesting. Also, sign of a well designed API: I think I gleaned more understanding from reading an actual EXCLUDE constraint definition than I did from reading the description of what it does in the docs.

-----

fusiongyro 723 days ago | link

The problem is that what you want can't be had within an MVCC framework. To get what you want, you really need global, thread-safe mutable data. MVCC instead gives you multiple parallel universes that spring into existence and then disappear, possibly making indelible changes on the next parallel universes to be created in the process.

It boils down to two problems. You can't be sure you're the only transaction in progress, and you also can't "take back" a sequence number. If you could guarantee you were the only transaction, you could just use COUNT. But since you can't, you could have two transactions in progress that both get the same COUNT result and you wind up with two "Chapter 5"s. You can put a UNIQUE constraint on that table, and then one of your transactions fails that constraint. That way you retain integrity and you get (at least for a while) sequential numbering, but there is no in-database way to ensure that a sequence of numbers has no gaps. This is because to do so would be expensive but also not really relate to data integrity.
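The race collapses to this (a single-connection sketch with sqlite3; in reality the two inserts would come from two concurrent transactions that both computed the same COUNT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chapter (
        book_id INTEGER NOT NULL,
        seq     INTEGER NOT NULL,
        title   TEXT,
        UNIQUE (book_id, seq)
    )
""")
conn.execute("INSERT INTO chapter VALUES (1, 5, 'first writer wins')")

# A second writer who computed the same "next" number loses:
# the constraint preserves integrity, but gap-free numbering is
# left to the application.
try:
    conn.execute("INSERT INTO chapter VALUES (1, 5, 'second writer')")
    outcome = "inserted"
except sqlite3.IntegrityError:
    outcome = "rejected"
print(outcome)  # rejected
```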

The purpose of PostgreSQL sequences is to reliably generate surrogate keys. That means every transaction in progress gets a separate ID from that sequence. They all have to be unique because they all might commit. If they rollback instead, it doesn't matter; those become gaps, but nobody is failing the UNIQUE constraint.

What are you going to do to seal the gap when you have two transactions in progress on the same sequence and one of them rolls back? Are you going to pause the other transaction and propagate some kind of decrement to the other transactions? What if I inserted that ID into another table or called a globally-visible stored procedure (sent an email, for example)? I'm not certain it would even be possible, but even if it were, it would clearly be extremely expensive, and therefore, probably not a feature you want on by default.

So I hope that explains why this is not "in scope." :)

-----

tzs 724 days ago | link

> I think the only killer feature I'm left wanting from some RDBMS at this point is managed-sequences...

I would like some RDBMS to give me shareable transactions. What I mean by that is something like this:

1. When I begin a transaction, I can assign it a name.

2. If multiple connections to the database are in transactions with the same name, they can see the changes made by the others, as if none of them were in a transaction. Connections not party to the shared transaction do not see the changes until all of the connections in the shared transaction commit.

3. The connections in the shared transaction can do unnamed transactions, which isolate them from the others within the shared transaction.

The idea here is that sometimes I have some operation that I would like to do atomically, but that involves multiple processes acting on the database. Process A wants to make some changes, then pass responsibility on to process B (which may be on a separate machine) to make more changes, and then process C finishes up, and if any of them encounter an error I want to rollback to the state before A started.

-----

famousactress 724 days ago | link

Interesting. Without knowing what kind of system you're talking about, I'm skeptical that I'd use this. I certainly have distributed problems like this, where a logical concept is broken into a number of steps, and while I want to distribute the steps, their effects on the entire system shouldn't be realized until all of them are complete.

I generally wouldn't defer that responsibility to the database though. I'd more often handle this as multiple database transactions that advance a state machine, at the end of which consequence-inducing-events are fired off.

The idea that process A tickles the database into some state expected to be understood (and only seen by) process B seems like too flimsy a contract to me.

-----

_sh 724 days ago | link

I get your point, but there is value in delegating to the database--how do I roll back an external state machine?

-----

bad_user 723 days ago | link

See the command pattern: http://en.wikipedia.org/wiki/Command_pattern

Basically you encapsulate whatever you need to do with the database in objects (logic + parameters). And you describe for these objects not only the logic for updating the database (forward), but also for reverting the database to its previous state (rollback).
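A bare-bones sketch of that pattern in Python (all names invented): each command knows how to apply its change and how to revert it, and a failed workflow unwinds its history in reverse.

```python
class UpdateBalance:
    """A command records enough state both to apply and to undo its change."""

    def __init__(self, accounts, account_id, delta):
        self.accounts = accounts
        self.account_id = account_id
        self.delta = delta

    def execute(self):
        self.accounts[self.account_id] += self.delta

    def rollback(self):
        self.accounts[self.account_id] -= self.delta


accounts = {"alice": 100}
history = []

cmd = UpdateBalance(accounts, "alice", 50)
cmd.execute()
history.append(cmd)
assert accounts["alice"] == 150

# A later step fails: unwind every completed command in reverse order.
for done in reversed(history):
    done.rollback()
print(accounts["alice"])  # 100
```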

-----

_sh 721 days ago | link

Yep, I know the command pattern--it requires knowledge of all state up front and breaks down in the presence of distributed state. It makes statements like 'update all records where value x is y' difficult. You need to gather all such records first, possibly guarding their current state with a timestamp/vector clock (we are talking distributed systems after all), make the change encapsulating the current state in the command pattern and commit. If you have to extend this model to multiple queries, you'll be rolling your own transaction protocol lest you expose partial state to other queries in transit.

Or you could delegate all this to the database, which already has its own transaction protocol.

-----

kanak 724 days ago | link

At an initial glance, I think you're describing nested transactions.

-----

j_s 723 days ago | link

I believe this is implemented application-side for SQL Server by the Microsoft Distributed Transaction Coordinator, which allows enlisting in existing transactions.

Some explanation of how it works: http://stackoverflow.com/questions/494550/how-does-transacti...

Usage to 'forget' automated test database changes: http://amnesia.codeplex.com

-----

andyzweb 724 days ago | link

Perhaps prepared transactions might have some of the functionality you are looking for: http://www.postgresql.org/docs/9.1/static/sql-prepare-transa...

-----

vyrotek 724 days ago | link

> I think the only killer feature I'm left wanting from some RDBMS at this point is managed-sequences...

Excuse me if I didn't completely understand what you wanted, but it seemed similar to SQL Server's sequences: http://msdn.microsoft.com/en-us/library/ff878091.aspx

-----

famousactress 724 days ago | link

Thanks, but nope... see my comment to @empthought below, who pointed at the same feature in Postgres. I was sloppy in using the term 'sequence', which apparently already has a specific meaning in the RDBMS world.

-----

gouranga 724 days ago | link

I really disagree with this. The more features you use from your DBMS vendor, the more you shoot yourself in the following departments:

- scalability. Logic and processing in the server is bad, as it means you can only scale up, not out. Scaling up is damn expensive. When you need that 64-core, 96GB machine to run it all on, with 6 replicas, you'll see what I mean.

- complexity. The DBMS is a black box which sucks up tight coupling almost instantly. Coupling is bad for maintenance and scalability. SQL always ends up with heavy coupling.

- lock in. Those features only work with postgres so you're stuck with it forever or induce a lot of cost to move away when a better solution comes along.

- schema. Maintaining schema and dependencies is hell on all but a small system.

These facts come from 20 years of using databases.

Stuff that will save you: Use your DBMS as a simple flat store akin to a document store. Use a CQRS based architecture and encapsulate ALL of your logic in code. If you have to go at least partially relational for god's sake use an ORM that supports several databases well. Make a provider independent app level backup/restore facility such as the one with JIRA or TeamCity. NEVER hire a DBA - they exist only to create walled gardens to protect their existence.

My current gig has cocked up in all these areas and it's costing them over 40% of their turnover to keep it alive.

Happy databasing :)

-----

tensor 724 days ago | link

Throwing out decades of work on storing data just because it's a hard problem won't help you solve the problem. All you are advocating is that instead of using an existing well tested solution, you should write your own and deal with all the assorted costs of redoing that work, and probably poorly.

Similarly, using an ORM doesn't help you. All that does is hide the details from you. It still uses SQL underneath, but limits what you can do with it.

Let's talk schemas. You have one whether your database understands it or not. Trying to pretend you don't have one doesn't make managing your schema any easier. It doesn't make building indices on fields any easier.

These are hard problems, and pretending they don't exist by rolling your own implementation or hiding them under an ORM won't make them go away.

-----

gouranga 724 days ago | link

Decades of work is pretty wrong to be honest. They approach the wrong end of the problem. They never consider efferent coupling between database objects which is the absolute killer in anything but simple projects regardless of what is promoted or discussed. SQL makes that an order of magnitude worse.

It's also not about it being a hard problem (which it incidentally is but that's another story). It's about controlling the scale of the problem as time goes on and sufficiently abstracting it without adding complexity. That's not hiding it - that's making it a non issue. Permanently.

I am only advocating using a minimal subset of the functionality to aid migration in the future between engines, platforms, versions and storage paradigms.

An ORM (in our case NHibernate) is a clean abstraction over the top which allows us to be completely mobile between vendors and versions. There is no NIH in it.

The world changes pretty quick. Those who adapt quickly stay around.

-----

bunderbunder 724 days ago | link

An ORM is perhaps a clean abstraction. But it is also a very simplistic abstraction; it generally doesn't provide the greatest facilities for accessing all sorts of useful features that the DBMS provides.

An ORM does not replace proper database design. And letting the ORM do all your work for you w/r/t mobility means accepting that you will not have good access to anything but the least common denominator among databases, which is approximately the SQL-92 standard.

Another choice, which doesn't involve cutting off one's nose to spite one's face, is to couple the ORM to a public interface which is defined in terms of a specific set of views and stored procedures. Then you can very easily write schema which conforms to this interface whenever you need to switch to a new DBMS. The new schema is then free to take advantage of whatever features that DBMS provides in order to achieve good performance.

And the potential performance gains can be huge. There are plenty of examples out there of replacing 1-hour basic SQL + in-app processing procedures with 100ms common table expressions and the like. If you've got someone who knows what they're doing in charge of it, the latter solution will often take quite a bit less time to write and debug, too.

-----

ak39 723 days ago | link

> If you've got someone who knows what they're doing in charge of it, the latter solution will often take quite a bit less time to write and debug, too.

Ah, therein lies the rub. Spot on.

My observations:

The last decade has seen a gradual weaning off of reliance on the database stack for processing any form of "logic". This has been an evolutionary disaster. When companies like Sun pushed out application platforms such as Java, Microsoft followed suit to remain in the game. That's where the action was and that's where their revenues came from for the good part of 1998-2008 (still does). As a result, the newcomer programmers find it unfathomable to place any amount of processing in the hands of RDBMSs. These newlings choke at the idea of stored procedures or triggers, citing that these constructs are a step in the WRONG direction. Set-based SQL operations remain anathema to a large majority of the new programming stock.

This is a pity because the side-effect of this attention diversion has been poorer data modeling skills and gross underutilisation of new and often improved RDBMS concepts & tools.

In the end, this new school of denormalisers and NoSQLers will have to contend with the one and only critical challenge that RDBMSs set out to fix: data correctness.

Performance & scalability mean shyte if you can't guarantee correctness. Bring to the table ACID transactions and MVCC with your new concepts of scalability and performance - then we talk ... kinda thing.

AK

-----

dansingerman 723 days ago | link

"Performance & scalability mean shyte if you can't guarantee correctness" - well that depends...

I take it you are aware of Brewer's CAP theorem? (http://www.julianbrowne.com/article/viewer/brewers-cap-theor...) If not, you should be.

-----

ak39 723 days ago | link

Quote from link: "That you can only guarantee two of Consistency, Availability and Partition Tolerance is real and evidenced by the most successful websites on the planet. If it works for them I see no reason why the same trade-offs shouldn't be considered in everyday design in corporate environments."

Whoah. This is a type of sociopathic reasoning applied to data modeling. The premise is based on accepting compromise knowing fully the consequences of data corruption and yet failing to address the actual rider: data shall not be left in an inconsistent state. An argument turned on its head. Much like asking the question “is it ethical to be deliberately inaccurate on your IRS returns” is answered with “this is not a question about ethics”. Is it not a gigantic assumption to claim:

1. The compromise has “worked” for others (“most successful websites on the planet” [sic]) without providing any real evidence supporting this claim? What does “worked” mean? Is Twitter “ok” with providing suggestions for Followers which contain people already in the user’s follow list? What were the consequences of data inconsistencies in matters related to financial transactions? Was every requirement able to simulate a catch-all “place a back-order” solution? These are real questions that have to be answered before evangelising the compromise.

2. If it works for others (meaning: if you are willing to accept compromises about data consistency) this compromise ought to be brought to your garden-variety corporate environment? Gasp! As it is, volumes or scalability issues aside, corporate environments suffer from data inconsistencies, poor or no transactional control, ineffective referential integrity etc. There’s very little positive impact of encouraging this habit as an overt practise under the false generalised promise of “you will eventually need to scale to millions”.

They can keep their CAP theorem or BASE principles for the likes of social toys like Twitter and Facebook. When credit card companies and banks start accepting compromises on their transactions, I’ll start paying due attention.

-----

fdr 723 days ago | link

It's seldom necessary and often impractical to have full-blown shared-everything everywhere. There is a yawning gap between simple key-value stores and the relatively rich constraints and transaction processing SQL standardizes, a gap that probably doesn't all need to be crossed, and I think it will be addressed with more programming (rich constraints and data models can exist over a smaller amount of data without trouble). Consider Cassandra column families vs. simple binary-string-to-binary-string systems, and that's already a significant advancement. I think that's the direction growth is going to take.

Most of these large -- and particularly, global -- systems you speak of are eventually consistent also; they use the relational database as a potent backend member of a distributed system, able to check and deal with a great many constraint problems that can be localized.

-----

gouranga 723 days ago | link

It's even harder to VERIFY correctness inside the DBMS while keeping it manageable and debuggable. That's why we pull it out into code behind an ORM. We can run a fully automated test suite in 30 minutes which will verify that the application will function as expected from the domain model.

Believe me, we still need ACID/MVCC and by no means do I want to break the consistency part of CAP theorem if possible.

Also set operations break down under the following conditions:

1. When you have to aggregate a LOT of data together, they don't scale. If they scaled well, we wouldn't need OLAP solutions and could do everything with live, perfectly normalised data. That is just not the way it is.

2. They really don't work well at all for temporal or complex, non-uniformly structured data.

-----

tensor 723 days ago | link

It is not harder to verify correctness inside the RDBMS. Rather, it is easier, because RDBMSs are highly engineered and extensively tested.

You say you still need ACID, but you advocate manually writing joins on top of an ORM layer, which does not give you ACID and introduces massive latency. Using an ORM on top of a key-value store doesn't absolve you of ACID or of having joins. You still do joins and transactions, but you need to write them manually and test correctness manually. Both are very expensive to do correctly, and running some unit tests is not sufficient to show correctness at this level. You are writing these things without recognizing what they are.

The whole NoSQL movement is fundamentally about trading correctness for performance, and usually read performance at that. There is nothing else new about NoSQL databases and ironically many even use a variant of SQL for queries! SQL is just a language, not an implementation.

As for aggregates, yes, they are hard. By doing it yourself you are not making the problem less hard. Rather, you are possibly implementing solutions that RDBMSs can't yet do. I suspect that with a reasonable relational design you could probably do whatever aggregates you need in a high-performance manner while still taking advantage of the work done in an RDBMS.

It sounds like you don't understand the requirements of your application. If you need full ACID, your solution with nosql + ORM + manually written joins and queries is failing you. Unless, of course, if you are also writing your own global lock or using some form of lock-free algorithm.

-----

gouranga 723 days ago | link

The ORM layer does give you ACID, or rather it doesn't take it away. It does joins and transactions. It's just an abstraction. I suggest you read up on PoEAA [1], as I don't think you properly understand what an ORM is, or you've had experience of some rubbish ones like ActiveRecord or Django's ORM. See the Unit of Work pattern, identity map, etc. to start with and work back.

NHibernate, SQLAlchemy, Hibernate are where you need to look and compare.

The ORM does a few joins where a performance gain is possible, but we avoid them as they are expensive on IO and transfer.

Most of our tables match the UI 1:1 and we populate these from the domain model when it is mutated or when we can afford it to be done (either directly or via message queues). Effectively they are materialized views which are updated when data changes or before it is required. Google CQRS.

I'm not suggesting NoSQL - just caution when using RDBMS features. I'm not an advocate of NoSQL.

[1] http://www.amazon.co.uk/Enterprise-Application-Architecture-...

-----

bunderbunder 723 days ago | link

One can't just say "joins are expensive on IO and transfer"; you have to specify exactly what kind of join you're talking about and how it's being used. The way an INNER JOIN influences how rows are retrieved is completely different from how an OUTER JOIN or CROSS JOIN does.

In all those cases, the devil remains in the details. For example if you're just looking to add one or two columns to the result set, an INNER or OUTER JOIN usually reduces the amount of data that has to be shoved through the pipe. That's because you avoid transmitting the keys that are needed to join the data in-app. On the other hand, if you're joining a small number of larger values to a table with lots of rows, perhaps the join duplicates those values enough to really bulk up the result set. But it's effectively impossible to know whether that will be the case without measuring the performance for a couple different options.

Here's an alternative consideration: ORMs tend to encourage a style of coding wherein every column in the table is always grabbed all the time, regardless of how much of that information is actually needed for the task at hand. This is done in order to be able to generate full-fledged objects like the ORM wants to, and to reduce the amount of custom mapping that needs to be maintained. Now if you're looking for a needless waste of bandwidth, there's a needless waste of bandwidth.

If you're consistently having performance problems on joins, it's likely that the real culprit is poor configuration of keys and indexes. Or if you're not watching how the ORM chooses to join the data too closely, perhaps you've got some N+1 queries sneaking in there.

-----

josephcooney 724 days ago | link

A lot of 'enterprise' products take this route so they can be deployed on top of your RDBMS of choice (Oracle, DB2). They usually get pretty terrible performance out of their RDBMS as a result.

-----

gouranga 723 days ago | link

In fact they tend to optimise per engine.

If you look at NHibernate for example, if you do paging on SQL 2005+ it will use CTE yet on MySQL it will use LIMIT clauses underneath.

Some enterprise products have a really crappy schema with an ORM thrown over the top - that's where the pain is.

-----

josephcooney 723 days ago | link

Your definition of 'enterprise' might be different to mine. I'm thinking of SAP and PeopleSoft. And I think the crappy schema arises (sometimes) from not wanting to use any proprietary DDL features.

-----

sbov 724 days ago | link

You seem to be coming from a very specific viewpoint. Not all things you build will require scaling out. Not all systems will become as complex as you seem to be envisioning. Not all systems will be around in 20 years, much less require the types of changes in 20 years that necessitate changing the datastore.

I view it as an optimization problem. Based upon what you're building, the aforementioned issues are more or less likely to occur. Optimize for what is likely.

If I'm building an official site for a popular product, I'd bet on all the above being true.

Some of what I build, even if 100% of people who could use the product would use it, will run just fine with a single database server that costs in the area of $8k. Planning for scale out is a complete waste of time here.

Most of what I build starts out simple. Most of what I build starts out with 0 users. Will it be popular, will it last, will it need to scale? We have no clue. For these, we keep it simple: modularize our code based upon features, use our datastores to their fullest within features, and keep in mind how we might scale things out (which may ultimately be incorrect) - if we have to. Usually we don't have to. I believe it to be a fair balance in this scenario.

Also, given your constraints, I don't quite understand how you could find a future datastore that is a better solution. How can you exploit better solutions and avoid lockin at the same time? If a better solution comes along in the future, it seems like you would be using features that would lock your datastore to that newer, better solution. However, you cannot do this because you need to avoid lockin.

-----

gouranga 723 days ago | link

I agree.

For a fresh start (greenfield) it's probably fine.

Unfortunately, for 90%+ of IT products, it's not a greenfield. That's where the problem comes in.

-----

joevandyk 724 days ago | link

I've always found doing logic and processing in the database to be much faster and much more efficient than doing it outside. The closer you can process data to the data store, the faster it goes. Using materialized views can get you a long ways.

There's also no reason why you can't use multiple smaller databases. Use dblink. http://www.postgresql.org/docs/9.1/static/contrib-dblink-con...

I've never had a problem managing "schema and dependencies" on larger systems. Not sure what you are referring to there.

-----

gouranga 724 days ago | link

It goes much faster, I agree. However, you are being very short-sighted. That black box doesn't go on forever. It will work fine for the first 50 users, seems OK for the next 100; at 500 we're doing OK. 1000: BOOM! Then what do you do?

I'd rather have a slightly higher latency and be able to scale to 100,000 users.

Materialized views are not needed with CQRS architecture. ALL tables (or sets) are materialized views already. No joins are ever made.

dblink is ugly and if you ask me a problem, not a solution (latency, network io, memory).

I'm not sure what you classify as "larger", but I'm talking tens of TB, 1500 tables and over 2 million LOC in the application.

-----

joevandyk 724 days ago | link

Instagram did just fine with PostgreSQL at 27 million users. Many logical shards spread over a smaller number of physical ones.

http://www.databasesoup.com/2012/04/sharding-postgres-with-i...

-----

gouranga 724 days ago | link

Instagram isn't very complicated to be honest so I doubt that's a good scalability example.

-----

joevandyk 724 days ago | link

Skype's plan to support 1 billion users w/ postgres a few years ago? http://highscalability.com/skype-plans-postgresql-scale-1-bi... and http://www.slideshare.net/adorepump/small-overview-of-skype-...

Billions of records, 10,000 transactions per second. This was a couple of years ago.

Dunno if Skype is complicated enough for you.

-----

erichocean 724 days ago | link

I get the sense he'll ratchet up what "complicated" means until he wins the argument. :)

-----

gouranga 723 days ago | link

No, it's just that there is a difference between large and complicated.

-----

bingbing 724 days ago | link

EVE Online runs on a single SQL Server 2008 instance:

http://wiki.eveonline.com/en/wiki/Tranquility#Database_Serve...

serving hundreds of thousands of users

-----

gouranga 723 days ago | link

> At peak hours as of 2007 the database handles around 2-2.5k transactions per second, which generate roughly 40,000 input/output operations per second on the disks.

That's not much...

-----

jacques_chester 723 days ago | link

They're complex transactions. Inserts against dozens of tables, selects from dozens of tables. We're not talking about glorified KV lookups here.

-----

bunderbunder 724 days ago | link

- Scalability: I/O is often much more expensive than logic. With a well-normalized, well-indexed database, pushing logic off to the server often gives the query optimizer a chance to radically reduce how much data is retrieved. That can translate to less load on the disk controller, memory, CPU, and network, and by extension more scalability. Of course, it might not work out that way, depending on the specific case. As usual, metrics rule all, and slavishly following rules of thumb is the worst kind of premature optimization.

- Complexity, lock-in: The DBMS is like any other software component: if you're actually using it to its capabilities, then it's trivially obvious that it should be difficult to replace, because there are a lot of capabilities you have to find substitutes for. True, you could avoid being coupled to your dependencies by reinventing wheels whenever possible. That would obviously reduce coupling. But bloating your codebase is generally not the best way to make your code more maintainable.

- Schema: Pretty much any programming task is hell for people who aren't skilled at it. If you find that you're not very good at using the standard tools of a carpenter, that does not mean the tools are fundamentally misdesigned. Much more likely, it means you should be hiring someone else to do your carpentry for you. It's worth pointing out here that programmers shouldn't need to touch the schema in a well-designed system: they should be touching views and stored procedures that abstract away the implementation details.

-----

moe 724 days ago | link

> Scale up is damn expensive. When you need that 64 core 96Gb machine to run it all on with 6 replicas will see what I mean.

Damn expensive, eh?

As it happens, I bought a couple of boxes that size just recently, except with 256G RAM instead of 96G. They cost $8,500 apiece. That's not even 2 months of salary for a single developer.

Don't underestimate Moore's Law.

-----

gouranga 723 days ago | link

That bit is cheap.

How much does the storage subsystem cost?

You know the whole stack of fully redundant iSCSI/FC kit behind it?

-----

moe 723 days ago | link

> How much does the storage subsystem cost?

15x 146G SAS spindles are included with the price.

> You know the whole stack of fully redundant iSCSI/FC kit behind it?

Why, yes? We do run a FC fabric and various DAS boxes for various purposes, but not on the DB servers.

I'm not sure what you're arguing about here, the figures (labor vs hardware cost) are not even in the same ballpark.

As a rule of thumb: If a piece of gear can save your team 1 month of work then it usually pays for itself.

-----

gouranga 723 days ago | link

Where do you buy that from? I'm genuinely interested.

-----

moe 723 days ago | link

Our colo partner acquires and assembles the parts for us (mostly SuperMicro).

In the US I've heard good things about ThinkMate. E.g. this is close to our configuration but with a different chassis: http://www.thinkmate.com/System/RAX_QS5-4410

-----

andyzweb 723 days ago | link

http://teddziuba.com/2010/12/the-3-basic-tools-of-systems-en...

http://blogs.msdn.com/b/elee/archive/2009/03/11/source-code-...

In general, throw money at the problem first and resort to code last.

-----

ams6110 724 days ago | link

If you are paying for your RDBMS (e.g. Oracle, SQL Server) and not using its features, you are throwing your money away. Even if you are using a "free" database, if you're just tossing aside things like data types, permissions, referential integrity and check constraints, and natural declarative handling of sets, and writing all that in your client application, what happens when you want to put another front end on your data? You have to do it all again. Where's the hell then?

What happens when someone has to do a SQL update manually? Referential integrity can save you from shooting yourself in the foot there. SQL injection? Much less likely if you're using stored procedures with parameters as your client interface.

You have lock-in all over the place regardless. Languages. Frameworks. ORMs. Operating systems. All are decisions that are painful to change when something "better" comes along. You're better off making those decisions and then leveraging them for all you can get. RDBMSs have been around longer than many languages and most modern frameworks, and are still a very good, probably the best, general purpose solution for data storage and management.

-----

gouranga 723 days ago | link

We're paying for the support and insurance against failure, not the features.

You don't do manual SQL updates. Why would you if you have an ORM? You update the domain and it does all of that stuff for you. I only care about the domain which is the stuff that makes the money - not writing SQL.

ORMs don't suffer from SQL injection as they use properly parameterised prepared statements.
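A toy sqlite3 session makes the parameterisation point concrete (the table and the payload are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"

# Parameterised: the driver sends the value out-of-band, so the payload is
# matched literally and injection is impossible.
safe = con.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

# String interpolation: the payload rewrites the query itself.
unsafe = con.execute("SELECT * FROM users WHERE name = '%s'" % evil).fetchall()

assert safe == []        # nobody is literally named "alice' OR '1'='1"
assert unsafe != []      # the OR '1'='1' matched every row
```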

Regarding lock-in you are correct, but the RDBMS is the most politicised and expensive component, which is why you need the flexibility.

I am in no way saying "RDBMSs are shit - NoSQL forever!".

-----

sciurus 724 days ago | link

"The DBMS is a black box"

The DBA you refuse to hire would likely disagree.

-----

gouranga 723 days ago | link

Which is why we don't like them :)

Databases are for putting stuff in when you don't immediately need it. The DBA mindset pushes that the DB is the hammer for all nails.

-----

tensor 723 days ago | link

Your ORM is a black box, and one you don't seem to understand, at that. Also, ironically, you are pushing your anti-RDBMS agenda as a solution for all problems, complete with information-devoid attacks such as "The world changes pretty quick. Those who adapt quickly stay around." Among others.

You clearly are not willing to reflect on the actual problems being presented to you. Instead you are clinging desperately to some sort of ideology that either you yourself came up with or someone sold you on.

Look at the basic problems. You are not solving them or making them go away by "abstracting them away." The reason is that these particular problems can't be abstracted away.

For example, you move joins into the code that your programmers write instead of the RDBMS level. But you are still writing joins; they are just for loops on top of your ORM solution. Now you incur the cost of writing them manually each time and testing them, plus dealing with ACID, which you are almost surely not doing correctly. Or if you are, you are spending huge amounts of money to make it right. Even worse, your solution can't possibly be as fast as the RDBMS solution, since you need to go through all sorts of heavy abstraction.

You are not seeing the actual problems at all and not surprisingly demonizing the type of people who do understand them: DBAs (assuming we are talking about expensive professional DBAs).

-----

gouranga 723 days ago | link

Did I say I was anti-RDBMS? Not at all. I'm just careful. I still use RDBMS for everything - just differently and I avoid a fair amount of the tempting features. I'm not an advocate of NoSQL.

The ideology came out of necessity and 20 years of watching people fall down the same holes.

Joins - loops on top of ORM? Doing ACID myself? Do you understand what a non-toy ORM does? Do you understand the separation between the data model and the domain model? Sounds like you don't.

-----

tensor 723 days ago | link

Let's take an example from SQLAlchemy. I understand this is an ORM you feel is decent.

http://docs.sqlalchemy.org/en/latest/orm/query.html

Note the section on joins. Their example is simple, a mapping between users and addresses. The ORM here generates a join in SQL. You are not avoiding it. The only alternative to generating SQL with a join is to write the join yourself. Such a thing would query users and then either look up the address via an index or do a nested scan over addresses if you don't have an index. There is no way around it. Conceptually, a join is a description of a result set that you want. If you generate it, you are doing a join in some way.

If you are doing full ACID with complex queries, then you still need to understand transactions and all the associated complexities. An ORM doesn't make any of that go away by using terminologies like sessions. Rather it limits you to some subset of SQL and schema design.

If you want to limit your use of SQL for portability reasons, you can still do that without an ORM. At least that way you are more aware of what is actually going on under the hood and can more easily identify problems.
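The "a join is a loop" point can be shown in plain Python, no ORM at all (toy data, invented for illustration):

```python
users = [{"id": 1, "name": "ed"}, {"id": 2, "name": "wendy"}]
addresses = [
    {"user_id": 1, "email": "ed@example.com"},
    {"user_id": 2, "email": "wendy@example.com"},
    {"user_id": 1, "email": "ed@work.example.com"},
]

# Without an index: a nested scan, O(len(users) * len(addresses)).
naive = [(u["name"], a["email"])
         for u in users for a in addresses if a["user_id"] == u["id"]]

# With an "index": build a lookup first, O(n + m) -- roughly what the
# RDBMS does for you when the join column is indexed.
by_user = {}
for a in addresses:
    by_user.setdefault(a["user_id"], []).append(a)
indexed = [(u["name"], a["email"])
           for u in users for a in by_user.get(u["id"], [])]

assert naive == indexed
```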

-----

fallous 724 days ago | link

Don't confuse "RDBMS über alles!" for all storage with "RDBMS is always a fail." Certainly over the last decade or so there was a mistaken belief that all data should go in an RDBMS, even when it was flat or required no relational algebra, but that doesn't discount the benefit of a tool that can handle such relations when necessary.

Redis solves a certain set of problems for me, Memcached as well. ElasticSearch solves another set, and RDBMS yet another. Square pegs for square holes, round pegs for round holes.

-----

Duff 724 days ago | link

Huh? You're worried about complexity and lock-in, so you want devs to replicate tried and true database features?

I can understand wanting to avoid Oracle lock-in... but Postgres?

-----

gouranga 723 days ago | link

There is no replication of features at all (backup/restore is fundamentally different - it exports the domain to disk, not the schema).

Well, we could shift a deployment to Amazon RDS (Oracle or MySQL) quite happily in a couple of hours, without having to rewrite everything because we'd used all the postgresql features...

-----

IsTom 724 days ago | link

KV stores are painful to use. Only huge, DB-bound applications have the problem of scaling the database, and they usually have a lot of money. Also, you don't need your replicas to be as strong as your master DB server.

-----

gouranga 723 days ago | link

I agree. I'm not advocating KV stores.

-----

platonichvn 724 days ago | link

I agree with you but I often question when people list "lock in" as an issue. How often do you see projects where they decide to move from one db to another? Even in cases where an ORM is used, making the migration process painless in theory, I doubt many projects take the risk and exercise their freedom of not being locked in.

-----

joevandyk 724 days ago | link

I've only seen people move to postgresql (generally from mysql). I've never heard of people moving away from postgresql. Granted, that's a small sample size.

-----

pvh 724 days ago | link

Etsy did. I don't think it was a good decision, and I think the reasons they did it had more to do with resetting from some bad architecture decisions, but they're the example I know of. http://codeascraft.etsy.com/2011/09/28/scaling-etsy-what-wen... I think the big lesson here would be: don't have separate DBA and Engineering teams.

On the other hand, I could name quite a few people who went the other way, like Urban Dictionary who run on our http://postgres.heroku.com service, too. (I don't normally name customers, but he recorded a video saying he liked us so I think it's fair game. http://success.heroku.com/urbandictionary )

-----

gouranga 724 days ago | link

If you look at most "corporate" IT platforms, they tend to have favourite database engines that they need to use. If you have DBMS portability, then it gets one foot in the door straight away.

Also contrary to the norm here, some applications last a long time. The one I work with daily is actually 20 years old this year. In that time a lot has changed and in the next 20 years a lot will as well. It makes sense for long-lived applications to plan well ahead and consider paradigm and architecture changes.

This platform started in C++ on OS/2 with Oracle running on VAX/VMS at the back end. It went to Java/J2EE with an Oracle back end in the early 00's. It's now SQL Server and .Net.

-----

cageface 724 days ago | link

You have to admit that a 20 year old app with over a PB of data and millions of lines of code is not exactly typical. For most apps the benefits of a structured database outweigh the costs.

-----

neeleshs 724 days ago | link

Interesting to know what abstractions were found useful when migrating from VAX/VMS to Java/J2EE to .Net.

-----

gouranga 724 days ago | link

Hibernate and nhibernate.

-----

gouranga 724 days ago | link

I'd like to know the reason that was downvoted...

-----

it 724 days ago | link

On the other hand, SQL lets you write concise and efficient queries that run without lots of round-trips between the db and whatever machine you're accessing it from. ORMs force you to write less expressive code that is less efficient, hardly a winning combination.

-----

gouranga 723 days ago | link

To be honest, if you have a good ORM, it's better at writing SQL than you are.

Compare to compilers being better at writing machine code...

-----

kfool 723 days ago | link

> - schema. Maintaining schema and dependencies is hell on all but a small system

When you have the tools you have the power.

-----

jacques_chester 723 days ago | link

Being "locked in" to an opensource database doesn't strike me as some kind of unbearable death trap.

-----

lwat 724 days ago | link

I disagree with everything you say here. I seriously doubt you've been using databases for 20 years. Which databases have you used? How many hours of experience do you have with each of them? Just because some people do it wrong doesn't mean you should throw the baby out with the bathwater! In every stack I've used, the database has been the most robust and amazing piece of technology, and it's been an absolute dream come true for every project I've been involved in. (SQL Server / PostgreSQL - 10,000 hours experience)

-----

gouranga 723 days ago | link

It is robust and amazing, I agree, but if you push it to the limits it will bite you on the arse.

Probably about the same as you on just SQL Server and half as much again on Oracle...

-----

lwat 723 days ago | link

If you push ANYTHING to the limits it will cause issues. I don't see how that's an argument against properly utilizing an RDBMS.

-----

jamesli 724 days ago | link

Both Postgres and MySQL are great. Historically, Postgres emphasized feature development over performance, while MySQL took the opposite approach. Which one to deploy depends on your engineering and operational requirements. Most of the OP's points, however, need further consideration, IMHO.

- "While replication is indeed very important, are users actually setting up replication each time with MySQL or is it to only have the option later?"

Replication is not an option. It is a must-have for any serious products, both in scaling and in operation.

- Window functions: They are wonderful and I love them. But they are not an important factor at all.

- Flexible datatypes: True in certain scenarios. It lets you create custom types to map a business object with unusual requirements; otherwise, the requirements have to be implemented in application logic. But an Array data type? IMHO, use of an Array data type in a relational database usually means there are some issues in data modeling and design. Performance is another issue. If you have to use arrays, consider NoSQL options.

- Functions: Although I am a database architect, I use database functions only for simple poking around. For any serious work, I prefer to write application code. It is easier to maintain, to test, and to extend. I don't want a big ball of mud; I prefer structured, well-decoupled application code.

- Custom language: Same as above. Why not just write application code?

- Extensions: Geospatial is awesome, and better than MySQL's. But for many extensions, I prefer not to do the computation in the database. Databases are usually the most expensive resources (in operational costs) and are usually the performance bottleneck. It is better to do the computation in app servers: it is easier to scale and it is cheaper.

- Custom Functions: see above.

- Common Table Expressions: Recursive queries are awesome. They are nice to have for ad-hoc data inspection, but I wouldn't encourage their usage in application code. The risk of using them wrong outweighs the benefits there.
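For the curious, a recursive CTE is compact; here is a sketch via sqlite3, which also supports WITH RECURSIVE (the table is invented, and the SQL is the same shape in Postgres):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE category (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO category VALUES
        (1, NULL, 'root'), (2, 1, 'books'), (3, 2, 'sql'), (4, 1, 'music');
""")

# One recursive query walks the whole hierarchy -- no N+1 loop of queries.
rows = con.execute("""
    WITH RECURSIVE tree(id, name, depth) AS (
        SELECT id, name, 0 FROM category WHERE parent_id IS NULL
        UNION ALL
        SELECT c.id, c.name, t.depth + 1
        FROM category c JOIN tree t ON c.parent_id = t.id
    )
    SELECT name, depth FROM tree ORDER BY depth, name
""").fetchall()
# rows -> [('root', 0), ('books', 1), ('music', 1), ('sql', 2)]
```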

-----

fdr 723 days ago | link

> - Window functions: They are wonderful and I love them. But it is not an important factor at all.

I disagree or would choose to qualify a number of the things you wrote, but I'll choose this one because I think it's the most interesting:

Window functions are seldom (but not never) used by common application workloads, but I find them pretty important for lowering the barrier to what one-off queries I can write (hence, more questions are economical to answer) when figuring out statistics that assist in doing business or solving strange problems. The alternative is to be forced to make a greater number of decisions with less supporting data.

This can be applied to any labor-saving device when it comes to reporting, but I think window functions make the cut for me overall, and it's great they are there.
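A sketch of the kind of one-off reporting query meant here, run through sqlite3 for portability (window functions need SQLite 3.25+; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # requires SQLite >= 3.25 for window functions
con.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('east', 10), ('east', 30), ('west', 20), ('west', 5);
""")

# A per-row aggregate over a group, without collapsing the rows the way
# GROUP BY would: each sale keeps its region total and its rank within it.
rows = con.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
# rows -> [('east', 30, 40, 1), ('east', 10, 40, 2),
#          ('west', 20, 25, 1), ('west', 5, 25, 2)]
```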

-----

CoffeeDregs 724 days ago | link

I know this isn't a MySQL versus PostgreSQL thread, but I was bitten by MySQL today...

The issue: I ran a SELECT with a WHERE against a column that is an "int(11)", but I meant to run it against another column that is "varchar(255)". Something like:

    SELECT * FROM table WHERE wrong_column = "1abcdefghijk";

Somehow MySQL cast a "varchar(255)" to an "int(11)" with a value "1" without telling me. WTF? My result set was approximately 1M rows. I expected 1 row, so knew the result was wrong... But what if I had expected more than 1 row? Then the bizarre results from MySQL would have appeared to make sense.
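For comparison, a strictly typed engine refuses the comparison instead of silently casting. In PostgreSQL (single quotes here, since it reserves double quotes for identifiers) you would see something like this; the exact error text varies by version:

```sql
SELECT * FROM some_table WHERE int_column = '1abcdefghijk';
-- ERROR:  invalid input syntax for integer: "1abcdefghijk"
```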

-----

te_chris 724 days ago | link

Maybe there's a deeper reason PHP and MySQL are so closely linked.

-----

CoffeeDregs 724 days ago | link

Funny: I thought the exact same thing as I was writing my comment...

-----

pvh 724 days ago | link

This is my least favorite MySQL misfeature, and it's something that users migrating FROM MySQL often rely on in their application and get surprised/angry when they encounter errors rather than silent successes.

-----

DrJokepu 724 days ago | link

PostgreSQL is awesome (I really love it), now let's do some complaining:

* I'm not fan of doing backups with pg_dump. I think it would be a lot more awesome if I could tell the database to back itself up in-band. Even better, I would like to be able to ask the database to schedule its own backups, instead of having to set up my own cron jobs (or equivalents). Commercial databases can do that.

* The sad state of libpq (the C API). Libpq is where PostgreSQL really shows its age. It's very cumbersome to use and can't do (or can't do well) a bunch of things the raw protocol is capable of. Since almost all PostgreSQL bindings are built on libpq, none of the bindings support these features either.

* It would be cool if it was possible to kill sessions in-band (other database servers can do that). With PostgreSQL, the only way to shut down misbehaving sessions is sshing into the database server and doing a `kill` on the session's process.

* The infamous COUNT(*) fiasco.
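On the first point, the closest in-band machinery PostgreSQL offers is continuous WAL archiving plus pg_basebackup (9.1+); scheduling still has to live in cron, so the complaint stands. A minimal postgresql.conf sketch, with purely illustrative paths:

```
# postgresql.conf -- paths are examples, not recommendations
wal_level = archive
archive_mode = on
archive_command = 'cp %p /mnt/backup/wal/%f'   # ships each completed WAL segment
```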

-----

reactor 724 days ago | link

There is no fiasco in count(*); it is THE general behavior of any MVCC database. Try the same with MySQL InnoDB and you get the same result (or worse). MySQL MyISAM gets the result fast because it is non-transactional. To get an approximate count very quickly on Pg, you can use:

    select reltuples from pg_class where relname='<table_name>';

-----

jeltz 723 days ago | link

Indeed, in the benchmarks I have seen, PostgreSQL is faster than InnoDB at running count(*) over entire tables, but slower when an indexable WHERE clause was added, due to the lack of index-only scans in PostgreSQL. This may change in 9.2.

-----

pvh 724 days ago | link

* Check out wal-e (https://github.com/heroku/wal-e), we wrote it and we manage 250k postgres databases with it. :)

* libpq does basically suck, but Go has a pretty great native implementation. Suggestions for improvement are welcome; patches are even better.

* pg_cancel_backend() plus select-fu on the `pg_locks` table joined with `pg_stat_activity` = joy

* fast count(*) is possible via index-only scans in 9.2
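The select-fu in question might look something like this (a sketch; the column names assume 9.2, where 9.1 instead spells them procpid and current_query):

```sql
-- Who is waiting, and on what?
SELECT a.pid, a.query, l.relation::regclass, l.mode
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;

-- Then, in-band, cancel the query or terminate the whole session:
SELECT pg_cancel_backend(12345);     -- pid from the query above
SELECT pg_terminate_backend(12345);
```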

-----

fdr 723 days ago | link

> fast count(*) is possible via index-only scans in 9.2

I'd like to expand on this. It will be faster, but it still involves an O(n) pass over the qualifying data; it'll just be a lot less data, particularly for tables with wider rows. To some extent this problem fundamentally cannot reach a small O(1) without introducing lock contention and expense that many people don't want to pay continuously, and it clearly cannot be solved across every possible query or dimension, given the existence of arbitrary WHERE clauses. But I think it'd be nice if people could add a new physical construct whereby count(*) queries on certain dimensions can be made faster at the cost of lock contention or resource consumption, not entirely unlike indexes.

-----

dkadams 724 days ago | link

Depending on how badly the session is misbehaving: http://www.postgresql.org/docs/9.1/static/libpq-cancel.html or pg_cancel_backend(pid int) or pg_terminate_backend(pid int)

-----

DrJokepu 724 days ago | link

Oh, that's very cool, I didn't know about that one. I guess this item can be removed from my list.

-----

fdr 724 days ago | link

Internally there is also an interesting mechanism: when one connects, the backend hands you a "backend key", which is a 64 bit number. The client is intended to remember that number. Cancellations are processed by sending a cancel request to the server -- which involves a special new connection and a different first packet -- including that secret number. As an interesting security precaution, absolutely no acknowledgement or response is made to the cancellation request.

libpq knows how to do all this.

The reason for this design is to avoid inspecting the connection for new instructions during query processing, under the assumption that cancellations will be rare.

http://www.postgresql.org/docs/9.1/interactive/protocol-flow...

-----

lobster_johnson 722 days ago | link

As others have pointed out, there is no such "fiasco". However, if you want to simply count the entire table and don't care if the number is exact within the current transaction, you can cheat and do:

    select n_live_tup from pg_stat_user_tables
    where relname = 'your_table_name'

This will access the internal statistics about how many tuples are in the table. It ignores the transaction's isolation level since it's a global value, and it may be a few milliseconds out of date, but it's otherwise correct.

-----
