Hacker News
Why Use Postgres (craigkerstiens.com)
269 points by craigkerstiens on Apr 30, 2012 | 144 comments

Left out the fact that schema-changing operations are transactional (a big win over MySQL).

[Edit: In migrations, specifically. Also! No mention of the MySQL dual-license JDBC driver malarkey... Not that it's a success of Postgres's as much as it's a failure of MySQL's...]

I'd never heard of Window Functions (not an Oracle user, and been on MySQL for a few years... left Postgres for replication and haven't made it back quite yet), but I desperately want them. The idea of effectively aggregating over groups within a GROUP BY (which is how I understand these Window doohickeys) is something I come up against a few times a year and get really frustrated by.
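For anyone else who hasn't seen them: a window function computes an aggregate across a set of rows related to the current row without collapsing those rows the way GROUP BY does. A minimal sketch, using Python's bundled sqlite3 as a stand-in (SQLite 3.25+ accepts the same OVER / PARTITION BY syntax as Postgres); the table and data are invented:

```python
import sqlite3

# Hypothetical sales table; the window lets each row carry a
# group-level aggregate while still returning every row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 100), ("east", 300), ("west", 50)])

rows = con.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
for r in rows:
    print(r)
```

Each "east" row comes back alongside the east-wide total (400) and its rank within the region, which is exactly the "aggregate over groups inside a group" shape a plain GROUP BY can't give you.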

I think the only killer feature I'm left wanting from some RDBMS at this point is managed-sequences... The idea that I have a related list of things that I want to keep a sequence column for, and would like the database to understand the concept and help me maintain it.

> I'd never heard of Window Functions

Go ahead and give them a shot. Once you get used to them, you can't live without them.

Another killer feature in my opinion is the recursive query: http://www.postgresql.org/docs/current/static/queries-with.h...

And, to go along with the recursive queries, there's GEQO - the genetic query optimizer - for when the join search space is too large to optimize exhaustively (join ordering is NP-hard).

I wish I had known of these sooner; with MySQL I was jumping through all kinds of hoops to organize hierarchical data in my DB without generating insane numbers of queries.
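For anyone stuck doing the same hoop-jumping: a recursive CTE walks an entire hierarchy in one query, instead of one query per level. A minimal sketch, again with sqlite3 standing in for Postgres (the WITH RECURSIVE syntax is the same); the adjacency-list table is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent INTEGER)")
con.executemany("INSERT INTO nodes VALUES (?, ?)",
                [(1, None), (2, 1), (3, 1), (4, 2), (5, 4)])

# Collect the whole subtree under node 2 in a single statement:
# the CTE seeds with node 2, then repeatedly joins children in.
subtree = [r[0] for r in con.execute("""
    WITH RECURSIVE sub(id) AS (
        SELECT id FROM nodes WHERE id = 2
        UNION ALL
        SELECT n.id FROM nodes n JOIN sub s ON n.parent = s.id
    )
    SELECT id FROM sub
""")]
print(subtree)
```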

What's the trouble with PostgreSQL's sequence support? (http://www.postgresql.org/docs/9.1/interactive/sql-createseq...)

You got me really excited, but I don't think this is what I want. Imagine a table "Chapter" with a Foreign Key "Book". Not uncommon would be to have a 'sequence' or 'rank' column in the Chapter table defining its order within a book. Sequences shouldn't overlap (accomplishable with a unique constraint on sequence and book_id), but they also shouldn't have gaps, and it'd be nice if inserting a Chapter at a sequence had predictable behavior (like incrementing all chapters in that book with sequence >= its).

I find it a pretty typical and profound source of frustration, and since the whole idea of an RDBMS is to model relationships, and relationships so often have an integral sequence/rank, it's always seemed to me like something that ought to have been considered 'in scope' during the design of SQL.

OK, that's definitely a matter for a custom trigger on the table.
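For illustration, a rough sketch of such a trigger, written in SQLite's trigger syntax (Postgres would use a plpgsql function plus CREATE TRIGGER instead). The schema is invented, and it deliberately skips the unique-constraint subtleties, which matter once concurrent writers are involved:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE chapter (book_id INTEGER, seq INTEGER, title TEXT);

    -- Before inserting a chapter at position N, shift every chapter
    -- in the same book at position N or later up by one.
    CREATE TRIGGER chapter_insert BEFORE INSERT ON chapter
    BEGIN
        UPDATE chapter SET seq = seq + 1
        WHERE book_id = NEW.book_id AND seq >= NEW.seq;
    END;

    INSERT INTO chapter VALUES (1, 1, 'Intro');
    INSERT INTO chapter VALUES (1, 2, 'Conclusion');
    INSERT INTO chapter VALUES (1, 2, 'Middle');  -- insert at position 2
""")
chapters = con.execute(
    "SELECT seq, title FROM chapter ORDER BY seq").fetchall()
print(chapters)
```

Inserting 'Middle' at position 2 bumps 'Conclusion' to 3, giving the predictable renumbering behavior asked for above.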

Another option would be to keep the chapter order in an array column on the Book table, since arguably the arrangement of chapters belongs to the book rather than the chapter.

PostgreSQL array documentation is here: http://www.postgresql.org/docs/9.1/static/arrays.html

arguably the arrangement of chapters belongs to the book rather than the chapter

That's a really excellent point, and array columns look like a really cool solution! I suppose the only bummer is that it introduces a way to get things out of sync (I could delete a Chapter row, but forget to remove it from the array in Book), but in general this is a much better approach than the status-quo solution.

I think you should be able to write a delete rule on Chapter that will ensure that no array in Book contains that chapter, whether through cascading the delete or through raising an error.

If you're ever thinking of writing a trigger for any purpose, think again. And again. They are used improperly far more often than not. Things should not happen as "magic" side effects in a database, which is what triggers do. Generally better to be explicit with code in a function or stored procedure.

In general I disagree, but in this specific case you are right that a procedural interface to manipulating books/chapters is preferable.

Essentially if your API is single-table DML, then triggers are preferable. If multiple tables need to be manipulated at the same time to complete the full operation, then there should be a procedural API and triggers are likely unnecessary.

You should investigate exclusion constraints. Another scenario where they come in handy is a calendar interface where you need to ensure no overlapping per record per time.

Indeed, exclusion constraints are really neat! FTR, they're described in the PostgreSQL manual under CREATE TABLE.

Oh wow. These are pretty wacky/interesting. Also, a sign of a well-designed API: I think I gleaned more understanding from reading an actual EXCLUDE constraint definition than I did from reading the description of what it does in the docs.

The problem is that what you want can't be had within an MVCC framework. To get what you want, you really need global, thread-safe mutable data. MVCC instead gives you multiple parallel universes that spring into existence and then disappear, possibly making indelible changes on the next parallel universes to be created in the process.

It boils down to two problems. You can't be sure you're the only transaction in progress, and you also can't "take back" a sequence number. If you could guarantee you were the only transaction, you could just use COUNT. But since you can't, you could have two transactions in progress that both get the same COUNT result and you wind up with two "Chapter 5"s. You can put a UNIQUE constraint on that table, and then one of your transactions fails that constraint. That way you retain integrity and you get (at least for a while) sequential numbering, but there is no in-database way to ensure that a sequence of numbers has no gaps. This is because to do so would be expensive but also not really relate to data integrity.

The purpose of PostgreSQL sequences is to reliably generate surrogate keys. That means every transaction in progress gets a separate ID from that sequence. They all have to be unique because they all might commit. If they rollback instead, it doesn't matter; those become gaps, but nobody is failing the UNIQUE constraint.

What are you going to do to seal the gap when you have two transactions in progress on the same sequence and one of them rolls back? Are you going to pause the other transaction and propagate some kind of decrement to the other transactions? What if I inserted that ID into another table or called a globally-visible stored procedure (sent an email, for example)? I'm not certain it would even be possible, but even if it were, it would clearly be extremely expensive, and therefore, probably not a feature you want on by default.

So I hope that explains why this is not "in scope." :)

> I think the only killer feature I'm left wanting from some RDBMS at this point is managed-sequences...

I would like some RDBMS to give sharable transactions. What I mean by that is something like this.

1. When I begin a transaction, I can assign it a name.

2. If multiple connections to the database are in transactions with the same name, they can see the changes made by the others, as if none of them were in a transaction. Connections not party to the shared transaction do not see the changes until all of the connections in the shared transaction commit.

3. The connections in the shared transaction can do unnamed transactions, which isolate them from the others within the shared transaction.

The idea here is that sometimes I have some operation that I would like to do atomically, but that involves multiple processes acting on the database. Process A wants to make some changes, then pass responsibility on to process B (which may be on a separate machine) to make more changes, and then process C finishes up, and if any of them encounter an error I want to rollback to the state before A started.

Interesting. Without knowing what kind of system you're talking about, I'm skeptical that I'd use this. I certainly have distributed problems like this where a logical concept is broken into a number of steps, and while I want to distribute the steps, the effects of them on the entire system might not be realized until all of them are complete.

I generally wouldn't defer that responsibility to the database though. I'd more often handle this as multiple database transactions that advance a state machine, at the end of which consequence-inducing-events are fired off.

The idea that process A tickles the database into some state expected to be understood (and only seen by) process B seems like too flimsy a contract to me.

I get your point, but there is value in delegating to the database--how do I roll back an external state machine?

See the command pattern: http://en.wikipedia.org/wiki/Command_pattern

Basically you encapsulate whatever you need to do with the database in objects (logic + parameters). And you describe for these objects not only the logic for updating the database (forward), but also for reverting the database to its previous state (rollback).

Yep, I know the command pattern--it requires knowledge of all state up front and breaks down in the presence of distributed state. It makes statements like 'update all records where value x is y' difficult. You need to gather all such records first, possibly guarding their current state with a timestamp/vector clock (we are talking distributed systems after all), make the change encapsulating the current state in the command pattern and commit. If you have to extend this model to multiple queries, you'll be rolling your own transaction protocol lest you expose partial state to other queries in transit.

Or you could delegate all this to the database, which already has its own transaction protocol.

I believe this is implemented application-side for SQL Server by the Microsoft Distributed Transaction Coordinator, which allows enlisting in existing transactions.

Some explanation of how it works: http://stackoverflow.com/questions/494550/how-does-transacti...

Usage to 'forget' automated test database changes: http://amnesia.codeplex.com

At an initial glance, I think you're describing nested transactions.

perhaps prepared-transactions might have some of the functionality you are looking for http://www.postgresql.org/docs/9.1/static/sql-prepare-transa...

I think the only killer feature I'm left wanting from some RDBMS at this point is managed-sequences...

Excuse me if I didn't completely understand what you wanted but it seemed similar to SQL Server's sequence: http://msdn.microsoft.com/en-us/library/ff878091.aspx

Thanks, but nope.. See my comment to @empthought below, who pointed at the same feature in Postgres. I was sloppy by using the term 'sequence' which already has some meaning in RDBMS apparently.

I really disagree with this. The more features you use from your DBMS vendor, the more you shoot yourself in the foot in the following departments:

- scalability. Logic and processing in the server is bad as it means you can only scale up, not out. Scaling up is damn expensive. When you need that 64-core, 96GB machine to run it all on, with 6 replicas, you'll see what I mean.

- complexity. The DBMS is a black box which invites tight coupling almost instantly. Coupling is bad for maintenance and scalability, and SQL always ends up with heavy coupling.

- lock-in. Those features only work with Postgres, so you're stuck with it forever, or you incur a lot of cost to move away when a better solution comes along.

- schema. Maintaining schema and dependencies is hell on all but a small system.

These facts come from 20 years of using databases.

Stuff that will save you: Use your DBMS as a simple flat store akin to a document store. Use a CQRS based architecture and encapsulate ALL of your logic in code. If you have to go at least partially relational for god's sake use an ORM that supports several databases well. Make a provider independent app level backup/restore facility such as the one with JIRA or TeamCity. NEVER hire a DBA - they exist only to create walled gardens to protect their existence.

My current gig has cocked up on all areas and it's costing them over 40% of their turnover keeping it alive.

Happy databasing :)

Throwing out decades of work on storing data just because it's a hard problem won't help you solve the problem. All you are advocating is that instead of using an existing well tested solution, you should write your own and deal with all the assorted costs of redoing that work, and probably poorly.

Similarly, using an ORM doesn't help you. All that does is hide the details from you. It still uses SQL underneath, but limits what you can do with it.

Let's talk schemas. You have one whether your database understands it or not. Trying to pretend you don't have one doesn't make managing your schema any easier. It doesn't make building indices on fields any easier.

These are hard problems, and pretending they don't exist by rolling your own implementation or hiding them under an ORM won't make them go away.

"Decades of work" is pretty wrong to be honest. They approach the wrong end of the problem. They never consider efferent coupling between database objects, which is the absolute killer in anything but simple projects, regardless of what is promoted or discussed. SQL makes that an order of magnitude worse.

It's also not about it being a hard problem (which it incidentally is but that's another story). It's about controlling the scale of the problem as time goes on and sufficiently abstracting it without adding complexity. That's not hiding it - that's making it a non issue. Permanently.

I am only advocating using a minimal subset of the functionality to aid migration in the future between engines, platforms, versions and storage paradigms.

An ORM (in our case NHibernate) is a clean abstraction over the top which allows us to be completely mobile between vendors and versions. There is no NIH in it.

The world changes pretty quick. Those who adapt quickly stay around.

An ORM is perhaps a clean abstraction. But it is also a very simplistic abstraction; it generally doesn't provide the greatest facilities for accessing all sorts of useful features that the DBMS provides.

An ORM does not replace proper database design. And letting the ORM do all your work w/r/t mobility means accepting that you will not have good access to anything but the least common denominator among databases. Which is approximately the SQL-92 standard.

Another choice, which doesn't involve cutting off one's nose to spite one's face, is to couple the ORM to a public interface which is defined in terms of a specific set of views and stored procedures. Then you can very easily write schema which conforms to this interface whenever you need to switch to a new DBMS. The new schema is then free to take advantage of whatever features that DBMS provides in order to achieve good performance.
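A toy sketch of that view-as-interface idea, here with sqlite3 and an invented schema, though the principle carries to any DBMS: the app talks only to the view, so the underlying tables are free to change:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# v1 schema: one table, one name column.
con.executescript("""
    CREATE TABLE person_v1 (id INTEGER, full_name TEXT);
    INSERT INTO person_v1 VALUES (1, 'Ada Lovelace');
    CREATE VIEW api_person AS SELECT id, full_name FROM person_v1;
""")
app_query = "SELECT id, full_name FROM api_person ORDER BY id"
before = con.execute(app_query).fetchall()

# Schema migration: split the name. The view keeps the app's contract.
con.executescript("""
    DROP VIEW api_person;
    CREATE TABLE person_v2 (id INTEGER, first TEXT, last TEXT);
    INSERT INTO person_v2 VALUES (1, 'Ada', 'Lovelace');
    CREATE VIEW api_person AS
        SELECT id, first || ' ' || last AS full_name FROM person_v2;
""")
after = con.execute(app_query).fetchall()
print(before == after)  # the app-facing interface is unchanged
```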

And the potential performance gains can be huge. There are plenty of examples out there of replacing 1-hour basic SQL + in-app processing procedures with 100ms common table expressions and the like. If you've got someone who knows what they're doing in charge of it, the latter solution will often take quite a bit less time to write and debug, too.

> If you've got someone who knows what they're doing in charge of it, the latter solution will often take quite a bit less time to write and debug, too.

Ah, therein lies the rub. Spot on.

My observations:

The last decade has seen a gradual weaning off reliance on the database stack for processing any form of "logic". This has been an evolutionary disaster. When companies like Sun pushed out application platforms such as Java, Microsoft followed suit to remain in the game. That's where the action was and that's where their revenues came from for the good part of 1998-2008 (still do). As a result, the newcomer programmers find it unfathomable to position any amount of processing in the hands of RDBMSs. These newlings choke at the idea of stored procedures or triggers - citing that these constructs are a step in the WRONG direction. Set-based SQL operations remain anathema for a large majority of the new programming stock.

This is a pity because the side-effect of this attention diversion has been poorer data modeling skills and gross underutilisation of new and often improved RDBMS concepts & tools.

In the end, these new-school graduates of denormalisers and NoSQLers will have to contend with the one and only critical challenge that RDBMSs set out to fix: data correctness.

Performance & scalability mean shyte if you can't guarantee correctness. Bring to the table ACID transactions and MVCC with your new concepts of scalability and performance - then we talk ... kinda thing.


It's even harder to VERIFY correctness inside the DBMS while keeping it manageable and debuggable. That's why we pull it out into code behind an ORM. We can run a fully automated test suite in 30 minutes which will verify that the application functions as expected from the domain model.

Believe me, we still need ACID/MVCC and by no means do I want to break the consistency part of CAP theorem if possible.

Also set operations break down under the following conditions:

1. When you have to aggregate a LOT of data together, they don't scale. If they scaled well we wouldn't need OLAP solutions and could do everything with live, perfectly normalised data. That is just not the way it is.

2. They really don't work well at all for temporal or complex, non-uniformly structured data.

It is not harder to verify correctness inside the RDBMS. Rather, it is easier, because RDBMSs are highly engineered and extensively tested.

You say you still need ACID, but you advocate manually writing joins on top of an ORM layer which does not give you ACID and introduces massive latency. Using an ORM on top of a key-value store doesn't absolve you of ACID or joins. You still do joins and transactions, but you need to write them manually and test correctness manually. Both are very expensive to do correctly, and running some unit tests is not sufficient to show correctness at this level. You are writing these things without recognizing what they are.

The whole NoSQL movement is fundamentally about trading correctness for performance, and usually read performance at that. There is nothing else new about NoSQL databases and ironically many even use a variant of SQL for queries! SQL is just a language, not an implementation.

As for aggregates, yes they are hard. By doing it yourself you are not making the problem less hard. Rather, you are possibly implementing solutions that RDBMS's can't yet do. I suspect that with a reasonable relational design you could probably do whatever aggregates you need to in a high performance manner while still taking advantage of the work done in an RDBMS.

It sounds like you don't understand the requirements of your application. If you need full ACID, your solution with nosql + ORM + manually written joins and queries is failing you. Unless, of course, if you are also writing your own global lock or using some form of lock-free algorithm.

The ORM layer does give you ACID, or rather it doesn't take it away. It does joins and transactions. It's just an abstraction. I suggest you read up on PoEAA [1], as I don't think you understand what an ORM properly is, or you've had experience of some rubbish ones like ActiveRecord or Django's ORM. See the Unit of Work pattern, identity map etc. to start with and work back.

NHibernate, SQLAlchemy, Hibernate are where you need to look and compare.

The ORM does a few joins where a performance gain is possible, but we otherwise avoid them as they are expensive on IO and transfer.

Most of our tables match the UI 1:1 and we populate these from the domain model when it is mutated or when we can afford it to be done (either directly or via message queues). Effectively they are materialized views which are updated when data changes or before it is required. Google CQRS.

I'm not suggesting NoSQL - just caution when using RDBMS features. I'm not an advocate of NoSQL.

[1] http://www.amazon.co.uk/Enterprise-Application-Architecture-...

One can't just say "joins are expensive on IO and transfer"; you have to specify exactly what kind of join you're talking about and how it's being used. The way an INNER JOIN influences how rows are retrieved is completely different from how an OUTER JOIN or CROSS JOIN does.

In all those cases, the devil remains in the details. For example if you're just looking to add one or two columns to the result set, an INNER or OUTER JOIN usually reduces the amount of data that has to be shoved through the pipe. That's because you avoid transmitting the keys that are needed to join the data in-app. On the other hand, if you're joining a small number of larger values to a table with lots of rows, perhaps the join duplicates those values enough to really bulk up the result set. But it's effectively impossible to know whether that will be the case without measuring the performance for a couple different options.

Here's an alternative consideration: ORMs tend to encourage a style of coding wherein every column in the table is always grabbed all the time, regardless of how much of that information is actually needed for the task at hand. This is done in order to be able to generate full-fledged objects like the ORM wants to, and to reduce the amount of custom mapping that needs to be maintained. Now if you're looking for a needless waste of bandwidth, there's a needless waste of bandwidth.

If you're consistently having performance problems on joins, it's likely that the real culprit is poor configuration of keys and indexes. Or if you're not watching how the ORM chooses to join the data too closely, perhaps you've got some N+1 queries sneaking in there.

"Performance & scalability mean shyte if you can't guarantee correctness" - well that depends...

I take it you are aware of Brewer's CAP theorem? (http://www.julianbrowne.com/article/viewer/brewers-cap-theor...) If not, you should be.

Quote from link: "That you can only guarantee two of Consistency, Availability and Partition Tolerance is real and evidenced by the most successful websites on the planet. If it works for them I see no reason why the same trade-offs shouldn't be considered in everyday design in corporate environments."

Whoah. This is a type of sociopathic reasoning applied to data modeling. The premise is based on accepting compromise knowing fully the consequences of data corruption and yet failing to address the actual rider: data shall not be left in an inconsistent state. An argument turned on its head. Much like asking the question “is it ethical to be deliberately inaccurate on your IRS returns” is answered with “this is not a question about ethics”. Is it not a gigantic assumption to claim:

1. The compromise has “worked” for others (“most successful websites on the planet” [sic]) without providing any real evidence supporting this claim? What does “worked” mean? Is Twitter “ok” with providing suggestions for Followers which contain people already in the user’s follow list? What were the consequences of data inconsistencies in matters related to financial transactions? Was every requirement able to simulate a catch-all “place a back-order” solution? These are real questions that have to be answered before evangelising the compromise.

2. If it works for others (meaning: if you are willing to accept compromises about data consistency) this compromise ought to be brought to your garden-variety corporate environment? Gasp! As it is, volumes or scalability issues aside, corporate environments suffer from data inconsistencies, poor or no transactional control, ineffective referential integrity etc. There’s very little positive impact of encouraging this habit as an overt practise under the false generalised promise of “you will eventually need to scale to millions”.

They can keep their CAP theorem or BASE principles for the likes of social toys like Twitter and Facebook. When credit card companies and banks start accepting compromises on their transactions, I’ll start paying due attention.

It's seldom necessary and often impractical to have full-blown shared-everything everywhere. There is a yawning gap going from simple key-value to the relatively rich constraints and transaction processing SQL standardizes that is probably not necessary, and I think will be addressed with more programming (rich constraints and data models can exist over a smaller amount of data without trouble). Consider Cassandra column families vs. simple binary-string-to-binary-string systems, and it's already a significant advancement. I think that's the direction growth is going to take.

Most of these large -- and particularly, global -- systems you speak of are eventually consistent also; they use the relational database as a potent backend member of a distributed system, able to check and deal with a great many constraint problems that can be localized.

A lot of 'enterprise' products take this route so they can be deployed on top of your RDBMS of choice (Oracle, DB2). They usually get pretty terrible performance out of their RDBMS as a result.

In fact they tend to optimise per engine.

If you look at NHibernate, for example: if you do paging on SQL Server 2005+ it will use a CTE underneath, yet on MySQL it will use LIMIT clauses.

Some enterprise products have a really crappy schema with an ORM thrown over the top - that's where the pain is.

Your definition of 'enterprise' might be different to mine. I'm thinking of SAP and PeopleSoft. And I think the crappy schema arises (sometimes) from not wanting to use any proprietary DDL features.

I've always found doing logic and processing in the database to be much faster and much more efficient than doing it outside. The closer you can process data to the data store, the faster it goes. Using materialized views can get you a long ways.

There's also no reason why you can't use multiple smaller databases. Use dblink. http://www.postgresql.org/docs/9.1/static/contrib-dblink-con...

I've never had a problem managing "schema and dependencies" on larger systems. Not sure what you are referring to there.

It goes much faster, I agree. However, you are being very short-sighted. That black box doesn't go on forever. It will work fine for the first 50 users, seem OK for the next 100, at 500 we're doing OK... 1000: BOOM! Then what do you do?

I'd rather have a slightly higher latency and be able to scale to 100,000 users.

Materialized views are not needed with CQRS architecture. ALL tables (or sets) are materialized views already. No joins are ever made.

dblink is ugly and if you ask me a problem, not a solution (latency, network io, memory).

I'm not sure what you classify as "larger", but I'm talking tens of TB, 1500 tables and over 2 million LOC in the application.

Instagram did just fine with postgresql with 27 million users. Many logical shards spread over a smaller amount of physical ones.


Instagram isn't very complicated to be honest so I doubt that's a good scalability example.

Skype's plan to support 1 billion users w/ postgres a few years ago? http://highscalability.com/skype-plans-postgresql-scale-1-bi... and http://www.slideshare.net/adorepump/small-overview-of-skype-...

Billions of records, 10,000 transactions per second. This was a couple of years ago.

Dunno if Skype is complicated enough for you.

I get the sense he'll ratchet up what "complicated" means until he wins the argument. :)

No just there is a difference between large and complicated.

EVE Online runs on a single SQL Server 2008 instance, serving hundreds of thousands of users.

> At peak hours as of 2007 the database handles around 2-2.5k transactions per second, which generate roughly 40,000 input/output operations per second on the disks.

That's not much...

They're complex transactions. Inserts against dozens of tables, selects from dozens of tables. We're not talking about glorified KV lookups here.

- scalability: I/O is often much more expensive than logic. With a well-normalized, well-indexed database, pushing logic off to the server often gives the query optimizer a chance to radically reduce how much data is retrieved. That can translate to less load on the disk controller, memory, CPU, and network, and by extension more scalability. Of course, it might not work out that way, depending on the specific case. As usual, metrics rule all, and slavishly following what should be rules of thumb at best is the worst kind of premature optimization.

- complexity, lock-in: The DBMS is like any other software component: if you're actually using it to its capabilities, then it's trivially obvious that it should be difficult to replace, because there are a lot of capabilities you have to find substitutes for. True, you could avoid being coupled to your dependencies by choosing to reinvent wheels whenever possible instead. That would obviously reduce coupling. But bloating your codebase is generally not the best way to make your code more maintainable.

- schema: Pretty much any programming task is hell for people who aren't skilled at it. If you find that you're not very good at using the standard tools of a carpenter, that does not mean that the tools are fundamentally misdesigned. Much more likely, it means that you should be hiring someone else to do your carpentry for you. It's worth pointing out here that programmers shouldn't need to touch the schema in a well-designed system: They should be touching views and stored procedures that abstract away the implementation details.
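To make the I/O point in the scalability bullet concrete, a toy sketch (sqlite3, invented data) comparing in-app aggregation against pushing the aggregate to the server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, bytes INTEGER)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(u, b) for u in range(100) for b in (10, 20, 30)])

# In-app aggregation: every raw row crosses the wire.
rows = con.execute("SELECT user_id, bytes FROM events").fetchall()
totals_app = {}
for user_id, b in rows:
    totals_app[user_id] = totals_app.get(user_id, 0) + b

# Server-side aggregation: one row per user comes back instead.
totals_db = dict(con.execute(
    "SELECT user_id, SUM(bytes) FROM events GROUP BY user_id"))

print(len(rows), len(totals_db))  # 300 vs 100 rows transferred
```

Same answer either way; the server-side version simply retrieves a third of the rows, and the gap widens as the table grows.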

You seem to be coming from a very specific viewpoint. Not all things you build will require scaling out. Not all systems will become as complex as you seem to be envisioning. Not all systems will be around in 20 years, much less require the types of changes in 20 years that necessitate changing the datastore.

I view it as an optimization problem. Based upon what you're building, the aforementioned issues are more or less likely to occur. Optimize for what is likely.

If I'm building an official site for a popular product, I'd bet on all the above being true.

Some of what I build, even if 100% of people who could use the product would use it, will run just fine with a single database server that costs in the area of $8k. Planning for scale out is a complete waste of time here.

Most of what I build starts out simple. Most of what I build starts out with 0 users. Will it be popular, will it last, will it need to scale? We have no clue. For these, we keep it simple: modularize our code based upon features, use our datastores to their fullest within features, and keep in mind how we might scale things out (which may ultimately be incorrect) - if we have to. Usually we don't have to. I believe it to be a fair balance in this scenario.

Also, given your constraints, I don't quite understand how you could find a future datastore that is a better solution. How can you exploit better solutions and avoid lockin at the same time? If a better solution comes along in the future, it seems like you would be using features that would lock your datastore to that newer, better solution. However, you cannot do this because you need to avoid lockin.

I agree.

For a fresh start (greenfield) it's probably fine.

Unfortunately for 90%+ of IT products, it's not a greenfield. That's where the problem comes.

Scale up is damn expensive. When you need that 64-core, 96GB machine to run it all on, with 6 replicas, you'll see what I mean.

Damn expensive, eh?

As it happens I bought a couple boxes that size just recently. Except with 256G RAM instead of 96G. They cost $8500 per shot. That's not even 2 months of salary for a single developer.

Don't underestimate Moore's Law.

That bit is cheap.

How much does the storage subsystem cost?

You know the whole stack of fully redundant iSCSI/FC kit behind it?

How much does the storage subsystem cost?

15x 146G SAS spindles are included with the price.

You know the whole stack of fully redundant iSCSI/FC kit behind it?

Why, yes? We do run a FC fabric and various DAS boxes for various purposes, but not on the DB servers.

I'm not sure what you're arguing about here, the figures (labor vs hardware cost) are not even in the same ballpark.

As a rule of thumb: If a piece of gear can save your team 1 month of work then it usually pays for itself.

Where do you buy that from - I'm genuinely interested.

Our Colo-Partner acquires and assembles the parts for us (mostly SuperMicro).

In the US I've heard good things about ThinkMate. E.g. this is close to our configuration but with a different chassis; http://www.thinkmate.com/System/RAX_QS5-4410

If you are paying for your RDBMS (e.g. Oracle, SQL Server) and not using its features, you are throwing your money away. Even if you are using a "free" database: if you're just tossing aside things like data types, permissions, referential integrity and check constraints, natural declarative handling of sets, and writing all that in your client application, what happens when you want to put another front end on your data? You have to do all that again. There's your hell.

What happens when someone has to do a SQL update manually? Referential integrity can save you from shooting yourself in the foot there. SQL injection? Much less likely if you're using stored procedures with parameters as your client interface.

You have lock-in all over the place regardless. Languages. Frameworks. ORMs. Operating systems. All are decisions that are painful to change when something "better" comes along. You're better off making those decisions and then leveraging them for all you can get. RDBMSs have been around longer than many languages and most modern frameworks, and are still a very good, probably the best, general purpose solution for data storage and management.

We're paying for the support and insurance against failure, not the features.

You don't do manual SQL updates. Why would you if you have an ORM? You update the domain and it does all of that stuff for you. I only care about the domain which is the stuff that makes the money - not writing SQL.

ORMs don't suffer from SQL injection as they use properly parameterised prepared statements.

Regarding lock-in you are correct, but the RDBMS is the most politicised and expensive component, which is why you need the flexibility.

I am in no way saying "RDBMSs are shit - NoSQL forever!".

"The DBMS is a black box"

The DBA you refuse to hire would likely disagree.

Which is why we don't like them :)

Databases are for putting stuff in when you don't immediately need it. The DBA mindset pushes that the DB is the hammer for all nails.

Your ORM is a black box. One you don't seem to understand, on top. Also, ironically, you are pushing your anti-RDBMS agenda as a solution for all problems, complete with information-devoid attacks such as "The world changes pretty quick. Those who adapt quickly stay around." Among others.

You clearly are not willing to reflect on the actual problems being presented to you. Instead you are clinging desperately to some sort of ideology that either you yourself came up with or someone sold you on.

Look at the basic problems. You are not solving them or making them go away by "abstracting them away." The reason is that these particular problems can't be abstracted away.

For example, you raise joins up into the code that your programmers write instead of doing them at the RDBMS level. But you are still writing joins; they are just for loops on top of your ORM solution. Now you incur the cost of writing them manually each time and testing them, plus dealing with ACID, which you are almost surely not doing correctly. Or if you are, you are spending huge amounts of money to make it right. Even worse, your solution can't possibly be as fast as the RDBMS solution, since you need to go through all sorts of heavy abstraction.
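To make that concrete, here's a hypothetical sketch (table names made up) of what the database does declaratively versus what the application-side "join" ends up being:

```sql
-- In the database: one declarative statement; the planner picks
-- a hash join, merge join, or indexed nested loop for you.
SELECT u.name, a.city
FROM users u
JOIN addresses a ON a.user_id = u.id;

-- The application-side equivalent is still a join, just hand-rolled:
--   for each user in users:              (outer loop)
--     for each address in addresses:     (inner scan or index lookup)
--       if address.user_id == user.id:
--         emit (user.name, address.city)
```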

You are not seeing the actual problems at all and not surprisingly demonizing the type of people who do understand them: DBAs (assuming we are talking about expensive professional DBAs).

Did I say I was anti-RDBMS? Not at all. I'm just careful. I still use RDBMS for everything - just differently and I avoid a fair amount of the tempting features. I'm not an advocate of NoSQL.

The ideology came out of necessity and 20 years of watching people fall down the same holes.

Joins - loops on top of ORM? Doing ACID myself? Do you understand what a non-toy ORM does? Do you understand the separation between the data model and the domain model? Sounds like you don't.

Let's take an example from SQLAlchemy. I understand this is an ORM you feel is decent.


Note the section on joins. Their example is simple, a mapping between users and addresses. The ORM here generates a join in SQL. You are not avoiding it. The only alternative to generating SQL with a join is to write the join yourself. Such a thing would query users and then either lookup the address via an index or do a nested scan over addresses if you don't have an index. There is no way around it. Conceptually, a join is a description of a result set that you want. If you generate it, you are doing a join in some way.

If you are doing full ACID with complex queries, then you still need to understand transactions and all the associated complexities. An ORM doesn't make any of that go away by using terminologies like sessions. Rather it limits you to some subset of SQL and schema design.

If you want to limit your use of SQL for portability reasons, you can still do that without an ORM. At least that way you are more aware of what is actually going on under the hood and can more easily identify problems.

KV stores are painful to use. Only huge applications that are DB-bound have the problem of scaling database and they usually have a lot of money. Also you don't need your replicas to be as strong as your master db server.

I agree. I'm not advocating KV stores.

Don't confuse the "RDBMS uber alles!" for all storage is equivalent to "RDBMS is always a fail." Certainly over the last decade or so there was a mistaken belief that all data should be in an RDBMS, even though they were flattened or required no relational algebra, but that doesn't discount the benefit of a tool that can handle such relations when necessary.

Redis solves a certain set of problems for me, Memcached as well. ElasticSearch solves another set, and RDBMS yet another. Square pegs for square holes, round pegs for round holes.

Huh? You're worried about complexity and lock-in, so you want devs to replicate tried and true database features?

I can understand wanting to avoid Oracle lockin.... But Postgres?

There is no replication of features at all (backup/restore is fundamentally different - it exports the domain to disk, not the schema).

Well we could shift a deployment to Amazon RDS (Oracle or MySQL) quite happily in a couple of hours without having to rewrite everything because we used all the postgresql features...

I agree with you but I often question when people list "lock in" as an issue. How often do you see projects where they decide to move from one db to another? Even in cases where an ORM is used, making the migration process painless in theory, I doubt many projects take the risk and exercise their freedom of not being locked in.

I've only seen people move to postgresql (generally from mysql). I've never heard of people moving away from postgresql. Granted, that's a small sample size.

Etsy did. I don't think it was a good decision and I think the reasons they did had more to do with resetting from some bad architecture decisions, but they're the example I know of. http://codeascraft.etsy.com/2011/09/28/scaling-etsy-what-wen... I think the big lesson here would be: don't have separate DBA and Engineering teams.

On the other hand, I could name quite a few people who went the other way, like Urban Dictionary who run on our http://postgres.heroku.com service, too. (I don't normally name customers, but he recorded a video saying he liked us so I think it's fair game. http://success.heroku.com/urbandictionary )

If you look at most "corporate" IT platforms, they tend to have favourite database engines that they need to use. If you have DBMS portability, then it gets one foot in the door straight away.

Also contrary to the norm here, some applications last a long time. The one I work with daily is actually 20 years old this year. In that time a lot has changed and in the next 20 years a lot will as well. It makes sense for long-lived applications to plan well ahead and consider paradigm and architecture changes.

This platform started in C++ on OS/2 with Oracle running on VAX/VMS at the back end. It went to Java/J2EE with an Oracle back end in the early 00's. It's now SQL Server and .Net.

You have to admit that a 20 year old app with over a PB of data and millions of lines of code is not exactly typical. For most apps the benefits of a structured database outweigh the costs.

Interesting to know what abstractions were found useful when migrating from VAX/VMS to Java/J2EE to .Net.

Hibernate and nhibernate.

I'd like to know the reason that was downvoted...

On the other hand, SQL lets you write concise and efficient queries that run without lots of round-trips between the db and whatever machine you're accessing it from. ORMs force you to write less expressive code that is less efficient, hardly a winning combination.

To be honest, if you have a good ORM, it's better at writing SQL than you are.

Compare to compilers being better at writing machine code...

- schema. Maintaining schema and dependencies is hell on all but a small system

When you have the tools you have the power.

Being "locked in" to an opensource database doesn't strike me as some kind of unbearable death trap.

I disagree with everything you say here. I seriously doubt you've been using databases for 20 years. Which databases have you used? How many hours experience do you have with each of them? Just because some people do it wrong doesn't mean you should throw out the baby with the bath water! In every stack I've used the database has been the most robust and amazing piece of technology and it's been an absolute dream come true for every project I've been involved in. (SQL Server / PostgreSQL - 10,000 hours experience)

It is robust and amazing. I agree but if you push it to the limits it will bite you on the arse.

Probably about the same as you on just SQL Server and half as much again on Oracle...

If you push ANYTHING to the limits it will cause issues. I don't see how that's an argument against properly utilizing an RDBMS.

One of the biggest advantages of PostgreSQL is GiST (Generalized Search Tree), which is based on the theory of indexability.

> One advantage of GiST is that it allows the development of custom data types with the appropriate access methods, by an expert in the domain of the data type, rather than a database expert.


> Traditionally, implementing a new index access method meant a lot of difficult work. It was necessary to understand the inner workings of the database, such as the lock manager and Write-Ahead Log. The GiST interface has a high level of abstraction, requiring the access method implementer only to implement the semantics of the data type being accessed. The GiST layer itself takes care of concurrency, logging and searching the tree structure.

> [...]

> So if you index, say, an image collection with a PostgreSQL B-tree, you can only issue queries such as "is imagex equal to imagey", "is imagex less than imagey" and "is imagex greater than imagey". Depending on how you define "equals", "less than" and "greater than" in this context, this could be useful. However, by using a GiST based index, you could create ways to ask domain-specific questions, perhaps "find all images of horses" or "find all over-exposed images".


If you are already using PostgreSQL but haven't heard about this, I highly recommend learning about GiST. It is the most powerful feature of PostgreSQL that I know of.
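As a small taste, here's a sketch using the pg_trgm extension, whose trigram operator class rides on GiST to answer fuzzy-match questions a B-tree can't (the table and column names are made up):

```sql
CREATE EXTENSION pg_trgm;

CREATE TABLE docs (id serial PRIMARY KEY, body text);
CREATE INDEX docs_body_trgm ON docs USING gist (body gist_trgm_ops);

-- similarity search, served by the GiST index rather than a seq scan
SELECT id, body FROM docs WHERE body % 'postgers';
```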

If you are doing anything serious with GeoSpatial, you should be using PostgreSQL with PostGIS.

Beat me to it, so upvotes :)

Let me pile on a big one: Stored procedures in multiple languages:

* Python
* Perl
* PHP (though I could not get it to work on 9.1)
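For instance, a tiny PL/Python function might look like this (a sketch; assumes the plpythonu language is installed, and slugify is just a made-up name):

```sql
CREATE FUNCTION slugify(title text) RETURNS text AS $$
    import re
    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')
$$ LANGUAGE plpythonu;

SELECT slugify('Why Use Postgres?');  -- why-use-postgres
```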

And JavaScript! We're putting this to great use with the latest SproutCore/Blossom project I'm working on.

Hitoshi Harada and Andrew Dunstan are currently working on making it ready to put into production. Also, he's working on backporting the new JSON datatype to 9.1 as an extension!

PostgreSQL is awesome (I really love it), now let's do some complaining:

* I'm not fan of doing backups with pg_dump. I think it would be a lot more awesome if I could tell the database to back itself up in-band. Even better, I would like to be able to ask the database to schedule its own backups, instead of having to set up my own cron jobs (or equivalents). Commercial databases can do that.

* The sad state of libpq (the C API). Libpq is where PostgreSQL really shows its age. It's very cumbersome to use and can't do (or can't do well) a bunch of things the raw protocol is capable of. Since almost all PostgreSQL bindings are built on libpq, none of the bindings support these features either.

* It would be cool if it was possible to kill sessions in-band (other database servers can do that). With PostgreSQL, the only way to shut down misbehaving sessions is sshing in to the database server and doing a `kill` on the process of the session.

* The infamous COUNT(*) fiasco.

There is no fiasco in count(*); it is THE general behavior of any MVCC database. Try the same with MySQL InnoDB and you get the same result (or worse). MySQL's MyISAM gets the result fast because it is non-transactional. To get an approximate count very quickly on Pg, you can use -- select reltuples from pg_class where relname='<table_name>';

Indeed, in the benchmarks I have seen, PostgreSQL is faster than InnoDB at running count(*) over entire tables, but slower when an indexable WHERE clause is added, due to the lack of index-only scans in PostgreSQL. This may change in 9.2.

Depending on how badly the session is misbehaving: http://www.postgresql.org/docs/9.1/static/libpq-cancel.html or pg_cancel_backend(pid int) or pg_terminate_backend(pid int)
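For the record, a sketch of the in-band approach (the pid 12345 is a placeholder; on 9.1 the columns are procpid/current_query, renamed to pid/query in 9.2):

```sql
-- find the longest-running queries
SELECT procpid, current_query, now() - query_start AS runtime
FROM pg_stat_activity
ORDER BY runtime DESC NULLS LAST;

SELECT pg_cancel_backend(12345);     -- cancel the current query
SELECT pg_terminate_backend(12345);  -- or kill the whole session
```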

Oh that's very cool, I didn't know about that one. I guess this item can be removed from my list.

Internally there is also an interesting mechanism: when one connects, the backend hands you a "backend key", which is a 64 bit number. The client is intended to remember that number. Cancellations are processed by sending a cancel request to the server -- which involves a special new connection and a different first packet -- including that secret number. As an interesting security precaution, absolutely no acknowledgement or response is made to the cancellation request.

libpq knows how to do all this.

The reason for this design is to avoid inspecting the connection for new instructions during query processing, under the assumption that cancellations will be rare.


* Check out wal-e (https://github.com/heroku/wal-e), we wrote it and we manage 250k postgres databases with it. :)

* libpq does basically suck, but go has a pretty great native implementation. suggestions are welcome for improvement, patches are even better.

* pg_cancel_backend() plus select-fu on the `pg_locks` table joined with `pg_stat_activity` = joy

* fast count(*) is possible via index-only scans in 9.2

> fast count(*) is possible via index-only scans in 9.2

I'd like to expand on this. It will be faster, but still involves a pass over all the qualifying data - O(n) - though it'll be a lot less data, particularly for tables with wider rows. To some extent this problem fundamentally cannot be made a small O(1) without introducing more lock contention and expense than many people want to pay continuously, across every possible query or dimension (in the general case it is clearly impossible, given arbitrary WHERE clauses). But I think it'd be nice if one could add a new physical construct whereby count(*) queries on certain dimensions are made faster at the cost of lock contention or resource consumption, not entirely unlike indexes.

As others have pointed out, there is no such "fiasco". However, if you want to simply count the entire table and don't care if the number is exact within the current transaction, you can cheat and do:

    select n_live_tup from pg_stat_user_tables
    where relname = 'your_table_name'
This will access the internal statistics about how many tuples are in the table. It ignores the transaction's isolation level since it's a global value, and it may be a few milliseconds out of date, but it's otherwise correct.

Great points. I'd love to see someone tackle how to get a corporate environment to consider Postgres over MS-SQL. We weren't able to, due to MS-SQL's great amount of "out of the box" functionality regarding automatic failover, mirroring, clustering, etc. PG of course can do all of this but to convince the DBAs to throw away their point-and-click interfaces so that we could start writing custom scripts and listeners from scratch wasn't really feasible.

I don't have an answer to the issue of getting PostgreSQL in corporate environments - but I did run into a couple other hurdles in trying.

1. It's surprisingly difficult to find an experienced PostgreSQL DBA who wants to work in a corporate environment.

2. It's even more difficult to get a large corporation to pay a reasonable salary (why would we pay more for a PostgreSQL DBA, when we can just throw that money at Oracle/Microsoft and get a cheaper DBA with all the certs...) which might entice an experienced PostgreSQL DBA to join the organization.

Right now, it's incredibly difficult to find good PostgreSQL talent. One of those reasons is because a lot of postgres skills are interchangeable with MS-SQL, and Oracle (or insert enterprise RDBMS of choice here), and they pay much better for less complex work.

I would guess that #1 is caused by #2... does #2 mean that Oracle+DBA (or MSSQL+DBA) is cheaper than a Postgres DBA, or just that the database and the DBA are generally paid for out of different department's budgets?

I think it's correct that (2) directly influences (1). And I think that most corporate groups are willing to hire an Oracle/MS SQL DBA with small amounts of experience but the carrying the right certifications.

To expand a bit, IMO - that scenario doesn't really exist for PostgreSQL DBAs, you are either experienced or inexperienced (with no weight-carrying certificate to complement credentials). Hence the corporate environment won't hire an entry-level DBA who has no certs to validate the hire, and they won't justify the seemingly added cost of an experienced PostgreSQL DBA.

You could always hire someone like command prompt to do it for you (i'm pretty sure they already have all this stuff pre-written).

heh..actually..it's the first thing on their product list: http://www.commandprompt.com/products/hot_standby_drbd/ ... they also do clusters, etc.

You won't have much luck. MS-based corporate outfits like to hire ready-trained monkeys, so you will get very little buy-in from management.

I've taken 5 years and several critical arguments to get a couple of MySQL instances in one of them to run some internal stuff. It's a start at least.

repmgr is one option, but it's still command line. In my opinion Postgres is more likely to gain traction in Unix environments, where it can replace MySQL pretty much anywhere and Oracle in quite a few use cases.

I like Postgres, but in my experience, where money is not a consideration, Oracle's RAC with its (fairly) easy multi-node DB clusters is often the competitive advantage that prevents Postgres from being used more widely in the enterprise - maybe that is something the Postgres team could look at replicating.

Ignoring the rip on 'point and click' UIs, MSSQL provides a lot more out of the box that PG does not do. MSAS, SSIS, and SSRS are 3 big ones right off the top. For data analysis the plugin that lets Excel be a pivot table UI backed by MSAS is very cool.

we could start writing custom scripts and listeners from scratch

Let me explain the DBA mindset: "when this falls over at 2am the day before <major, non-negotiable deadline>, who is responsible for fixing it?"

Now I am not saying Postgres is flaky; I am saying that the DBAs operate under a different set of constraints than developers perhaps do. One common scenario is that devs are incentivized on new features delivered and DBAs/SAs are incentivized on uptime.

I know this isn't a MySQL versus PostgreSQL thread, but I was bitten by MySQL today...

The issue: I ran a SELECT with a WHERE against a column that is an "int(11)", but I meant to run it against another column that is "varchar(255)". Something like: SELECT * FROM table WHERE wrong_column = "1abcdefghijk";

Somehow MySQL cast a "varchar(255)" to an "int(11)" with a value "1" without telling me. WTF? My result set was approximately 1M rows. I expected 1 row, so knew the result was wrong... But what if I had expected more than 1 row? Then the bizarre results from MySQL would have appeared to make sense.
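A minimal illustration of the difference (hypothetical table and column):

```sql
-- MySQL: silently casts '1abcdefghijk' to the integer 1 and matches ~1M rows
SELECT * FROM t WHERE int_col = '1abcdefghijk';

-- PostgreSQL instead refuses:
--   ERROR:  invalid input syntax for integer: "1abcdefghijk"
```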

Maybe there's a deeper reason PHP and MySQL are so closely linked.

Funny: I thought the exact same thing as I was writing my comment...

This is my least favorite MySQL misfeature, and it's something that users migrating FROM MySQL often rely on in their application and get surprised/angry when they encounter errors rather than silent successes.

List of reasons is so short...

For example since 9.1 Postgres support foreign data tables - other data sources is available for user as regular table; there are many drivers: MySQL, Oracle, external pg server, and even plain files.
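A sketch with the file_fdw driver that ships in contrib (the filename and columns are made up):

```sql
CREATE EXTENSION file_fdw;
CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE app_log (ts timestamptz, level text, msg text)
  SERVER files
  OPTIONS (filename '/var/log/app.csv', format 'csv');

-- query the flat file as if it were a regular table
SELECT level, count(*) FROM app_log GROUP BY level;
```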

Since 9.0 there is support for per-column triggers.

Both Postgres and MySQL are great. Historically, Postgres emphasized feature development over performance, while MySQL took the opposite approach. Which one to deploy depends on your engineering and operational requirements. Most of the OP's points, however, need further consideration, IMHO.

- "While replication is indeed very important, are users actually setting up replication each time with MySQL or is it to only have the option later?"

Replication is not an option. It is a must-have for any serious products, both in scaling and in operation.

- Window functions: They are wonderful and I love them. But they are not an important factor at all.

- Flexible datatypes: True in certain scenarios. They allow you to create types that map a business object with unusual requirements; otherwise, the requirements have to be implemented in application logic. But an Array data type? IMHO, use of an Array type in a relational database usually means there are some issues in the data modeling and design. Performance is another issue. If you have to use arrays, consider NoSQL options.

- Functions. Although I am a database architect, I use database functions only for simple poking around. For any serious work, I prefer to write application code. It is easier to maintain, to test, and to extend. I don't want to have a big muddy ball. I prefer to have structured, well-decoupled application code.

- Custom Language. Same as above. Why not just write application code?

- Extensions: Geospatial is awesome. It is better than what's in MySQL. But for many extensions, I prefer not to do the computation in the database. Databases are usually the most expensive resources (in operational costs) and are usually the performance bottleneck. It is better to do the computation in app servers; that is easier to scale and cheaper.

- Custom Functions: see above.

- Common Table Expressions: Recursive queries are awesome. They are nice to have for ad-hoc data inspection, but I wouldn't encourage their use in application code. There, the risk of getting them wrong is high relative to the benefits.
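For the ad-hoc inspection case, a recursive CTE sketch over a hypothetical employees(id, manager_id, name) table:

```sql
-- everyone in the reporting subtree under employee 1
WITH RECURSIVE subordinates AS (
    SELECT id, manager_id, name
    FROM employees
    WHERE id = 1                 -- the root of the subtree
  UNION ALL
    SELECT e.id, e.manager_id, e.name
    FROM employees e
    JOIN subordinates s ON e.manager_id = s.id
)
SELECT * FROM subordinates;
```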

> - Window functions: They are wonderful and I love them. But they are not an important factor at all.

I disagree or would choose to qualify a number of the things you wrote, but I'll choose this one because I think it's the most interesting:

While window functions are seldom (but not never) used by common application workloads, I find them pretty important for lowering the barrier on what one-off queries I can write (hence, more questions are economic to answer) to figure out statistics that assist in doing business or solving strange problems. The alternative is being forced to make a greater number of decisions with less supporting data.

This can be applied to any labor-saving device when it comes to reporting, but I think window functions make the cut for me overall, and it's great they are there.
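A one-off of the kind I mean, on a hypothetical sales_totals(region, customer, total) table:

```sql
-- top spender per region, without self-joins or app-side sorting
SELECT region, customer, total
FROM (
  SELECT region, customer, total,
         rank() OVER (PARTITION BY region ORDER BY total DESC) AS rnk
  FROM sales_totals
) ranked
WHERE rnk = 1;
```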

You left out one of my faves. Parent / child tables (ie tabular inheritance). Makes time series stuff so much nicer as pruning over time can be performed with truncate as opposed to "delete from".
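Roughly this (a sketch with made-up names; turning on constraint_exclusion lets the planner skip irrelevant children):

```sql
CREATE TABLE events (ts timestamptz NOT NULL, payload text);

-- one child table per month, constrained to its date range
CREATE TABLE events_2012_04 (
  CHECK (ts >= '2012-04-01' AND ts < '2012-05-01')
) INHERITS (events);

-- aging out a month is then a metadata-only operation, no big DELETE:
TRUNCATE events_2012_04;
```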

ActiveRecord 4.0 will support postgresql custom types, by the way. https://github.com/rails/rails/pull/4775 Thanks @tenderlove!

Call me lazy, but this is why I don't use Postgres: Amazon RDS is MySQL

It's that simple. I can spend a lot of time setting up and monitoring a Postgres cluster, or I can spend about 5 minutes configuring a Multi-AZ RDS instance. However, the moment Amazon offers Postgres on RDS, I'll make plans to switch.

So use EnterpriseDB's PostgreSQL service. It is the same deal as RDS, hosted on Amazon, only using PostgreSQL and operated by PostgreSQL experts.

Also: fast (having a properly smart query planner) and, in my experience, fantastically reliable.

Is there software that balances databases across different servers the way MongoDB sharding works, by range? For example, I have a SaaS where each customer has 1 database. Can the databases be migrated from one server to another automatically? If not, is there something similar? Thanks

Not yet. http://postgres-xc.sourceforge.net/ should be released soon.

I wouldn't expect too much from XC in 1.0 state. When I last looked in on it there was still a ton of work to do before it was really operationally ready to put in production.

I'd love to hear from anyone who has managed to deploy the beta.

Is there a good inexpensive hosting option for postgres aside from just running it yourself? Sort of like RDS on Amazon? Alternatively a DBaaS host for postgres?

Something like this, maybe? https://postgres.heroku.com/

With a reserved instance, RDS starts with $20/month for the small, $92/month for the large, and it's easy to change the size as the need arises. Heroku Postgres starts at $200/month, which is more than I'd want to pay for an early-stage project that I'm still experimenting with.

Hey, that's a thing we made. Happy to answer questions if you have any.

Two I particularly like:



Oh, and yeah, a BOOLEAN datatype. That's one in the eye to Larry Ellison.

What would make it absolutely the bee's knees would be a decent data language on top of that lovely data engine. Something closed over composition, with a simple, orthogonal syntax. NoSQL in the truest form — not a blind rejection of Codd's algebra, but something LESS DUMB STUPID THAN THE WRETCHEDNESS THAT IS SQL.

I've been agitating for an API that would allow developers to build their own query languages on top of Postgres alongside of SQL. Some day.

Yes! Please! The time for saner alternatives to SQL for querying relational databases is long overdue.

It's sad that Postgres completely replaced QUEL with SQL. QUEL was not the best language ever, but some competition and alternatives in this area would be good.

Probably the sanest alternative around today to SQL would be Date's D (not to be confused with the "C++ on steroids" D language), see: http://en.wikipedia.org/wiki/D_(data_language_specification) and http://www.thethirdmanifesto.com/ (evidence that database types should never be allowed to design websites :))

Two points:

- using pg's arrays is using SQL against its grain. If you need multiple phone records per user, just create a user_data table with three columns (user, key, value)

- the number one reason to prefer PostgreSQL over MySQL is just correctness. Every second query I write in MySQL get a wrong answer.


- using pg arrays is awesome, and for many access patterns is more performant and equally normalized

- MySQL is mostly quite correct, though I disagree with some of the implicit casting of values to integers and some nuances of GROUP BY. That said, operationally speaking, I'd say MySQL replication is less good and can get out of sync in awful ways, there are fewer awesome features (I made some slides: http://love-your-database.heroku.com/), and the storage engines mean you have to choose between useful features and like, not losing your data, man.

> some nuances of GROUP BY

I wouldn't call them nuances. SQL tries hard to stay very close to set theory, and not just for the sake of it. In set theory you have no nuances, you have only one possible answer to a question, and MySQL is like a bad student who does not understand why, in maths, you should never affirm something that is not 100% sure and proven.

It's not true that using arrays is using SQL against its grain. If this is an arbitrary list of phone numbers then an array is fine -- the domain being modeled is "list of phone numbers" not "phone number."

If you need to query specific phone numbers, then you should have separate columns whose names describe their roles (e.g. cell_phone_number) rather than the EAV monstrosity you proposed. :)

> rather than the EAV monstrosity you proposed. :)

I would really like to know what is monstrous in the "Entity–attribute–value" model, I scanned the Wikipedia article without finding any obvious problems.

I had, however, many many problems with separate columns for each values: I have to handle tables with over 600 columns, for example "fixed_phone", "fixed_phone_modified", "fixed_phone_modified_by", "cell_phone", "cell_phone_modified", "cell_phone_modified_by", etc. With an EAV variant, I'll have "entity", "attribute", "value", "modified", "modified_by", "deleted", and it will be much cleaner, or prove me wrong.

All of those modification times/users should be in a separate audit trail table, which likely will look like an EAV table. That's OK because it's tracking the history of updates, e.g. (time, table_name, column_name, old_value, new_value, changed_by).

The current (rather than historical) values should not be in generic EAV tables because putting data of different domains in the same column subverts both the relational model and the DBMS's indexing capabilities.

If the point is to label telephone numbers with a role, then a person_id, phone_number, phone_number_label table should be fine.

Please, no -- I really dislike it when an application decrees that I Know What Types Of Phone You Can Have and You Shall Have Only One Phone Number of Each Type.

CiviCRM has an interesting scheme too: it lets you say that a number is a [Home|Work|Other|Whatever] [Mobile|Fixed|VoIP|Whatever] number.

Sounds like an array of custom data types with phone_number, phone_number_label columns, then. PostgreSQL can do that.

I think applications overspecify things like cell_phone_number because they would like to query for all cell phone numbers, and because they're stuck in the obsolete mindset that 1NF can't include arrays.
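Something like this sketch, perhaps (all names made up):

```sql
CREATE TYPE phone AS (number text, label text);

CREATE TABLE person (
  id     serial PRIMARY KEY,
  name   text,
  phones phone[]   -- any number of phones, each with a free-form label
);

INSERT INTO person (name, phones)
VALUES ('Ada', ARRAY[ROW('555-0100','cell')::phone,
                     ROW('555-0199','work')::phone]);
```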

I'm pretty impressed that something of this complexity can be implemented entirely in C. People complain about the lack of tools for building higher level abstractions in C but this shows that with the right code discipline it can scale quite high.

Great article.

As someone coming from a Maths background - instead of pure CS - I'd never heard of some of the features you highlighted. If you consider expanding the article, please could you put in a line or 2 explaining what a feature is?

I love that there are so many options for writing stored procedures. There is the traditional pl/pgsql (matches quite closely to Oracle's plsql) but also Tcl, Perl, Python, Ruby, Lua, and others.
