I'm using the AWS stack for http://www.soundslice.com/ and I've been using MySQL instead of Postgres, purely because my hatred for MySQL is less than my hatred for being a sysadmin. It was a tradeoff, and I miss Postgres dearly every time I use MySQL.
This new Amazon offering solves that.
I wrote a little more about my AWS setup here: http://www.holovaty.com/writing/aws-notes/
Last time I badmouthed Heroku on here I got a reply from one of their employees asking me to fill out support tickets for errors I was getting in their apps......
O_o they're watching.
> RDS 4XL vs Heroku Mecha: 40% cheaper on demand, 74% cheaper 1 year reserved, 82% cheaper 3 year reserved... with more features and capacity.
If that's true, Heroku's Postgres offering isn't going to do well. "Wiped off the map" may be an overstatement, but not by that much.
& it would be louder if the rest could figure out where to put the custom DB URL parameter ;D
It's a shame that Amazon has some proprietary idea of a machine image, but my ideal hosting scenario is just App Server Machine Image + DB + any Machine Images required for extra services. Shouldn't have to choose between several paid services that are just different APIs into an app server component you should have direct access to.
I don't understand how the open source community can be so into Heroku given that it's basically just wrapping open source software & charging for it, getting away with it by saying they're charging for the admin UI or whatever.
GitHub just "wraps up" git and charges for it and I am happy to pay them. Setting up a git server is a royal pain in the ass. Setting up a wiki, issue tracker, etc to go along with a git server is even more of a pain in the ass.
Well done cloud services make life simpler, and many people will pay for that simplicity regardless of how the internals are built.
If you add 5 services, then switching hosts isn't just another git push. You have to re-configure each of the services. If you'd done this yourself all along & made machine images, it would have been less convenient at the time, but probably cheaper & a good learning experience. There are many other hosting platforms that offer Linux boxes, so you could move your whole app/software layer to another of these companies without much trouble.
I'm not saying Heroku is evil or anything. Yes, they provide a good platform. But I think any web company with a significant customer base would benefit more from the cost savings & freedom of a purer platform than the conveniences of Heroku.
Plus, there are so many configuration issues with their services. I have auto-scaling set up with Adept and still I see these long request-queue buildups now & then. I get the feeling I would not have the same issues with an AWS stack, where I have CPU usage monitors that are very transparent & all the networking is trivial.
I still use Heroku as an app server & don't hate it enough to up & move (though a large factor in this is that I'm not the one footing the bill, the client is...) but anything I can easily get onto AWS is a no-brainer. Database is one of those things -- a couple clicks to scale up/down every year is all that's really required.
CONCLUSION (cuz I rambled too much): I think Heroku offers scaling/convenience but AWS is just so rock solid & cheap that you can probably just buy larger instances than you need (to compensate for scaling) and have much better performance at the same price. Then you just need to learn how to install your tools & take a machine image as backup. Plus there's a lot of value in learning how to work with machine images that goes way beyond hosting a web app.
Every minute I spend doing that is a minute I spend not doing things I enjoy. YMMV though.
I prefer the Amazon model because you can stick these all on one server & as long as CPU isn't pegged at 100% you're good. I agree tho I'd rather not spend the time figuring it out. For now I only use it for RDS & for services not available on Heroku (some Adobe streaming server stuff).
Vendor lock-in is when you write your software on Oracle or MSSQL and moving away requires you to rewrite your whole thing. It's not losing the convenience of "git push" for deploys and having to spend time moving off their hosted versions of open source software and configuring and hosting it yourself instead.
Accusing Heroku of practising vendor lock-in is honestly absurd.
And as a point of interest, I think these days it's probably much easier to convert your database than it is to switch hosting platforms (well... in some cases).
What do you think stops you from switching? We had no issues at all - definitely none from Heroku.
We still use PostgreSQL from Heroku because it is still a solid service and comes with niceties like dataclips. I should confess that I have not explored the Amazon PostgreSQL offering, but I am happy with Heroku for databases at the moment.
This could be a nice fire under Heroku's ass to get more competitive.
FYI, it doesn't look like your payment modal works on smaller browser windows: https://www.monosnap.com/image/UvJbxMQEwkzqLH5btNJ7a9G6Q ... no visible pay button, and the modal itself scrolls when you scroll the page.
Does it make sense to migrate to PostgreSQL? I don't have a lot of data, as I'm at an early stage.
What are the primary advantages that PostgreSQL provides over MySQL?
Any advice/pointers are appreciated.
Perhaps you mistakenly insert "2013-10-32" into a date column. MySQL will silently convert this to "0000-00-00" (!!). Postgres will raise an error.
Perhaps you make an error in a transaction. MySQL lets you keep doing subsequent things in the transaction. Postgres treats the transaction as invalid and forces you to start over.
Perhaps you want to add a column to a table that has millions of rows. With MySQL, you'll be waiting a looooong time (see http://stackoverflow.com/questions/463677/alter-table-withou...). With Postgres, it takes about a second.
Of course, there are things you can do to make MySQL less horrible, and this is a generalization. But Postgres is just more respectful and more solid.
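To make the date example concrete, a minimal sketch (assuming a table `events` with a `date` column, and MySQL's old non-strict default mode):

    -- MySQL, non-strict mode: the bad date is silently coerced
    INSERT INTO events (happened_on) VALUES ('2013-10-32');
    -- Query OK, 1 row affected, 1 warning; the stored value is '0000-00-00'

    -- PostgreSQL: the same statement is rejected outright
    INSERT INTO events (happened_on) VALUES ('2013-10-32');
    -- ERROR:  date/time field value out of range: "2013-10-32"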
Oh, and PostGIS (Postgres' geo add-on) is by far the best open-source geospatial database. If you're doing anything with geographic queries, you need to be using it. MySQL's stuff is laughable in comparison.
Context: I've dealt extensively with both databases, both from the perspective of a framework author (Django) and a developer making products. I've used both databases on and off since 2001.
The thing that kills me with MySQL (technically it's with InnoDB-based storage engines in MySQL) is the subtle quirks. Like the thing where it insists on writing temporary tables to disk if you do a query that selects TEXT or BLOB fields. Even if they could have easily fit in memory, it's not smart enough to be able to determine that with variable-length fields. A very non-obvious performance killer unless you're specifically looking for it.
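A quick way to check whether it's biting you (these are standard MySQL status counters):

    SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
    SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
    -- if the disk/total ratio climbs on queries touching TEXT/BLOB
    -- columns, this quirk is likely the culprit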
I'm not sure what the status of TEXT fields is in MariaDB.
It took me a long time to learn this one. I suppose if I'd read the MySQL docs from cover to cover I would've found it earlier.
One other problem that popped up was ignoring indexes on tables with TEXT fields during joins, which was a planner weakness. I understand it was fixed in 5.6; I'm waiting for the Percona version to stabilise before I upgrade.
> Perhaps you mistakenly insert "2013-10-32" into a date column.
Only with ALLOW_INVALID_DATES sql mode set. As of 5.0.2, the server requires by default that month and day values be legal, and not merely in the range 1 to 12 and 1 to 31.
> Perhaps you make an error in a transaction. MySQL lets you keep doing subsequent things in the transaction.
If you care about transactions you should have STRICT_TRANS_TABLES on.
 - http://dev.mysql.com/doc/refman/5.6/en/server-sql-mode.html#...
 - http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html#...
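e.g., to tighten a session (the mode list is illustrative, reusing the hypothetical events table from upthread):

    SELECT @@sql_mode;  -- see what you're currently running with
    SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE';
    INSERT INTO events (happened_on) VALUES ('2013-10-32');
    -- now rejected: ERROR 1292 (22007): Incorrect date value: '2013-10-32'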
Understand, MySQL-ers, I know that your DB has been patched A LOT over the last decade, but running with something that supported neither ROLLBACK (nor isolation) nor foreign key constraints???
That the MySQL team even thought they could call such a thing a database terrified me, and made me quite scared to ever trust their judgement. (Viewing the comments on this article suggests to me that playing those odds was the right thing to do, as well, rather than simply "prejudice".)
That, and at the time, PostgreSQL was supporting stored functions that fit anywhere in SQL statement syntax where the return type matched the needed expression type (scalar, vector/row, matrix/table), and could be written in a PL/SQL work-alike OR alternate loadable languages, while MySQL had no stored procedures at all.
That, and at the time (already), PostgreSQL supported OOP-ish "extension" tables that extended other tables with extra, specialized columns. Rows in the specialized, subclass table would show up in the generalized, superclass table (sans extra columns), but the subclass table would only show the relevant specialized-type rows, with non-null columns where needed. Other DBs required you to manage two tables, joins, and a view to do this.
Putting SQL syntax on top of an ISAM engine just makes it dBase with awkward syntax, and that's not an environment I wish to revisit. (I know that InnoDB is constantly twiddled to suck less, but that back-end was extra back in the day, yes?)
I think the biggest advantages are trust and flexibility.
Trust, because PostgreSQL language semantics are cleaner, closer to the SQL standard, and less likely to surprise you. And PostgreSQL just has a good reputation for traditional engineering quality.
Flexibility, because it offers a lot of APIs and features that can be very useful to adapt your application as needs change. You don't have to go crazy with features, but even simple apps can benefit a lot from prudent use of them -- a trigger here, a foreign table there, or LISTEN/NOTIFY (for cache invalidation) can just save you a huge amount of work and make the system more robust overall. The extension mechanism is very powerful.
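For instance, a minimal LISTEN/NOTIFY sketch for cache invalidation (the table, function, and channel names are all invented):

    -- broadcast the id of any changed row on a 'user_cache' channel
    CREATE OR REPLACE FUNCTION notify_user_change() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('user_cache', NEW.id::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_cache_invalidate
      AFTER INSERT OR UPDATE ON users
      FOR EACH ROW EXECUTE PROCEDURE notify_user_change();

    -- any client that has run LISTEN user_cache; gets the id and can
    -- evict just that entry from its cache

The appeal is that the invalidation logic lives next to the data, so every writer fires it automatically.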
Before making any big decisions, do a trial migration and see what you think.
- More robust, fewer crashes, less corruption of data
- More features (JSON data type, partial indexes, function/expression indexes, window functions, CTEs, hstore, ranges/sequences/sets, too many to list; a couple of these are sketched after this list)
- More disciplined (doesn't do things like auto-truncate input to get it to fit into a column)
- Not owned by Oracle, it's actively developed, regular major release schedule, etc.
- Better Python driver (don't know about other languages)
- Choice of languages for database functions/procedures (Python, JS, etc.)
- Better partitioning support
- Better explain output, explain analyze, buffers
- Multiple indexes allowed per table in a query
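As promised above, a sketch of two of those (table and column names are invented):

    -- partial index: only index the rows you actually query
    CREATE INDEX orders_pending_idx ON orders (created_at)
        WHERE status = 'pending';

    -- expression index: index a computed value
    CREATE INDEX users_lower_email_idx ON users (lower(email));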
My group is developing all new applications in PostgreSQL and we would like to migrate our legacy apps away from MySQL.
Edit: You can also check phppgadmin at http://phppgadmin.sourceforge.net/doku.php
The basic answer is (from what I can tell) that postgres ships with sane defaults. Lots of little gotchas exist in mysql that experienced mysql dbas know to deal with and avoid. On the more sophisticated side, pgSQL seems to have a focus on being "Really Awesome For Experts" where MySQL seems to be focusing on "Being Easy to Get Started".
I've been using pgSQL for my side project and for my "personal tooling" at work, and I can honestly say that it's just as easy for me as MySQL was for the same sort of things.
If MySQL works for you, there is no reason to change. Some people have quite a dependency on Postgres (hstore, JSON, transactional DDL, pubsub, PL/pgSQL, etc.).
I think there are two times when it makes sense to consider a DB migration: (1) early on, when it's easy; (2) if you are in major trouble with your existing DB.
Our dependency on Postgres is more to do with it being a lot safer with our data. It doesn't silently fail, the transactions have a better failure mode, and as previously pointed out it handles ALTER in production a LOT better than MySQL.
The combination of safety, speed/efficiency in schema/index alteration, and its increasingly good performance are why we depend on it. The rest is just gravy.
Do some searching and give it a try one weekend. Trying it is loving it.
The MySQL startups tended to say "We love MySQL. We've gotten in the habit of taking an hour or two of downtime in the middle of the night every week to run all of our schema migrations, and we've had to build our process around that, but once we had it in place, everything's been fine."
The PostgreSQL startups said "We love PostgreSQL. We run schema migrations in real-time during the middle of our workday, and we don't have any problems."
1. An additional potential point of failure
2. The core software (a DB, in this case) can (and probably will) evolve independently of the third-party tool--thus introducing an additional layer of maintenance problems.
I'd argue further--and this is of course just an opinion--that such a basic feature as this ought to be supported out-of-the-box by anything that claims to call itself a "database" in the sense that MySQL does.
It's an industry-standard tool and one of the most respected forks of MySQL. It's not a layer of maintenance problems and they sell commercial support.
Their customers include the BBC, Yelp, and Cisco: http://www.percona.com/about-us/customers
Also, for the record, Oracle added online schema changes in 5.6.
This has saved our bacon a number of times. The only kind of DB-related downtime we have is when we're doing a Postgres upgrade.
Over the years, Postgres has made up for those shortcomings, so it retains its respect among database wonks, and now other people can easily use it too, which is why you see a lot of people adopting it over the past few years.
But in overall usage numbers, I would be surprised if it were anywhere near MySQL. It takes a long time to overcome that kind of inertia.
There's still a bit to do on the auto-failover front, but that may end up being more of a third party undertaking. Postgres has failover facilities included, they just have to be driven by something external right now.
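The "something external" mostly just has to decide when to promote; the promotion itself is one command on the standby (the data directory path below is illustrative):

    pg_ctl -D /var/lib/postgresql/9.3/main promote
    # or touch the trigger_file named in the standby's recovery.conf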
If you count all the crappy shared hosts, XAMPP local installs, and hobbyists setting up their own little VPSes, perhaps.
Using a single, unreplicated database instance in production for anything serious is bizarre. Failed hardware is hardly unheard of.
It is incredibly common. Go check out a thousand businesses running mysql, you'll be able to count the ones using replication on your fingers.
>Failed hardware is hardly unheard of.
You don't need to use mysql replication to deal with that. Even the crappiest low end SAN storage devices do it vastly better than mysql does, without any of the bugs and problems mysql replication has.
Replication can be done off-site so you lose at most a couple of seconds' worth of data. I don't know anything about MySQL's replication, but my trust in PostgreSQL's is very high.
Which is why it is being replicated, like I said.
I'd wager that a huge number of MySQL users are using replication. Running a single database in a production environment is totally unacceptable and a major business continuity problem.
You don't need to use mysql's broken replication to get HA. Hell, I've seen more people (wisely) using DRBD for that than using mysql's replication. But even entry level storage devices do replication.
I would say that if you are just starting out and you already know MySQL and you are proving out your MVP, MySQL is still relevant.
Once you have a stable product and you need a better database, PostgreSQL is a good move up.
Strategically, because PostgreSQL is not just trying to be a free database checking off features. It's trying to be something better -- lots of innovation that is having a bigger impact on what developers and DBAs can do.
I recognize I'm used to MySQL but I was under the impression that because PostgreSQL has a similar syntax (SQL) it wouldn't take too long to pickup.
I've had better luck learning the psql command/shell than messing with pgAdmin, at least in certain cases. To take your example of displaying tables, open psql and type: \dt
The rest is just a search away.
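A few more meta-commands in the same vein ('users' is just an example table):

    \l         -- list databases
    \d users   -- describe a table
    \x         -- toggle expanded output for wide rows
    \?         -- the full list of meta-commands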
Edit: It seems they are working on it, but I'm not sure when we can expect a release: http://stuconnolly.com/blog/sequel-pro-postgresql-support/
Postgres is very powerful and can do a lot. I think I just need a good tutorial and some time to get used to it.
And even though I love PostgreSQL (and was working with all major databases), I still think that the real winner is sqlite :)
I never really understood why you'd want to use a separate dbms if it couldn't do proper constraints, triggers, transactions and materialized views for you -- then it starts to feel like a lot of wasted effort.
So for many of the use-cases where MySQL might have been appropriate, we now have sqlite, mongodb, memcache/redis and a few others.
Personally I don't really see any reasonable use-cases for mongodb, just as I didn't see many reasonable use-cases for mysql -- not that you couldn't build stuff on top of it, just that it wasn't a very good idea.
5.6 had some pretty critical improvements, especially to the query planner.
Safety is annoying. Safer systems usually generate more errors to prevent accidental mistakes. An extremely safe system will refuse even to boot the product if it hasn't been properly configured. It simply has a much more annoying safety net. If you care about safety, these annoying errors are a good sign. But if you're a newbie, they're just a big obstacle that makes for a steep learning curve.
Really. We still see many people who hate to wear a seatbelt, because it's annoying. And everybody did, before the benefits of seatbelts were widely known and accepted.
Also, the number of users says nothing about how well a product works. Usually the cheapest product takes the biggest user base, and MySQL is the cheapest to start with, due to its lack of safety. Search this thread for the name "natural219" and see why he chose MySQL over PostgreSQL. I believe that's why most people started with MySQL in the first place.
And surprisingly, there are many people who really don't care about data safety. (Maybe that's not so surprising; we always see those people on TV…)
Plus Wordpress has a hard requirement for MySQL, and like it or not a huge number of projects still use it as a framework.
RDS removes remarkably little of the pain of running a database instance (most of the pain that's removed is just the up front setup), and ends up adding a lot of inconveniences for your day-to-day operations.
Also don't count on their replication as your backup.
OK. That's two suggestions, but I think it's OK.
This statement seems like a complete non-sequitur: does anyone credible recommend replication as a backup strategy? It's like dinging a server vendor because you can't rely on RAID as a backup plan.
Also, you can't take advantage of replication (aside from their own read replicas within other RDS instances) or binary backups. Anything that requires access to the machine itself is impossible except through Amazon's support channels.
It's gotten better than it was, but it's still a headache to monitor and manage as a DBA.
The guide does mention replicating from a read replica (an intermediate RDS instance between the master and your offsite slave), but I've had no trouble replicating directly from the master instance.
One thing they don't cover is replication over SSL. AWS hadn't mentioned this shortcoming in the docs, last time I checked. To have MySQL replicate over SSL, the master and slave both need an SSL certificate signed by the same CA, which would require you to obtain a cert+key signed by the AWS RDS CA.
Of course you have the option of tunneling the replication connection through haproxy or stunnel running on an EC2 instance, but that has its own shortcomings. You can't use ELBs, since you can't register the RDS instance with an ELB.
With needing a DBA, even if you're still on RDS? I don't see what that has to do with PCI compliance.
With RDS only removing the up-front setup pain, at the cost of ongoing maintenance... as someone who is also familiar with PCI (and HIPAA, and DOD) compliance, I respectfully disagree with your disagreement (well, if you're working with DOD, AWS isn't even an option to begin with).
Given the choice between running my own DB on an AWS instance (which carries the same certifications as RDS) and using RDS, I will run my own instance every time. The setup just isn't onerous enough to justify the daily productivity cost.
I disagree with the notion that RDS removes remarkably little of the pain of running a database instance.
Yes, there are projects where RDS is not a great solution, but it definitely simplifies a lot of stuff. The notion that it "only removes up-front setup pain" is silly. If you manage your databases correctly, up-front setup pain should be the vast majority of all your basic admin operations. The "at the cost of ongoing maintenance" part is a real head scratcher for me. RDS basically gives you everything you'd have with a DB on an AWS instance except a local login, which one tries to avoid using like the plague anyway.
Let's look at a common problem that DBAs are typically given: "The database is slow!". Let's troubleshoot this fictitious problem on RDS:
Am I being affected by a noisy neighbor? Can't tell; contact Amazon support.
Can I look at top to see if the load is high on the box, and potentially why? No. I can look at historical trends, but not with enough granularity or information to be useful.
Can I look at the disk iops to see if there's any kind of problem there? No. Complete black box here; contact Amazon support.
Can I look at the slow log? Kind of. They'll push the slow log data into the database for you to query, but then you can't use tools to do aggregate tracking.
Pause for a moment for a quick MySQL RDS tip: pt-query-digest has a mode of operation that lets you do a processlist every 1/100th of a second and turn that into a pseudo slow log, which does work for RDS.
pt-query-digest --processlist h=10.0.0.1 --interval=0.01 --output slowlog > /tmp/fake_slowlog.out
Can I kill queries? Yes, using a stored procedure. Can't use any of the existing toolset around this (like pt-kill, which can help keep poorly written ad-hoc queries from getting out of hand).
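For reference, that looks like this (the thread id is a placeholder):

    SHOW FULL PROCESSLIST;             -- find the offending thread id
    CALL mysql.rds_kill(12345);        -- kill the whole connection
    CALL mysql.rds_kill_query(12345);  -- or just the running statement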
So, after many hours swapping emails with Amazon support, we've determined that we're actually spending a lot of time waiting on malloc mutexes. The internet says that using a non-default version of malloc will help with that - can I do that?
Nope. You're stuck.
Other things you can't do:
* Offsite backups that are in any form but MySQL dumps.
* Take advantage of new index types and compression support from TokuDB.
* Zero downtime failovers (We were able to help someone fake this; it was a PITA).
* Cross-region replication.
* Automated failovers using a reputable tool (MMM, MHA, etc).
* Access the error logs.
* Run multiple instances on one machine.
* Alter the disk elevator, i.e. the I/O scheduler (hopefully they're using something sane, like noop, but we'll never know)
* Alter the kernel swappiness.
* Troubleshoot crashes.
* Monitor and alert on a machine's vitals.
Now perhaps I'm just being a power-hungry admin, but these small things matter. They are the difference between a snappy DB which scales beautifully to 10,000+ QPS, and a sluggish DB that causes you to move to bigger hardware, because it's the only option open to you.
Databases just aren't that hard to set up. Install packages, install config files, start the DB, restore from a backup file, restart the DB, and you're golden. If you're particularly paranoid, set up the selinux contexts (I'd bet dollars to doughnuts that this isn't done on RDS instances), and create a security group that limits access to only the 22 and 3306 ports to your application hosts, and set up individual users.
This is particularly simple when you use an orchestration tool; I recommend Ansible personally.
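For the curious, a rough sketch of those steps on a Debian-ish box (package name and paths are the usual ones; the dump location is made up):

    sudo apt-get install mysql-server            # install packages
    sudo cp my.cnf /etc/mysql/my.cnf             # install config files
    sudo service mysql restart                   # (re)start the DB
    zcat /backups/app.sql.gz | mysql appdb       # restore from a backup file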
Sure, you can. Spin up multiple RDS's and benchmark them.
> Can I look at top to see if the load is high on the box, and potentially why?
If you are using top to monitor your box, you are already screwed. There is lots of support for remote monitoring.
> Can I look at the disk iops to see if there's any kind of problem there?
Disk iops are part of the built in monitoring and metrics provided with RDS.
> Can I look at the slow log? Kind of. They'll push the slow log data into the database for you to query, but then you can't use tools to do aggregate tracking.
If only there was a tool that could extract records from a database and compute aggregates...
> So, after many hours swapping emails with Amazon support, we've determined that we're actually spending a lot of time waiting on malloc mutexes. The internet says that using a non-default version of malloc will help with that - can I do that?
>Nope. You're stuck.
MySQL sucks. RDS provides no means to make it any better. Fortunately they do now provide PostgreSQL.
> Offsite backups that are in any form but MySQL dumps.
You can do that by replicating to an external MySQL server and doing whatever the heck you want with it.
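Roughly like this (endpoint, credentials, and binlog coordinates below are placeholders; rds_set_configuration keeps binlogs around long enough for the external slave):

    -- on the RDS instance:
    CALL mysql.rds_set_configuration('binlog retention hours', 24);

    -- on the external MySQL server:
    CHANGE MASTER TO
      MASTER_HOST='mydb.abc123.us-east-1.rds.amazonaws.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='...',
      MASTER_LOG_FILE='mysql-bin-changelog.000042',
      MASTER_LOG_POS=107;
    START SLAVE;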
> * Take advantage of new index types and compression support from TokuDB.
Yup. Until today it was also really hard to take advantage of different engines found in PostgreSQL. ;-) This is a totally different product.
In general, all of the stuff you are describing are features, not things that cause maintenance complexity. In fact, manipulating those things causes maintenance complexity.
> Databases just aren't that hard to set up. Install packages, install config files, start the DB, restore from a backup file, restart the DB, and you're golden.
I had no idea PCI compliance could be that simple. ;-)
> This is particularly simple when you use an orchestration tool; I recommend Ansible personally.
Yes, orchestration tools, if set up properly, are exactly how you'd want to do this kind of thing. If you already have all that set up to manage your database, RDS is likely not going to help.
You need to be running dedicated instances inside of VPC with your own DB install.
Administering databases is a full-time kind of responsibility. Yes, you can get pretty sane defaults and be up and running without much difficulty with MS-SQL, and MySQL has been a de facto standard in the LAMP stack. That said, PostgreSQL has been a rock-solid RDBMS. The commercial extensions for replication have been cumbersome and expensive. Here's hoping that AWS will grow/expand the replication options/features, and that they'll grow to include JS procs as that feature stabilizes.
> Database updates are made concurrently on the primary and standby resources to prevent replication lag.
Makes me feel better for setting up my own pg cluster on EC2 a week ago, which does allow reads from the replication slave. Plus, I can provision <1000 IOPS (provisioned IOPS is damn expensive with AWS), and get to use ZFS.
I don't actually know whether Heroku can failover across an AZ failure.
By default our followers and HA is automatically cross AZ. You also have an ability to create followers across region, but we do not automatically failover on those due to latency.
Heroku's solution is 2 to 4 times more expensive for the same type of DB, and RDS even allows for reserved instances to further lower the bill.
One area is that we're focused on delivering more guidance and expertise around what you're doing with your database, in addition to ensuring your database is healthy and running. An example of this is the notifications we deliver around unused indexes, places where you may benefit from other indexes, or other spots where you can quickly optimize your DB. This starts to free up a DBA for higher-value tasks, or for smaller shops lets you get by longer without needing a DBA.
Another big area is the features we deliver on top of Postgres. This ranges from followers, which allow you to easily scale read traffic or have your database replicated across not just AZs but also regions, to dataclips at the other end of the spectrum. Dataclips make it easy to share data in a simple way, as well as build richer dashboards by integrating with Google Docs, or quickly prototype APIs.
If you're curious on various technical details we'll be documenting that soon but would be happy to correspond via email, craig at heroku.com
I understand that "forks" and "follows" are easy concepts, stats are cool, etc., but I personally wouldn't want to pay double or triple for that (and I'm not a DBA, so I feel the pain). Not that my word counts for much, as I'm on the smallest Heroku production plan, so I guess my bill isn't exactly interesting for this kind of discussion. In my case, I would say that even saving $25 off $50 can be offset by these additional niceties. But I don't know what I would think if I were a customer paying $1000/mo.
Did you add disk? People seldom do, I can attest it's a big part of the bill...
> I don't actually know whether Heroku can failover across an AZ failure.
"Followers" have preferred another AZ for quite some time -- almost since the beginning -- and Heroku Postgres HA is based on similar technology. So, yes.
So pricing wise both double if you go multi-AZ.
I didn't claim to have done a detailed price comparison but a quick look at the small sizes which are currently the ones relevant to me.
No doubt about that. To date, Heroku's product model has a preference for fewer, but common, choices rather than more flexibility. Heroku's staff see fit to give in sometimes, but coarsely speaking, that's pretty much how it goes.
Last I heard Heroku is hosted on top of AWS. Does anyone know if Heroku's cheapest Postgres plan is hosted on a dedicated AWS Micro instance, or do they buy large AWS instances and host multiple databases on each box thereby potentially providing more IO performance?
Not all PLs are available; it's missing PL/V8 and PL/Python at least.
And it seems that all fdw (Foreign Data Wrapper) extensions are missing.
But it's a great start; I'm looking forward to trying it.
If anybody knows whether we can still access the WAL log, that would be very useful.
EDIT: you are right, PL/Python is supported as well. I only read the "Language Extensions: PL/Perl, PL/pgSQL, PL/Tcl" part at the top.
But I was surprised not to see PL/V8, which is sandboxed.
Here is a related whitepaper if you still want to setup PostgreSQL yourself: http://media.amazonwebservices.com/AWS_RDBMS_PostgreSQL.pdf
Edit: It does, see http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_P...
Now, comparing to Amazon, their '1.7 GB memory Small DB, 1-year reserved, multi-region' is around $28/month (with storage & transfer for my app, no more than $35/month). The equivalent Heroku plan, Tengu (1.7 GB mem), STARTS at $350/month!!! Wow, now I'm really rethinking my hosting platform... Amazon looks more attractive, even if I have to do a bit of sysadmin for my web server/cloud server.
I hope this helps you.
I don't know why the MySQL/PG prices are different.
Heavy/Large/1yr (reserved, multi-AZ)/Tokyo: PostgreSQL = $1548, Oracle = $2440
Seems a reasonable price. And I don't care about the MySQL price because it's not an option for me.
If/when a Heroku RDS plugin for Postgresql arrives, competing benchmarks, a cost calculator, pros and cons would be very interesting, indeed.
Heroku pricing: https://addons.heroku.com/heroku-postgresql
Amazon pricing: http://aws.amazon.com/rds/postgresql/
Though it was a free test version, I was surprised to see my database living together with many other people's. (`\l`)
Your database is available publicly, and you have only minimal security.
This is awesome news! I hope we can move to it at work!
It's a form of vendor lock-in and you shouldn't support it.
Without replication it's impossible to migrate out of RDS without taking downtime (I've been through this and it was painful).
You're paying for the "as a service" part of PaaS, IaaS, SaaS, *aaS.
A lot of people talk about how poor the storage performance on AWS is - but this seems to offer provisioned IOPS up to 30,000 IOPS.
I'm curious what sort of hardware/setup that translates to in the real world? Do you find your own dedicated setups have more throughput?
And there doesn't seem to be much info on the network bandwidth between RDS and EC2 either.
A PostgreSQL RDS micro instance falls to $0.009 per hour when reserved, while MySQL falls to $0.016 per hour.
If it isn't a typo, Postgres reserved instances are 1/5 of the on-demand price, which doesn't seem correct.
In PG 9.2 there were only two JSON functions, but 9.3 introduced more.
Every time I try to install PostgreSQL it fails. Every time I install MySQL it installs successfully with no problems. Actually, that's the extent of my experience with it, and I guess I'm fine dealing with a database that doesn't validate date formats strictly if I can use the damn database without hassle. I am totally fine using Postgres at a company or with another DB developer who knows how to set up databases properly, but if I am starting a new project, I am going to use MySQL, period.
If you're on a Mac and don't want a menubar icon (I decided this recently), `brew install postgres`, and then write yourself some functions to make starting and stopping easier:
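Something like this, assuming the default Homebrew data directory (adjust the -D path if yours differs):

    pg_start() {
        pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
    }
    pg_stop() {
        pg_ctl -D /usr/local/var/postgres stop -m fast
    }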
Postgres.app is nice when it works. When you install the latest version, the psql tool it bundles and exposes in the menu assumes you have a db that matches your user name, which, of course, doesn't exist, so you can't connect to Postgres :-)
It also doesn't work that great if you already have another version set up and running on the default port.
`lunchy start postgres`
The most utterly infuriating part of a new Postgres install is getting authentication set up. I shouldn't have to expressly edit a freaking INI file for basic user access, damnit!
With the tentative upsert changes made in 9.4's commitfest, the initial user config problems are my last major problem with Postgres...
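For anyone else stuck there: the file in question is pg_hba.conf (not actually an INI), and the usual dev-box tweak is a couple of lines; the path below is the common Debian layout:

    # /etc/postgresql/9.3/main/pg_hba.conf
    local   all   all                  trust   # local socket: no password
    host    all   all   127.0.0.1/32   md5     # TCP from localhost: password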
Also, why don't you use Heroku's Postgres.app, which is a one-click, self-contained app?
If you're on a FreeBSD/Linux desktop, how can you fail to install? Of course it doesn't work right just after install, and it needs manual initial configuration for security. But that shouldn't be hard for any Unix-family server developer.
If you're on a Windows desktop, why don't you run your own FreeBSD/Linux server VM instance for your own development?
If you're using Windows Server… with MySQL… Hmm, then I am sorry. I have no more ideas.
Makes it very easy if you are a Mac or Debian-based shop --- or if you just use Vagrant.
Currently this looks like:
1. Make sure we have the version of ruby we want
(if not, use rvm to install it)
2. Make sure our db is installed
(if not installed, use brew (making sure we have brew) to install it)
3. Create our database and users on the new db server
4. Run rake db:setup
sudo apt-get install postgresql
Who could install this nonsense??