One-level deep inheritance can be great for abstracting common functionality, and multi-level inheritance sometimes makes sense in very large applications. What you want to watch out for is multi-level inheritance with an inheritance hierarchy that parallels some hierarchy that is merely incidental to the code's functionality, like the page navigation hierarchy.
I would say that having a multi-level inheritance hierarchy with a greater-than-"constant" number of children per parent in one package is a code smell.
There are few things that truly exemplify quality; I count the simple but not minimalist visual aesthetic of GNOME 2 as one of them.
It is unfortunate that they changed the aesthetic so much with GNOME 3, particularly making it more minimalist. I prefer GNOME 2's prevailing toolbar aesthetic of icons with text, and its less integrated high-level desktop management scheme compared to the Shell. Even if the old toolbar style wasted space and the new desktop scheme provides more power, they destroyed the configurable simplicity of GNOME 2 in favor of minimalism. In addition, you have to respect the history of a product's UI if you wish to retain users who want to stay comfortable in the software they use.
Haskell is mainly developed and used by academics in the field of programming language theory, along with the occasional academic who does some work in industry, so it's not really the whole story to call it "software programmers build for each other".
Yeah, many programmers don't use them either (though some of the best ones do). Those tools are not beginner friendly: they actively resist GUI usage patterns and are less useful to newcomers than a simple text editor.
They also include a lot of cool time-saving features and are very customizable.
That's the absolute number of additional deaths, not the relative change.
> Planned out-of-hospital birth was associated with a higher rate of perinatal death than was planned in-hospital birth (3.9 vs. 1.8 deaths per 1000 deliveries, P=0.003; odds ratio after adjustment for maternal characteristics and medical conditions, 2.43; 95% confidence interval [CI], 1.37 to 4.30; adjusted risk difference, 1.52 deaths
per 1000 births; 95% CI, 0.51 to 2.54). The odds for neonatal seizure were higher and the odds for admission to a neonatal intensive care unit lower with planned out-of-hospital births than with planned in-hospital birth. Planned out-of-hospital birth was also strongly associated with unassisted vaginal delivery (93.8%, vs. 71.9%
with planned in-hospital births; P<0.001) and with decreased odds for obstetrical procedures
It's called NoSQL, which removes the need for schema migrations for things like adding or deleting columns.
This could be solved for relational databases if you implemented application-level abstractions that let you store all your data as JSON, but create non-JSON views in order to query it in your application using traditional ORMs, etc.
So, store all data using these tables, which never have to be changed:
- data_type
- data (int type_id, int id, json data)
- foreign_key_type (...)
- foreign_keys (int type_id, int subject_id, int object_id)
(we'll ignore many-to-many for the moment)
And then at deploy time, gather the list of developer-facing tables and their columns from the developer-defined ORM subclasses, make a request to the application-level schema/view management abstraction to update the views to the latest version of the "schema", along the lines of https://github.com/mwhite/JSONAlchemy.
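Roughly, that deploy-time step could look like the sketch below. It uses SQLite's JSON1 functions as a stand-in for Postgres JSON storage, and the table and function names are illustrative, not JSONAlchemy's actual API:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data_type (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE data (type_id INT, id INT, fields TEXT);  -- JSON as TEXT
""")

# The "schema" as it might be gathered from ORM subclasses at deploy time.
app_schema = {"post": ["title", "score"]}

def sync_views(conn, schema):
    # (Re)create one developer-facing view per logical table.
    for type_id, (name, columns) in enumerate(schema.items(), start=1):
        conn.execute("INSERT INTO data_type VALUES (?, ?)", (type_id, name))
        cols = ", ".join(
            "json_extract(fields, '$.%s') AS %s" % (c, c) for c in columns
        )
        conn.execute("DROP VIEW IF EXISTS %s" % name)
        conn.execute(
            "CREATE VIEW %s AS SELECT id, %s FROM data WHERE type_id = %d"
            % (name, cols, type_id)
        )

sync_views(conn, app_schema)
conn.execute("INSERT INTO data VALUES (1, 1, ?)",
             (json.dumps({"title": "hello", "score": 42}),))
print(conn.execute("SELECT title, score FROM post").fetchall())
# [('hello', 42)]
```

The application queries `post` like any ordinary table; only `sync_views` knows the rows actually live in the generic `data` table.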
With the foreign key table, performance would suffer, but probably not enough to matter for most use cases.
For non-trivial migrations where you have to actually move data around, I can't see why these should ever be done at deploy time. You should write your application to be able to work with both the old and new versions of the schema, and have the application do the migration on demand as each piece of data is accessed. If you need to run the migration sooner, then run it all at once using a management application that's not connected to deploys, with the migration for each row in a single transaction, eliminating downtime for migrating large tables.
I don't have that much experience with serious production database usage, so tell me if there's something I'm missing, but I honestly think this could be really useful.
> With the foreign key table, performance would suffer, but probably not enough to matter for most use cases.
Citation needed :) That's going to really depend.
I'm not for or against NoSQL (or any platform). Use what's best for you and your app!
In our case, NoSQL makes for a bad database approach. We do many cross-sectional queries that cover many tables (or documents in that world). For example, a Post document doesn't make a ton of sense; we're looking at questions, answers, comments, users, and other bits across many questions all the time. The same is true of users: showing their activity would be very, very complicated. In our case, we're simply very relational, so an RDBMS fits the bill best.
Sorry for being unclear. I'm not proposing NoSQL. I'm saying that many NoSQL users really mainly want NoDDL, which can be implemented on top of Postgres JSON storage while retaining SQL.
- data (string type, int id, json fields)
- fk (string type, int subj_id, int obj_id)
select
data.id,
data.fields,
fk_1.obj_id as foo_id,
fk_2.obj_id as bar_id
from data
join fk as fk_1 on data.id = fk_1.subj_id
join fk as fk_2 on data.id = fk_2.subj_id
where
data.type = 'my_table'
and fk_1.type = 'foo'
and fk_2.type = 'bar'
What would the performance characteristics of that be versus if "foreign keys" are stored in the same table as the data, if fk has the optimal indexes?
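One cheap way to start poking at that question (no substitute for measuring on Postgres with realistic data volumes) is to build the layout and look at the query plan. Here SQLite stands in for Postgres; the table names follow the comment above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (type TEXT, id INT, fields TEXT);
    CREATE TABLE fk (type TEXT, subj_id INT, obj_id INT);
    CREATE INDEX fk_lookup ON fk (type, subj_id, obj_id);  -- covering index

    -- Inline layout for comparison: the keys live next to the data.
    CREATE TABLE data_inline (id INT, foo_id INT, bar_id INT, fields TEXT);
""")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT data.id, fk.obj_id
    FROM data JOIN fk ON data.id = fk.subj_id
    WHERE data.type = 'my_table' AND fk.type = 'foo'
""").fetchall()
for row in plan:
    print(row)  # the join side should be an index SEARCH on fk_lookup
```

With a covering index, each "foreign key" costs one extra index lookup per row versus reading a column in place; whether that matters depends entirely on data size and access patterns.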
If your database doesn't enforce the schema, you still have a schema; it's just ad-hoc, spread across all your different processes, and no one quite agrees what it is. In the real world, as requirements change and your app/service increases in complexity, this becomes a constant source of real bugs while simultaneously leading to garbage data. This is not theoretical; we have a lot of direct, painful experience with it. In the best case, your tests and tooling basically replicate a SQL database trying to enforce the schema you used NoSQL to avoid in the first place.
Indexes are fast, but they aren't magic. A lot of what a traditional SQL database does is provide a query optimizer and indexes so you can find the data you need really fast. Cramming everything into a few tables means everything has to live in the same index namespace. Yes, you can use views and sometimes even indexed views, but then you have a schema, so why jump through hoops to use non-optimized storage when the database has actual optimized storage?
Separate database tables can be put on separate storage stacks. A single table can even be partitioned onto separate storage stacks by certain column values. Cramming everything into four tables makes that a lot more complicated. It can also introduce contention (depending on locking strategies) where there wouldn't normally be any.
IMHO most systems would be better served by sharding databases than by using NoSQL and pretending they don't have a schema. If application design prevents sharding then scaling single-master, multiple-read covers a huge number of cases as well. The multiple-master scenario NoSQL systems are supposed to enable is a rare situation and by the time you need that level of scale you'll have thrown out your entire codebase and rewritten it twice anyway.
The key to schema migrations is just to add columns and tables as needed; don't bother actually migrating. Almost all database engines can add columns for "free" because they don't go mutate existing rows. Some can drop columns for "free" too, by marking the field as obsolete and only bothering to remove it when the rows are touched.
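For instance (SQLite shown; Postgres behaves the same for columns without defaults, and since version 11 even for columns with defaults), the ALTER is a metadata-only change and existing rows just read back NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INT, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# "Free": no table rewrite; the existing row is untouched on disk.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

print(conn.execute("SELECT id, name, email FROM users").fetchall())
# [(1, 'alice', None)]
```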
Postgres (and at least one other RDBMS) has partial indexes, which pretty much solves the index namespace problem you mention: http://www.postgresql.org/docs/8.0/static/indexes-partial.ht... Partial indexes are integrated into the proof-of-concept repo I linked.
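Applied to the generic data table from upthread, each logical table gets its own index via the WHERE clause. SQLite also supports partial indexes, so it can stand in for Postgres in this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (type TEXT, id INT, fields TEXT)")

# One index per logical table; rows of other types never enter it.
conn.execute("""
    CREATE INDEX data_my_table_id ON data (id)
    WHERE type = 'my_table'
""")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM data WHERE type = 'my_table' AND id = 7
""").fetchall()
print(plan[0])  # the lookup uses data_my_table_id
```

Because the planner can prove the query's predicate implies the index's predicate, it searches the small per-type index instead of a shared namespace over all rows.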
Storing a data type field in the generic storage table enables the same partitioning ability as a standard schema.
99% of NoSQL database users just don't want to deal with migrations, even if they're "free" (another big issue is synchronizing application code state and DB migration state of production, testing, and developer machines), so what they really need is NoDDL, YesSQL.
> Almost all database engines can add columns for "free" because they don't go mutate existing rows. Some can drop columns for "free" too by marking the field as obsolete and only bothering to remove it if the rows are touched.
Didn't know that, thanks.
> It can also introduce contention (depending on locking strategies) where there wouldn't normally be any.
Didn't think of that. I'm aiming this at the 99% of NoSQL users for whom doing things you could do with SQL requires much more effort, so letting them do it with SQL justifies a modest performance degradation. But if you have any good links on how this storage design would affect lock contention, please share.
A team of developers all working off of master doesn't necessarily require much communication about who's working where.
If your code is well organized into modules broken down by functional area, it should reduce the number of potential conflicts.
Also, fear of merge conflicts is somewhat unjustified; most conflicts can be resolved during a git rebase without much work, and the git rerere option [1] and git imerge [2] can also help with this.
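For reference, rerere is off by default; enabling it is a one-time config change (imerge is a separate workflow and needs the git-imerge tool installed):

```shell
# Record conflict resolutions and replay them automatically on repeat conflicts
git config --global rerere.enabled true
```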
If developers would actually learn how to resolve merge conflicts, and not be afraid of the occasional resolution that requires understanding the other change and writing new code that incorporates both changes, it's less overhead than communicating about pending changes.
You're right, there are a lot of devs like that. In this day and age of DVCS any developer worth their salt should be able to manage merges properly. It sounds like SO might not hire people who are incapable of understanding merging.
It seems like NoSQL could be left behind except for the .001% of use cases that actually require it and can't easily be replaced with extensions or (hopefully) automatable configurations of Postgres. But that would require application-level abstractions, and the database community doesn't value those enough, as evidenced by SQLAlchemy [1] not being highlighted on the homepage of every RDBMS project for the awesome power and flexibility it gives the developer.
Specifically, a JSON column should be used to store everything other than primary keys and foreign keys, and views and indexes should be automatically created based on the schema defined in the application (i.e., get the schema from the ORM at deploy time and post the data to a schema/migration management system), using something like https://github.com/mwhite/JSONAlchemy.
It is entirely possible to implement the CouchDB or MongoDB API on top of Postgres JSON, for instance.
Enough of it is luck that it's unlikely a team would go undefeated for half a season, but it's also the fact that it's hard to assemble a team with only the best players; most teams have a mixture of player ability. If you put the best major league team in the minor leagues, they'd probably have a 90%+ winning percentage. (The highest MLB season winning percentage is 76%; the longest winning streak is 26 games out of 162, including a tie.)
Two additional points. 1) The mixture of player ability is mostly due to cost penalties for paying players a higher salary (although Yankees just pay it). Not as distributed as (american) football, but still distributed. 2) Note the longest winning streak is 26 games out of 162... out of 162. Longest basketball streak is 33 games (not enough for an undefeated 82 game season). The ability to have a 162 game streak that starts on game one and lasts through the baseball season is essentially impossible. It is possible to have an undefeated season in american football where the season is 16 games (but it is rare -- hasn't happened since pre-NFL 1937, 1948). Baseball has an order of magnitude more games per season.
EDIT: I knew that wasn't right (that's what I get for haphazardly scanning Wikipedia for facts). A couple of others since then: the 2007 Patriots and 1972 Dolphins also had undefeated seasons in the NFL. Still rare, and still a fraction of the number of games. Point is: luck is only part of it; no matter the sport, staying undefeated is difficult, especially when a season means 162 games! Thanks for pointing out the mistake, everyone. Sorry for the confusion!