For people here experienced with graph databases, do you typically use the graph db as your primary data store or do you use it in combination with something like postgresql? If you're using both, can you talk about how that works and if it's been successful for you?
I'm curious because I've had a couple of situations where I thought using neo4j (or some graph db) would be a natural fit for something I wanted to do, but I thought the rest of my data fit into postgresql just fine. My instinct is that if I'm doing this in a web app then querying from two different databases is going to slow down my responses a lot.
From your post: "The biggest issue was that we had data in the graph, that just didn’t feel right in the graph instead of a relational DB." -- What exactly was the problem, other than managing complexity? I read through your posts and I didn't see any mention of the technical aspects of your issues with Neo4j. Was your data just so large that going the relational-table route gave you a better handle on that complexity?
Actually you can have a graph database in postgres as well! Look into queries using "WITH RECURSIVE", and you can do pretty much anything a graph database could do. For the specific use case I had, there was actually no difference in performance between neo4j and postgres. I really enjoyed using cypher, so it was a pain to translate a query written in a graph-specific query language to a postgres equivalent with "WITH RECURSIVE", but because postgres was already part of the stack I stuck with it.
Depending on the data model there are a few ways to deal with cycles. My project involved a friendship graph, and doing queries like "find friends of X", or "find people who like X, who are within 2 degrees of friendship-separation from Y". These are problems where it was okay to have cycles in the graph, as the traversal depth was hard limited. You'll have more problems if you're doing questions like "find the cheapest path from A to B", although there is certainly a way to cope with cycles there as well.
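The depth-limited traversal described above can be sketched with a recursive CTE. SQLite stands in for Postgres here (the `WITH RECURSIVE` syntax is essentially the same), and the table and names are made up for illustration; the hard depth limit is what makes cycles in the friendship graph harmless:

```python
import sqlite3

# Hypothetical schema: a symmetric friendship edge list.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE friends (a TEXT, b TEXT);
INSERT INTO friends VALUES
  ('alice', 'bob'), ('bob', 'alice'),
  ('bob', 'carol'), ('carol', 'bob'),
  ('carol', 'dave'), ('dave', 'carol');
""")

# Everyone within 2 degrees of separation from 'alice'. The WHERE
# clause on depth caps the traversal, so cycles cannot loop forever.
rows = conn.execute("""
WITH RECURSIVE reachable(person, depth) AS (
    SELECT 'alice', 0
  UNION
    SELECT f.b, r.depth + 1
    FROM reachable r JOIN friends f ON f.a = r.person
    WHERE r.depth < 2
)
SELECT DISTINCT person FROM reachable WHERE person != 'alice'
""").fetchall()
print(sorted(p for (p,) in rows))  # ['bob', 'carol']
```

The equivalent Cypher is a one-liner, which is the translation pain mentioned above; but the Postgres/SQLite version does the same traversal.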
I don't currently use graph databases in production, but I do have some experience.
I use both, in a similar vein to "using Elasticsearch". The graph database could be your primary store, but sometimes it's more pragmatic to have two stores, with a "solid" base.
This is not to say that it can't be done. What I'm stressing is that larger "changes" are difficult to handle - which matters a lot at the start of your process, when you're deciding how to model your data, and less at the end. For instance, changes to node layout (new properties? different types? other constraints?) and mass updates are a bit cumbersome.
Usually I have more than one SQL table (naturally) since the data I've used in graph databases is mix and match (otherwise I'd just use a fixed schema and some relational DB).
-- As for "how that works", for me it's:
routinely update from my base database with queries like: ID > last ID.
This has worked as expected, in terms of what data you get in, and which limitations you impose (e.g. timeliness).
I'm currently making a shift to running all data in my graph database as I've settled on a model (which edges, which nodes, which properties).
> querying from two different databases is going to slow down my responses
True, but depending on your data (do you know one of the queries beforehand - e.g. is your postgresql query enriching whatever your graph query returns) you might have success tying (inserting) some of the SQL data to your graph database.
I've got a lot of background in RDF graph stores. It depends a lot on your usage, but I think for your typical web app, you'd be better off using a Postgres install, and making use of fancier features like WITH RECURSIVE as necessary. Graph stores often miss out on features like guaranteed relational integrity and guaranteed constraints, which I find invaluable for safe application development in the face of concurrent updates.
Graph stores are typically much slower for repetitive data that fits cleanly into a relational model. This isn't to say they're not useful - for more irregular data they're a fantastic fit - it's just that very irregularly structured data isn't the common case.
Of course, you can always use two different stores - much like many sites do with a separate lucene/elasticsearch index for text search - but your graphing needs must be relatively componentised for that to work well.
The ones I've spent most time with were Jena/TDB, Virtuoso, 3store, along with a couple of proprietary engines. BigOWLIM is also a strong contender in the space. I've used them in the context of both object storage and semantic web data storage.
My experience is that if you don't need constraints/enforced relational integrity, RDF stores make for really simple/easy object storage. There's definitely a performance tradeoff, though - depends on what you need, really!
Hmm. I guess what I'm getting at is that I typically start building a web app by writing model objects that get turned into db schemas by the ORM (in Python, this is usually the Django ORM or SQLAlchemy), and the ORM turns attribute access into joins (either eagerly or lazily).
So an ORM that usefully interpreted model subclassing etc., created self-joining tables, and could query the resulting model using WITH RECURSIVE as appropriate would be a real boon.
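As a sketch of what such an ORM might generate under the hood - the self-joining table and the query are illustrative, not any particular ORM's actual output - here is the adjacency-list table plus the recursive query for "all descendants of a node", again using SQLite as a stand-in:

```python
import sqlite3

# Hypothetical self-joining table an ORM could emit for a tree of
# model objects; parent_id references the same table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES category(id),
    name TEXT
);
INSERT INTO category VALUES
  (1, NULL, 'root'),
  (2, 1, 'books'),
  (3, 2, 'fiction'),
  (4, 2, 'nonfiction');
""")

# The WITH RECURSIVE query the ORM could generate for
# root.descendants -- walk parent_id links downward.
descendants = conn.execute("""
WITH RECURSIVE sub(id) AS (
    SELECT id FROM category WHERE name = 'root'
  UNION ALL
    SELECT c.id FROM category c JOIN sub s ON c.parent_id = s.id
)
SELECT name FROM category
WHERE id IN (SELECT id FROM sub) AND name != 'root'
""").fetchall()
print(sorted(n for (n,) in descendants))  # ['books', 'fiction', 'nonfiction']
```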
A graph can do what a table can do and a lot more, but that's usually not the whole issue. In practice you need to consider things like speed, volume, scale, consistency, redundancy, computation, ad-hoc vs. planned operations, use of resources (disk, memory, CPU, GPU), etc. And as most NoSQL systems just aren't as mature as their table-based counterparts, you'll also have to factor in your tolerance for issues and general system crankiness. All that being said, some applications just cry out for graphs, particularly apps that involve items linked in pairs. Social apps (people linked by friendships), travel (places linked by flights), communications (people linked by messages), all of these can play hell with an SQL database but are naturals for graph databases.
I agree with the idea that tables are just strict graphs and as such a graph database is usually capable of substituting a relational database. I think many graph DBs lack a sophisticated enough query language to bridge that gap. At Orly (https://github.com/orlyatomics/orly) we're working on a powerful query language, and it's nice to see that Cayley is doing the same.
> querying from two different databases is going to slow down my responses
I think querying 2 different systems tends to be slower, but more importantly you lose transactionality. If you can use a single system that is at least on par with your relational system for your run-of-the-mill data and also gives you a very powerful graph, then that's a big win.
I work at a graph-focused firm, XN Logic, where we use an unhydrated graph to store and analyze the relations, the appropriate store for large volumes of information, and Datomic to store mutations to the graph for history analysis.
I first started with neo4j as a primary data store for our semantic graph but there are some limitations that are forcing us to look for alternatives.
1. Adding edges to a neo4j graph is painfully slow. For a large graph with a few million nodes, it can take days.
2. Scaling neo4j on a cluster is either not possible or a painful process; I've yet to discover which.
However, the greatest advantage that neo4j offers is the ability to query a path. So far, no other graph database that I know of has this ability (including Apache Spark and Giraph).
It's quite possible to build a directed graph database as an adjacency list in redis. We tried this and it's super fast and scalable. However, querying is very painful.
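To make the adjacency-list idea concrete: a real deployment would use Redis sets (roughly `SADD edges:A B` to add an edge and `SMEMBERS edges:A` to read neighbors), but here a plain dict stands in so the logic is visible without a server. The sketch also shows why querying gets painful - every hop is another round of client-driven lookups:

```python
# Directed graph as an adjacency list; a dict stands in for Redis sets.
edges = {}

def add_edge(src, dst):
    # O(1) insert -- this is why writes are fast and scalable.
    edges.setdefault(src, set()).add(dst)

def neighbors(node):
    # In real Redis this is one SMEMBERS round trip per call.
    return edges.get(node, set())

def reachable(start, max_depth):
    # Client-side BFS: each frontier expansion is another batch of
    # queries against the store, which is where querying gets painful.
    seen, frontier = {start}, {start}
    for _ in range(max_depth):
        frontier = {n for f in frontier for n in neighbors(f)} - seen
        seen |= frontier
    return seen - {start}

add_edge("a", "b"); add_edge("b", "c"); add_edge("c", "a")
print(sorted(reachable("a", 2)))  # ['b', 'c']
```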
1. Adding nodes and relationships in Neo4j does not have to be slow. It really depends on how you are loading that data in. Neo4j provides many options for data import and a transactional endpoint over HTTP for batching transactions and decreasing disk write overhead.
2. The reason Neo4j is the only database that allows you to query a path is the same reason that setting up clustering or sharding is difficult. If your graph is complex then the problem is "How do I split up these subgraphs into shards so that traversals don't have to traverse across shards?" -- Building a giant adjacency list and using that as a traversal index is a clever idea, I must admit. :)
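The batching point in (1) is a general principle, not specific to Neo4j's import tooling: committing once per edge pays a durability sync per edge, while grouping the load into large transactions amortizes that cost. A sketch with sqlite3 standing in for any disk-backed store (table name and sizes are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edge (src INTEGER, dst INTEGER)")
edges = [(i, i + 1) for i in range(100000)]

# Anti-pattern: one transaction per edge means one write barrier per
# edge on a disk-backed store -- this is what turns bulk loads into
# multi-day jobs.
# for e in edges:
#     conn.execute("INSERT INTO edge VALUES (?, ?)", e); conn.commit()

# Batched: the whole load in one transaction, so the store pays the
# sync cost once per batch instead of once per edge.
with conn:
    conn.executemany("INSERT INTO edge VALUES (?, ?)", edges)
print(conn.execute("SELECT COUNT(*) FROM edge").fetchone()[0])  # 100000
```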
If you query the two databases in parallel, then the response time should be equal to the slower of the two responses (not the sum of them).
But if you use two databases, then you have to maintain both of them, and if they are on the same server, sharing the same resources could make them slower than just using one db.
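The max-not-sum point is easy to demonstrate with two threads. The two query functions below are stand-ins with made-up latencies (sleeps simulate the round trips):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical round trips; 0.2s and 0.3s are illustrative latencies.
def query_postgres():
    time.sleep(0.2)
    return "rows"

def query_graph_db():
    time.sleep(0.3)
    return "paths"

t0 = time.perf_counter()
with ThreadPoolExecutor() as pool:
    f1 = pool.submit(query_postgres)
    f2 = pool.submit(query_graph_db)
    results = (f1.result(), f2.result())
elapsed = time.perf_counter() - t0

# elapsed is ~0.3s (the slower query), not ~0.5s (the sum)
print(elapsed < 0.45)  # True
```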
We use an RDF datastore (OpenLink Virtuoso, clustered edition) as our primary datastore. We use it in combination with Apache Solr to provide fulltext search over various resources that we extract and pass through an Indexing pipeline to go from RDF Graph -> Search Document.
It's worth noting that Virtuoso (produced by my employer, available in free Open Source and paid Commercial variants, http://virtuoso.openlinksw.com/features-comparison-matrix/) is a hybrid Relational/Graph/XML/FreeText storage and query engine, which natively supports SQL, SPARQL, XPath, XQuery, and many other open standards. It might satisfy the OP's needs on its own.
Virtuoso's support for open standards makes it easy to use it as a complete solution covering all the bases, or, as in @philjohn's case, to plug-and-play with best-in-breed solutions along any axis where our implementation proves not to serve your needs for any reason. (We do want to know how and why we don't measure up, so we can improve that aspect!)
I've been using a graph db (neo4j) as my primary database in my last project and it has been a pleasure. I wish a good graph database were hooked into the new Google Cloud stuff so it could be queried/visualized/performance-analyzed in a similar fashion to the BigTable demo earlier.
I'm curious as to how you're dealing with the licensing costs, as they shut down most hopes I had. Granted, Neo4j said they had a "startup" license, but due to family issues I never followed up. It still seems a bit extreme to me for the price, but at the time I also couldn't really compare it to much else.
And on the topic of Cayley... I'm stoked. I'll give it a try tonight. "Easy to use" and "graph databases" haven't really gone hand in hand for me.
> I'm curious as to how you're dealing with the licensing costs, as they shut down most hopes I had. Granted, Neo4j said they had a "startup" license, but due to family issues I never followed up. It still seems a bit extreme to me for the price, but at the time I also couldn't really compare it to much else.
Did you want to get support from the company that makes neo4j? If not, then you don't need a license.
Not entirely true. The only free version is the "Community Edition", which runs on a single node; no clustering support, no monitoring, no hot backups, no caching. 
It's pretty much useless in a server environment since even replication isn't possible; if you want a redundant setup -- which you will -- you will have to keep the nodes synchronized yourself. Not to mention that since Neo4j is an in-memory database, it puts a hard limit on your dataset size.
They have a "Personal License", but it lasts for one year and you're not allowed to use it if you have capital funding or a certain amount of revenue.
Actually much of this is not true.
* All Neo4j versions have caching
* Neo4j Enterprise is available for free for any AGPL project and for personal use and early startups
* Neo4j is not an in-memory database, it is a persistent, fully transactional database, it uses the available memory for caching the hot dataset
There are cases where using more than one instance would be desirable (I'm not trying to over-optimize -- I swear! ;-_-). I suppose one server can go pretty f#€£ing far. But all the clustering requires a license (~$12k per server for startups?) unless the code which uses it is GPL or AGPL (I think?).
Anyone willing to explain the AGPL in layman's terms would be greatly appreciated -- or especially Neo4j's application of it. To me, it seems cost prohibitive for lean startups of one or two people.
Again, they've said to contact 'em -- so it may be case by case. I could just be worrying about non-existent problems.
I tried Titan and Orient, I didn't find either as nice as Neo4j to use in my code. Though Neo4j, for me, was much more difficult to setup and use as a cluster (at the time).
... And more on topic, I'm about to install Cayley!
neo4j uses the GPL (specifically version 3), not the Affero GPL. You can build a service based on the community edition without sharing the source of your service; you only need to distribute source (under a GPLv3-compatible license) if you distribute the software itself to others.
(Summary of the Affero GPL: you must distribute the full source of your service, under an AGPL-compatible FOSS license, to the users of your service. Summary of the GPL: you must distribute the full source of your program, under a GPL-compatible license, to anyone you distribute your program to.)
It does look like the "Community" edition leaves out the clustering features, so if you need those you'd likely need a license for the proprietary version.
Neo4j is an open source graph database. The community and enterprise editions can be used in production without a license (including HA clustering). Licensing is required when you choose not to open source your code that uses Neo4j. When you have a commercial application of Neo4j, the price for the license is well justified so your customers are not impacted when an issue arises in your implementation. Most resolutions are a matter of creating an unmanaged extension that runs code embedded in the JVM.
This is one place where log-structured merge systems can really prove themselves. There is rarely a need to sync on every change, as most changes are independent. You can usually gain lots of write throughput by syncing at the speed the hardware is optimized for and squeezing as many append-only changes into those logs as possible. At Orly we've spent quite some time looking at ways to deal with these bottlenecks.
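The sync-per-batch idea can be sketched generically - this is not Orly's implementation, just the amortization pattern it describes, with made-up names and batch sizes:

```python
import os, tempfile

# Buffer independent appends and fsync once per batch, at whatever
# granularity the hardware handles well, instead of syncing per change.
class AppendLog:
    def __init__(self, path, batch_size=1000):
        self.f = open(path, "ab")
        self.batch_size = batch_size
        self.pending = 0

    def append(self, record: bytes):
        self.f.write(record + b"\n")
        self.pending += 1
        if self.pending >= self.batch_size:
            self.sync()

    def sync(self):
        self.f.flush()
        os.fsync(self.f.fileno())  # one sync amortized over the batch
        self.pending = 0

path = tempfile.NamedTemporaryFile(delete=False).name
log = AppendLog(path, batch_size=100)
for i in range(250):
    log.append(b"edge %d" % i)
log.sync()  # flush the partial final batch

print(sum(1 for _ in open(path, "rb")))  # 250
```

With batch_size=100 the 250 appends cost three fsyncs instead of 250, which is the write-throughput win being described.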
I'm a developer on Orly and I focus on the language portion of the project. Orly takes a different approach in this regard by providing a general-purpose functional language as its "query" language, rather than providing a language specifically designed to traverse/query the graph. This approach allows us to perform not only queries but also arbitrary computations on the server-side and take advantage of the powerful resources available on the server such as large memory, CPUs and perhaps most interesting are GPUs.
Getting performance out of a graph often depends on being able to express your query effectively. The last thing you want to do is plunk down a cursor on a node and have your client start wandering around the graph, following edges. All that back-and-forth chattiness is a non-starter network-wise and it gives the database engine essentially no chance to optimize the query. I've been favoring scripts that let me use free variables--like Prolog logic variables--to describe shapes in a graph and then let the database server find bindings that match. Like 'print a.name, b.name for a, b, x, y where a is_friend_of b and a is_friend_of x and b is friend_of y and not x is_friend_of y and x.city == y.city'
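The shape-matching query above can be illustrated with a naive generate-and-test matcher over a toy dataset (all names are made up). A real engine would use indexes and join ordering rather than this brute-force scan, but the point stands: the server searches for bindings of the free variables, and the client only sees the results:

```python
from itertools import product

# Toy dataset: person -> city, plus a directed friendship relation.
people = {"a1": "nyc", "b1": "nyc", "x1": "sf", "y1": "sf"}
friends = {("a1", "b1"), ("a1", "x1"), ("b1", "y1")}

def is_friend(p, q):
    return (p, q) in friends

# Find bindings for the free variables a, b, x, y matching the shape:
# a-friend-b, a-friend-x, b-friend-y, NOT x-friend-y, same city.
matches = [
    (a, b, x, y)
    for a, b, x, y in product(people, repeat=4)
    if is_friend(a, b) and is_friend(a, x) and is_friend(b, y)
    and not is_friend(x, y) and people[x] == people[y] and x != y
]
print(matches)  # [('a1', 'b1', 'x1', 'y1')]
```

One round trip returns all satisfying bindings, versus a cursor-based client walking edges one hop at a time.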
Almost all graph databases use Gremlin as their query language, but I really love OrientDB's approach of using SQL for querying graphs. It feels more natural in my opinion, plus it lowers the barrier to entry for people who already know SQL.
Some of the things I like about Cayley:
1. Switchable backends (I wonder if I can configure it to use CouchDB as a store).
2. The documentation gets right to the point. When I first tried my hand at graph databases I couldn't understand where to start, but Cayley's approach is pretty straightforward, and it wins bonus points from me for including a big dataset :)
A question: I see no mention in the docs of running it on multiple nodes (where does it stand with regard to CAP, etc.?)
I have. I'm on the team that's been developing it for the past four years. It's nice to see graph databases getting some popular traction at last. (I was into them before they were cool, of course.) I've only just started looking at Cayley but it looks like there are some significant differences between the projects. Orly is designed for high-speed, high-volume applications that need large-scale storage and consistent transactions. We're more OLTP than OLAP, which seems to be the way Cayley leans. Most graph systems tend toward analytic applications.
Maybe so. Orly has a somewhat leveldb-like component called a Repo which might slot in well. A Repo is a log-structured merge storage system that also provides indexing, access to previous values, and, most importantly, consistent reads and writes, even across multiple Repos. It also makes good use of resources (RAM, SSD, HDD) to optimize performance and cost. (Also working on letting it run on GPUs, for insanely high performance for people with the power and air conditioning budgets.)
I've seen a couple of mentions of graph dbs on GPUs in this thread, and it does seem like a (somewhat) obvious fit. Anyone aware of any projects that make (good) use of that right now? Not necessarily a stand-alone service, but also things like an embedded graph db ("Berkeley DB for graphs") or something like that?
Can anyone provide a working example for the visualization feature? The docs say that you can use the Tag functionality to label source/target nodes for sigma.js rendering, but bridging the gap between that suggestion and the actual query does not seem trivial.
However, I'll point out that most of the actually-used open-source projects coming out of Google aren't actually "Google projects", they are "projects by Googlers released under the OSPO process". LevelDB (Jeff Dean & Sanjay Ghemawat), Protocol Buffers (Kenton Varda), Guice (Bob Lee, Jesse Wilson, and Kevin Bourillion), Gumbo (myself), and angular.js (a team within DoubleClick) all started out as small internal projects built to scratch an individual's itch that were then released externally because hey, why not. The "corporate" open-source projects have been things like Android, Chrome, GWT, Closure, Polymer, etc.
It does have bulk import; I've been loading largish subsets of the Freebase dumps.
Load speed is pretty good (into persistent storage, I assume) and can be improved with some of the database parameters. A rough estimate is that a million triples or so takes about 5 minutes, but that slows down as it gets bigger. 134m triples took me 6-8hrs, so I slept on it.
Virtuoso 7.1 and OWLIM 5.5 have similar loading speeds. In this case the decompression algorithm is often the bottleneck, so multiple files need to be read in parallel to go faster. Oracle 12c Semantic Network and Yarcdata uRiKA can also load faster if correctly set up.
Over my dead body. Homebrew messes up my /usr/local, leaving its heaps of crap everywhere. I don't find that acceptable. MacPorts at least has the decency to put itself into /opt/local, which isn't normally used by anything else.