The RDF database marketplace is well established, and the likes of MarkLogic, DB2, and Oracle have clearly found profitable reasons to add RDF support. I believe RDF has good traction in knowledge-intensive industry domains such as clinical research and life sciences.
Disclosure: I work on Crux, which adds bitemporal versioning and eviction to a document->triplestore model running on top of Kafka.
That 80-node cluster of DL380s was a beast to operate, but damn was it spiffy when it was working well.
Seems to me they'd be using it internally.
Bonus question: What are some real-life use cases for triple stores?
Sounds contrived, but can be handy in many use cases.
Triple relationships like the above are the kind of queries a Prolog engine can answer well (and far more).
The Semantic Web's RDF is also like this:
s-p -> o
p-o -> s
s-o -> p
and then you have indexes which are good for those triple patterns.
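A minimal sketch of those three index permutations in Python, using plain dictionaries as stand-in indexes (real triple stores use sorted or persistent structures, but the lookup pattern is the same):

```python
from collections import defaultdict

class TripleIndexes:
    """Toy store keeping one index per triple pattern described above."""
    def __init__(self):
        self.sp_o = defaultdict(set)  # (subject, predicate) -> objects
        self.po_s = defaultdict(set)  # (predicate, object)  -> subjects
        self.so_p = defaultdict(set)  # (subject, object)    -> predicates

    def add(self, s, p, o):
        self.sp_o[(s, p)].add(o)
        self.po_s[(p, o)].add(s)
        self.so_p[(s, o)].add(p)

store = TripleIndexes()
store.add("Bob", "knows", "John")
store.add("Bob", "loves", "John")

store.sp_o[("Bob", "knows")]   # -> {"John"}
store.po_s[("knows", "John")]  # -> {"Bob"}
store.so_p[("Bob", "John")]    # -> {"knows", "loves"}
```

Each pattern becomes a single hash lookup instead of a scan, which is the whole point of keeping the redundant permutations.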
The core table in the salesforce.com system consists of triples, but salesforce.com will materialize whatever indexes and views are necessary to make things fast based on automatic run-time profiling. Their patent on this should run out just about now, so this feature may turn up in real-life triple stores where it would make a big difference in practicality.
The NSA has been shopping around for a triple store which could ingest around 1 trillion triples per day.
The BBC made a nice web site for the world cup which used forward chaining inference in a triple store to determine the consequences of each goal, so the tables would all adjust whenever anything happened.
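The BBC example is a forward-chaining loop at heart: apply rules to the fact set until no new facts appear. A minimal sketch of that pattern, with a made-up qualification rule and team names standing in for the BBC's actual rules:

```python
from collections import Counter

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for fact in rule(facts):
                if fact not in facts:
                    facts.add(fact)
                    changed = True
    return facts

# Made-up rule: a team with two or more wins is marked "qualified".
def qualification_rule(facts):
    wins = Counter(s for (s, p, o) in facts if p == "won")
    return {(team, "status", "qualified") for team, n in wins.items() if n >= 2}

facts = {("Brazil", "won", "match1"), ("Brazil", "won", "match2"),
         ("Italy", "won", "match3")}
derived = forward_chain(facts, [qualification_rule])
# derived now also contains ("Brazil", "status", "qualified")
```

Adding one more ("...", "won", "...") fact and re-running re-derives all the consequences, which is how the tables "adjust whenever anything happened".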
But then, why stop at a KV-store? A set with entries “Bob:Knows:John” will work just as well, if you ignore performance.
But then, why stop at a set? A string “Bob:Knows:John;Bob:Loves:John;John:Is:vegetarian” works just as well (conceptually!)
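For illustration, here is what the flat-set representation costs: every pattern query degenerates into a scan of the whole set (a hypothetical sketch, not any real store's API):

```python
# The flat-set "store" from the comment above.
triples = {("Bob", "Knows", "John"), ("Bob", "Loves", "John"),
           ("John", "Is", "vegetarian")}

def match(s=None, p=None, o=None):
    """Answer a triple pattern; None is a wildcard. Always a full scan."""
    return {(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

match("Bob", None, "John")  # scans everything to find the two Bob->John triples
```

This is exactly the "if you ignore performance" caveat: the data model survives the simplification, the query speed does not.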
IMO, a major real-life use case is as a means to produce PhDs :-). The concept is enticing and easily grasped, but there are zillions of papers to write on query planning, automatic storage optimization, discovering heuristics, etc. It’s just like the early days of SQL: you don’t have to read decades of papers to move to the front of development.
On the other hand, I’m not really familiar with Go, so I may be reading it wrong.
In one version, you have these two ideas represented:
[ann, suspects, [bob, likes, cake]]
This may change in another version. A decent triple store will allow you to version these ideas and explore how they change over time, or query by predicate.
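A toy sketch of that idea, with invented version numbers and an illustrative change between them (the names come from the nested example above):

```python
# Hypothetical versioned store: each version holds its own triple list.
versions = {
    1: [("ann", "suspects", ("bob", "likes", "cake"))],
    2: [("ann", "knows",    ("bob", "likes", "cake"))],  # belief hardened
}

def query_by_predicate(version, predicate):
    """All triples in a given version with the given predicate."""
    return [t for t in versions[version] if t[1] == predicate]

def history(subject):
    """How a subject's assertions change across versions."""
    return {v: [t for t in ts if t[0] == subject]
            for v, ts in versions.items()}

query_by_predicate(1, "suspects")  # the version-1 belief
history("ann")                     # ann's assertions over time
```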
A triple store can answer queries about triples much more quickly. The reason to use triples is that they are what you naturally get when you try to store structured relational data whose schema changes quickly.
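A small illustration of why a fast-changing schema "naturally" yields triples: rows with different columns all flatten into (entity, attribute, value) facts, and a new column needs no migration (the column names here are invented):

```python
# Two records with different "schemas" in the same collection.
rows = [
    {"id": "u1", "name": "Bob", "age": 42},
    {"id": "u2", "name": "John", "diet": "vegetarian"},  # new attribute, no ALTER TABLE
]

# Flatten every non-id field into an (entity, attribute, value) triple.
triples = [(row["id"], attr, value)
           for row in rows
           for attr, value in row.items() if attr != "id"]
# [('u1', 'name', 'Bob'), ('u1', 'age', 42),
#  ('u2', 'name', 'John'), ('u2', 'diet', 'vegetarian')]
```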
The Python fabric package is really popular.
Looks like it is written in Go too. I can see yours being much simpler to get up and running initially, though. Akutan isn't as simple, since it's built on Docker and runs as a daemon.
Triple stores support a disciplined set of primitive types that come from XML Schema, so you have "xsd:integer", "xsd:dateTime", "xsd:decimal", really the critical things that are missing in JSON. That is, there is a kind of fact where the object is a literal.
Triple stores also support facts where the object is an identifier for another object. That could be a URI which names it, or it could be an internal "blank node" identifier.
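The distinction in the two comments above can be sketched with plain Python types; the names `Literal` and `IRI` here are illustrative, not any real library's API:

```python
from typing import NamedTuple, Union

class Literal(NamedTuple):
    """A typed value: lexical form plus an xsd datatype."""
    lexical: str
    datatype: str  # e.g. "xsd:integer", "xsd:dateTime", "xsd:decimal"

class IRI(NamedTuple):
    """An identifier that names another node rather than holding a value."""
    value: str

Node = Union[Literal, IRI]

triples = [
    (IRI("ex:Bob"), IRI("ex:age"),   Literal("42", "xsd:integer")),  # object is a literal
    (IRI("ex:Bob"), IRI("ex:knows"), IRI("ex:John")),                # object is another node
]
```

A blank node would just be a third identifier variant whose name is internal to the store rather than a global URI.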
Other kinds of "graph database" have different semantics, for instance they might not have support for literals, or have a different set of literal data types, or they might let you attach facts to the edges (hypergraph, property graph, ...)
I looked at the report card already and seems like you've done a great job, so you'll have no trouble getting added!