Disclaimer: Linkurious CEO here. Linkurious is the tool used to explore the Neo4j graph database at NASA.
There is also an extension to it called TiddlyMap which displays several of the properties mentioned in this article (edges with properties, etc), but again, requires configuration to get just so.
If you're game to do some tinkering, I've found it to be hackable to some very deep levels. Another nicety is that it's all just a single HTML file, so it's madly portable (I can use the same site on my phone and laptop).
All this being said, there is a growing list of features that I would like to see in TiddlyWiki that I'm not sure I can hack in myself, so I suppose I, too, am looking for the "one true knowledge management" solution.
Once you need "big data" for your personal Knowledge Graph, you can use other RDF stores, without vendor lock-in.
Jerry Michalski is among the more notable users.
address at domain dot com :)
PS: Sent you an email. :)
Otherwise see Marviel's and DredMorbius's suggestions; both are worth checking out.
Second, I have no idea what you're talking about. The submitter seems to have exactly six submissions so far, and not a single other one about this. [Edit: Obviously mistaken on that point]
I was always blown away with how easy it was to turn around a very stable and useful system where the customers could actually understand the data model and refactoring was easy to reason through.
Graph databases FTW.
Would be sweet to have a similar system in FOSS middleware on top of Neo4J or OrientDB.
Disclaimer: I don't presently work for Dassault. I just really like their DB kernel ;)
The first one I managed to click on was related to a fire in an employee's car: https://llis.nasa.gov/lesson/943
Certainly something NASA employees need to be aware of.
You can optimize for exactly the types of queries that you want graph databases to answer: shortest path, path finding, etc. Relational databases and document databases are (generally) very poor at those types of queries, because those are not the types of queries people want to run on those databases. In a "graph native" database, everything down to the storage on disk can be optimized to perform graph algorithms.
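As a sketch of the kind of query a graph-native engine is built for, here's a hypothetical Cypher shortest-path lookup (the labels, property names, and relationship type are invented for illustration):

```cypher
// Find the shortest chain of KNOWS hops (up to 10) between two people.
// A graph-native store answers this by chasing stored adjacency pointers
// rather than by repeated index-backed joins.
MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'}),
      p = shortestPath((a)-[:KNOWS*..10]-(b))
RETURN p, length(p) AS hops
```

The same question in SQL typically needs a recursive CTE and re-joins the edge table at every hop.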
There's years, sometimes decades, of engineering that goes into databases (I'm thinking of PostgreSQL and Cassandra, both of which have graph "layers" available). A lot of the engineering work is non-graph specific: ACID, how to handle transactions, distributed computing, WAL, replication.
Why re-engineer all of that just to perform graph operations more quickly?
Also, I can send you a good paper by the founder of DGraph Labs if you're really curious.
“So how did we build this thing with the smart folks at NASA as partners and customers? The key takeaway here is that a Knowledge Graph platform is a Knowledge Toolkit plus a Graph Database, and all of those components are critical at NASA.
Doing this with a plain graph database isn’t going to work unless you want to do all the heavy lifting of AI, knowledge representation, machine learning, and automated reasoning yourself, from scratch. I’ll wait while you decide…didn’t think so.”
I can think of plenty of examples at my work where spidering a website and displaying it in a graph would be really cool.
Our wiki would be one for sure.
Links: https://neo4j.com/ https://linkurio.us/
More info about this use case here: https://linkurio.us/blog/how-nasa-experiments-with-knowledge...
The screenshot in the article is from Linkurious (without any mention in the article, which is strange).
Spoiler: Linkurious co-founder here.
Nuclino (https://www.nuclino.com/) looks promising, trying it out now.
In my experience this exploring thing kinda only makes sense when you want to document doing/trying the same thing again (which NASA probably is). If you are just documenting how to connect to a database, set something up, or similar, it falls pretty flat to me. Maybe I'm using it wrong...
No idea what they use under the hood.
Source: Use it where I work
Currently I am putting links in GitHub PR descriptions so that, in my deployment GitHub repo, I know who releases what, when, and in which cluster (where).
The PRs contain links to Jira tickets.
So all in all, if you "sprinkle" enough links across GitHub and Jira, I can essentially click through them and answer: how did that end up here? What changed? Where is the bug?
But I feel like this set of links referencing GitHub, Jira, PRs, commits, and error reports would really fit in some kind of graph.
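If one did model that trail in a graph database, a minimal Cypher sketch might look like the following (all labels, properties, and IDs here are hypothetical, not taken from any real setup):

```cypher
// Hypothetical deployment-traceability graph
CREATE (c:Cluster {name: 'prod-eu'})
CREATE (pr:PullRequest {repo: 'deploy', number: 123})
CREATE (t:Ticket {key: 'PROJ-42'})
CREATE (cm:Commit {sha: 'abc123'})
CREATE (pr)-[:DEPLOYS_TO]->(c),
       (pr)-[:REFERENCES]->(t),
       (pr)-[:CONTAINS]->(cm);

// "How did ticket PROJ-42 end up in prod-eu?" becomes one pattern:
MATCH (t:Ticket)<-[:REFERENCES]-(pr:PullRequest)
      -[:DEPLOYS_TO]->(c:Cluster {name: 'prod-eu'})
RETURN t.key, pr.number, c.name;
```

The click-through-the-links workflow described above then becomes a single query instead of a manual crawl.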
It does share the big weakness of all the other such databases, though: it's very hard to convince people to use it, and especially to add and maintain content.
=== Shortest Paths
1a. Referral: "Who on our team connected to which leadership at Apple?"
1b. Supply Chain, AML, entanglements...: "How are these companies related, even if 5 companies away, and across all sorts of relationship types?"
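The "even if 5 companies away, across all sorts of relationship types" case maps naturally onto a variable-length path query. A hedged Cypher sketch, with company names and relationship types invented for illustration:

```cypher
// Any chain of up to 5 ownership / supplier / shared-director links
// between two companies, in either direction.
MATCH p = shortestPath(
  (a:Company {name: 'Acme'})
  -[:OWNS|SUPPLIES|SHARES_DIRECTOR*..5]-
  (b:Company {name: 'Globex'}))
RETURN [n IN nodes(p) | n.name] AS chain, length(p) AS hops
```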
=== Neighborhood (incl. multi-hop):
2a: 360 context on a security/fraud/ops incident:
2b: fraud rings:
2c: Journeys (customer, patient, ...)
=== Whole system optimization / compute:
Personalized pagerank, supplychain optimization, business process mining, ...
The above can be extended, such as by adding in compute (correlation, influence scores, ...). That feeds into viz / recommendations / decision making.
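For the whole-graph compute bucket, Neo4j's Graph Data Science library exposes algorithms like PageRank as callable procedures. A sketch, assuming a projected graph named 'companies' was already created with gds.graph.project (the projection name is an assumption):

```cypher
// Stream PageRank scores for a previously projected graph
// and return the ten highest-ranked nodes by name.
CALL gds.pageRank.stream('companies')
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC LIMIT 10
```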
or: Not all uses of graph are end-to-end. We often get brought in alongside a graph DB to improve understanding of it (our viz scales 100-1000X over the tools here via GPUs)... but folks may instead plug their graph DB into a tabular frontend, or use us with a tabular system like Splunk/Spark/Elastic. The queries above can be hard to write in Splunk/SQL, slow to run, or hard to visually understand.
For example, say we have a graph of movies, actors, reviewers, producers, etc. Here's a Cypher query that returns the names of people who reviewed movies, along with the actors in those movies (the MATCH pattern is my reconstruction, following Neo4j's standard movies sample graph):
MATCH (r:Person)-[rev:REVIEWED]->(m:Movie)<-[:ACTED_IN]-(a:Person)
RETURN DISTINCT r.name AS Reviewer, m.title AS Title,
                m.released AS Year, rev.rating AS Rating,
                collect(a.name) AS Actors
And a second query that groups 1990s movies with their casts (again, the MATCH clause is reconstructed):
MATCH (p:Person)-[:ACTED_IN]->(m:Movie)
WHERE 1990 <= m.released <= 1999
RETURN m.released, collect(m.title) AS titles, collect(p.name) AS actors
ORDER BY m.released
Now imagine your join becoming a primary perspective from which to look at your data. Then you'd see that credit card transactions (who buys what, when?) or maps are better represented as a graph.
I know, for example, that TomTom uses Neo4j to validate map edits in production.
In a graph database, you've effectively taken your 'join' penalty at the point of ingestion and you have an expressive query syntax to describe the pattern you're trying to match.
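To make the "join penalty paid at ingestion" point concrete: what would be a three-way join across person, role, and movie tables in SQL is a single pattern over stored adjacency in Cypher (schema invented for illustration):

```cypher
// Actor, movie, and director in one pattern; the relationships
// were materialized at write time, so no join is computed here.
MATCH (p:Person)-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(d:Person)
RETURN p.name AS actor, m.title AS movie, d.name AS director
```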
I'm working in an even smaller company now - previously it was about 40 employees, now it's a handful. There are docs for the important things so there is no single point of failure, but very few day to day things are written down (like whom to report to that you're ill). As we grow, it's slowly becoming worth it to document that (single source of info instead of either having to bother the big boss or having different sources) and I'm looking at options to organise it. Organising it topic-based (graph(-like)) is an interesting alternative to the standard info dump with a search feature (wiki).
Trying out Nuclino just now and putting some items into it, I additionally noticed that having a separate system from your actual knowledge database can also be useful: info pages are on the wiki, custom tools are in different git repositories, project info might be in some task manager... If you have a separate system (such as a graph) that just points you to the right URL (wiki/task manager) or folder within a git repository, the system can outlast any of the individual products being used. Then again, having a layer of indirection makes it more time-consuming to use when you know that your info page is going to be in the wiki. I guess it will have to be very quick to call up and integrated nicely to make it worth it for others to use.
But it's a good point you make. Now that I write it like this, it becomes clearer to me how the system would work. It wouldn't just be another separate system; it would be the index for all the systems, and whenever someone writes a new page, wherever that is, they should be required to link it into this graph (or whatever form it takes).