
Show HN: Open Synthesis – open platform for CIA-style analysis of current events - tschiller
https://open-synthesis-sandbox.herokuapp.com/
======
tschiller
Author here. I started the project to help people deal with the information
overload that modern media has brought with it.

The platform currently supports the Analysis of Competing Hypotheses (ACH)
technique [1]. Eventually, after we figure out ACH, I hope to roll out
additional collaborative analysis techniques developed in the intelligence and
business communities.
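
For those unfamiliar with ACH: you lay out the competing hypotheses as columns
and the evidence as rows, rate each piece of evidence for consistency with each
hypothesis, and favor the hypothesis with the least inconsistent evidence (the
emphasis is on disproving hypotheses rather than confirming them). Here's a toy
sketch of that scoring idea in Python; it's purely illustrative, not the
platform's actual code:

    # Toy ACH matrix: rows are evidence, columns are hypotheses.
    CONSISTENT, NEUTRAL, INCONSISTENT = 1, 0, -1

    hypotheses = ["H1: outage was an attack", "H2: outage was a misconfiguration"]
    evidence = {
        "Vendor reported a bad config push":   [INCONSISTENT, CONSISTENT],
        "Unusual traffic spike before outage": [CONSISTENT, NEUTRAL],
        "No claim of responsibility":          [INCONSISTENT, NEUTRAL],
    }

    def inconsistencies(col):
        """Count how much evidence contradicts hypothesis number `col`."""
        return sum(1 for ratings in evidence.values() if ratings[col] == INCONSISTENT)

    for col, h in enumerate(hypotheses):
        print(h, "->", inconsistencies(col), "inconsistent item(s)")
    # ACH favors the hypothesis with the fewest inconsistencies: here H2.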

The submitted link is for a playground instance that has no editing
restrictions and that doesn't require an email address to register. Please
play nice! The official site is at
[https://www.opensynthesis.org](https://www.opensynthesis.org). You can
request an invite for the official site via Google Forms at
[https://goo.gl/forms/P6Lgx3nqAhD4zQ8v1](https://goo.gl/forms/P6Lgx3nqAhD4zQ8v1).
We’ll be opening the site up once we get a better handle on moderation
features.

The project is open source (GPLv3): [https://github.com/twschiller/open-synthesis](https://github.com/twschiller/open-synthesis).
It’s built with Python 3 + Django. You can deploy a private instance to try via
the Heroku Button.

The platform is very young and rough around the edges. There are still a lot
of interesting challenges to solve, especially with respect to user experience
and community design. For example: what's the best commenting system? What's
the best approach to moderation for politically sensitive topics?

Want to help? I'm looking for contributors to help out with everything from
design (we need a logo!) to devops and community building. Check out the 'Help
Wanted' label on the issue tracker for concrete ideas of how to contribute.
Alternatively, shoot me an email (email is in my profile).

Project Twitter: @opensynthesis

[1]
[https://en.wikipedia.org/wiki/Analysis_of_competing_hypothes...](https://en.wikipedia.org/wiki/Analysis_of_competing_hypotheses)

------
screwston
This is very interesting. I'm curious how this works outside a controlled
environment like the CIA.

What sort of arbitration process do you have for judging the quality of
evidence and the degree to which evidence agrees with hypotheses?

What's to stop trolls from spoiling the process through the above methods, or
through adding extraneous hypotheses to confuse the process?

~~~
tschiller
Thanks for your interest! Adapting the technique for the internet is one of
the main challenges the project is trying to address. That's the reason the
main site is invite-only right now.

Here are some thoughts on the points you bring up:

- You can currently dig into the evidence quality by looking at the
corroborating/conflicting sources that users have added. Users can also tag
sources based on their quality, e.g., state-sponsored media, secondhand
information, etc. Eventually I want to arrive at some metrics for
evidence/source quality.

- The degree to which the evidence agrees with the hypotheses is handled by
the evaluation/assessment process. The site merges the evaluations/assessments
from all the participants and highlights areas of dispute (see the sketch
after this list). In the future, we'll probably end up allowing users to mark
which evaluators they trust (e.g., verified journalists).

- As far as trolls go, we're exploring the space of moderation and anti-troll
measures. This will include automated measures like rate limiting, as well as
moderation features like flagging evidence/hypotheses as irrelevant,
duplicate, etc.
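
To make the second point above more concrete, here's a rough sketch of how
merging per-user ratings and flagging disputed cells might look. The rating
scale, names, and threshold are made up for illustration; they aren't the
project's actual schema or algorithm:

    from statistics import median

    # Hypothetical rating scale for one evidence/hypothesis cell.
    SCALE = {"inconsistent": -2, "somewhat inconsistent": -1, "neutral": 0,
             "somewhat consistent": 1, "consistent": 2}

    def merge_cell(ratings, spread_threshold=3):
        """Combine user ratings into a consensus value and a disputed flag."""
        values = [SCALE[r] for r in ratings]
        consensus = median(values)
        disputed = (max(values) - min(values)) >= spread_threshold
        return consensus, disputed

    print(merge_cell(["consistent", "consistent", "somewhat consistent"]))  # (2, False)
    print(merge_cell(["consistent", "inconsistent", "neutral"]))            # (0, True)

A real version would need to weight trusted evaluators, handle missing
ratings, and so on, but the consensus-plus-dispute idea is the core of it.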

A lot of potential ideas are captured on the issue tracker:
[https://github.com/twschiller/open-synthesis/issues](https://github.com/twschiller/open-synthesis/issues)

