* Distributed VCS, tickets and wiki
* You can link artifacts together
* Single statically linked binary
* Works on Linux, Mac and Windows
* Easily themeable, looks good (important: this means you can slap in the logo and colors from the company web site and avoid lots of questions)
* Easily hackable
That's quite an understatement; it freakin' runs on toasters (i.e. ~everywhere you can run SQLite; the only hard requirement I'm aware of is a C compiler for the platform).
FWIW any database could potentially implement the CouchDB sync protocol. http://dataprotocols.org/couchdb-replication/
Also, Drupal recently announced that they're going to offer a CouchDB-compatible server. So yeah, anybody can implement the sync protocol. :) http://wearepropeople.com/blog/a-content-staging-solution-fo...
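The core of the replication algorithm is simple enough to sketch. Below is a toy JavaScript illustration of the step the target's _revs_diff endpoint performs: given the revisions the source holds and those the target already has, compute what still needs to be pushed. The data is made up and this is not CouchDB's own code, just the shape of the computation.

```javascript
// Which doc revisions does the target still need?  (The response shape
// mirrors CouchDB's /_revs_diff: { docId: { missing: [revs] } }.)
function revsDiff(sourceRevs, targetRevs) {
  const missing = {};
  for (const [docId, revs] of Object.entries(sourceRevs)) {
    const have = new Set(targetRevs[docId] || []);
    const need = revs.filter((r) => !have.has(r));
    if (need.length) missing[docId] = { missing: need };
  }
  return missing;
}

// The source would then POST exactly those revisions to the target.
console.log(revsDiff(
  { doc1: ["1-abc", "2-def"], doc2: ["1-xyz"] },
  { doc1: ["1-abc"] }
));
// → { doc1: { missing: [ '2-def' ] }, doc2: { missing: [ '1-xyz' ] } }
```

Any database that can answer this question and accept the missing revisions in bulk can, in principle, speak the protocol.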
It was created with a similar goal in mind (stop writing CMSes over and over) by Brad Fitzpatrick (of LiveJournal/memcached/etc).
 Not the same as a 'knol', which as defined by Google was not a unit. A real knol would be a single atomic fact.
Wittgenstein and Russell tried to build philosophical atomism and it fell apart.
You're making a common mistake, which is to think of Wikipedia as a collection of facts. It's a collection of verifiable statements from reliable sources, or reasonable summaries thereof. "Reliable" is a collective judgment call, and even "verifiable" can be tricky sometimes if it's not an online resource or if we're summarizing.
So what you really want is a kind of source tracing. You want to know that when one statement is put into question, everything that followed from that statement is also questionable.
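To make that concrete, here is a toy sketch of such source tracing: a dependency graph where questioning one statement marks everything that (transitively) relied on it as questionable too. All names and the API are invented for illustration; nothing like this exists in any wiki engine I know of.

```javascript
// Hypothetical "source tracing" graph: each statement records which
// statements it was derived from.
class FactGraph {
  constructor() {
    this.derivedFrom = new Map(); // statement id -> ids it relies on
  }
  addStatement(id, sources = []) {
    this.derivedFrom.set(id, sources);
  }
  // Everything that transitively relies on `id` becomes questionable too.
  questionable(id) {
    const out = new Set([id]);
    let changed = true;
    while (changed) {
      changed = false;
      for (const [stmt, sources] of this.derivedFrom) {
        if (!out.has(stmt) && sources.some((s) => out.has(s))) {
          out.add(stmt);
          changed = true;
        }
      }
    }
    return out;
  }
}

const g = new FactGraph();
g.addStatement("A");        // a primary source
g.addStatement("B", ["A"]); // a summary of A
g.addStatement("C", ["B"]); // follows from B
g.addStatement("D");        // unrelated
console.log([...g.questionable("A")]); // → [ 'A', 'B', 'C' ]
```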
With encyclopedic content, the consensus 'fact' is useful, plus the level and strength of general agreement over time is useful, plus the alternative 'facts' are useful.
Wikipedia is too much a collection of the most obstinate authors' latest opinions. Its saving grace is the 'History' tab, which adds a lot of value, at least for me.
"37 minutes ago hilyan was hoping that the next generation of wikis (especially as used by wikipedia) would introduce an underlying 'fact unit' layer on which articles are built, effectively separating edit wars from verification wars.
 Not the same as a 'knol', which as defined by Google was not a unit. A real knol would be a single atomic fact."
I will interpret a lack of responses as evidence for failure. ;)
(let [t universal-timestamp]
  (said AndrewOMartin t
    (status hoping hilyan
      (- t (minutes 37))
      (said google (undefined-past)
        (let [knol (gensym)]
          (not (unit knol))))
      (let [knol (gensym)]
        (let [next-wiki-generation (gensym)]
          (introduces next-wiki-generation fact-unit)
          (underlies next-wiki-generation fact-unit)
          (underlies fact-unit article)
          (cause fact-unit (separate verification-wars edit-wars)))))))

(def footnote-constraint [fact-unit]
  (let [knol (gensym)]
    (!= knol fact-unit)))
Edit: A couple of things about this exercise: yes, it took me more time (and space) than your phrase. On the other hand, I specified some extra information. Some ambiguities in your phrase needed to be made explicit, so that's a win for a more formal language. There's also more effort involved, which gave me time to think more about what I say. It would be great to have an interactive tool for thinking and writing in a more formal language, even if it is not meant to be compiled to assembler.
I have a side project that touches this tangentially (the code is made up, though!), but I can't find a lot of time for it :(
Does anybody know if something like this exists? Something like a wiki/REPL/mindmap sandbox for concepts.
Cracking this problem could bring about an entirely new era of debate & communication.
Sites like Larry Sanger's infobitt.com attempt to break down news stories into their facts by letting contributors assess the factuality of each piece of information. However, I believe that fact-checking current news stories is overrated. Why do we need to fact-check the old story when a more accurate news story will come along anyway? Again, this stems from the wiki formatting the page as an article rather than, say, a stream of information.
There are other solutions that can mitigate problems that arise from using wikis, such as assigning anonymous editors to check content, and paying editors to contribute. I have implemented many of these on my site, Newslines (http://newslines.org) which crowdsources news, but uses each news event as the primary data type.
But your comments about Local Storage are spot on. Use it as a cache, not a file system.
For example, Chrome stores it here:
on Windows: %LocalAppData%\Google\Chrome\User Data\Default\Local Storage
on Linux: ~/.config/google-chrome/Default/Local Storage/
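The "cache, not a file system" point can be made concrete with a small wrapper: every read and write can fail (quota exceeded, private browsing, corrupt entries), so the wrapper degrades to a miss and the real source of truth stays on the server. This is just an illustrative sketch; in the browser you would pass window.localStorage, and here a plain object stands in so the example is self-contained.

```javascript
// Treat storage as a best-effort cache: failures are misses, not errors.
function makeCache(storage) {
  return {
    get(key) {
      try {
        const raw = storage.getItem(key);
        return raw ? JSON.parse(raw) : null;
      } catch {
        return null; // corrupt or unavailable: behave as a cache miss
      }
    },
    set(key, value) {
      try {
        storage.setItem(key, JSON.stringify(value));
        return true;
      } catch {
        return false; // quota exceeded etc.: the app must cope
      }
    },
  };
}

// Stand-in for window.localStorage so the sketch runs anywhere.
const mem = new Map();
const cache = makeCache({
  getItem: (k) => mem.get(k) ?? null,
  setItem: (k, v) => mem.set(k, v),
});
cache.set("user", { name: "ada" });
console.log(cache.get("user")); // → { name: 'ada' }
```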
On a related note, I'm curious to hear what others think about the trend of sending data over-the-wire and rendering HTML on the client. I just started working with Meteor, coming from a traditional web development background, and I see major benefits regarding multi-device rendering and real-time updates. However, I'm still on the fence regarding this paradigm and continue building new projects with traditional server-side rendering and DOM manipulation.
Playing with federated wiki is very convincing that data over-the-wire is the future.
Twitter did this for a while. They still do, but they switched back from a 100% API-driven model to a server-side first render with subsequent API-driven updates. That allows them to render the page as quickly as possible (you don't have to wait for the DOM to be ready) while still allowing later partial renders.
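A minimal sketch of that hybrid pattern (all names are made up, and Twitter's actual implementation is far more involved): the first response is complete HTML, so the browser can paint immediately, while later data arrives as JSON and is rendered with the same template.

```javascript
// Shared template, usable on both server and client.
function renderItem(t) {
  return `<li data-id="${t.id}">${t.text}</li>`;
}

// Server side: the first render ships complete HTML.
function firstRender(items) {
  return `<ul id="timeline">${items.map(renderItem).join("")}</ul>`;
}

// Client side: subsequent data comes over the wire as JSON and is
// patched in without a full page load.
function partialRender(listHtml, newItems) {
  return listHtml.replace("</ul>", newItems.map(renderItem).join("") + "</ul>");
}

const page = firstRender([{ id: 1, text: "hello" }]);
console.log(partialRender(page, [{ id: 2, text: "world" }]));
// → <ul id="timeline"><li data-id="1">hello</li><li data-id="2">world</li></ul>
```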
I wonder if we used the same page. I just got a welcome page. When I input 'patterns' into the search box and hit return, nothing happened. When I clicked on 'Recent Changes' I see a gray bar reading 'activity,' with no contents.
It appears to be a broken test site to me.
> On a related note, I'm curious to hear what others think about the trend of sending data over-the-wire and rendering HTML on the client.
I think it's evil, given that it requires me to allow code execution in order to read content.
Nope, that just results in a vertically-divided blank white page, with the non-functional dark-grey bar below. It remains blank after minutes.
It's a feature I would like to see more use of, as the context of a subject is often spread across multiple pages.
and a centralized version as a Moodle plugin called Social Wiki: http://www.nmai.ca/research-projects/socialwiki
You can try out Social Wiki online (there's a link to a demo site on the above link).
The UI needs work, but I'd say Social Wiki is functional. Just need to install Moodle first.
[Insert prior-art reference probably involving PARC here]
As far as I know, no one disputes the creation story of the first wiki as being by Ward Cunningham.
Note the use of collaborative editing of text, and optimistic concurrency control for local changes.
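Optimistic concurrency control for wiki-style edits can be sketched in a few lines (field names are illustrative, not any particular wiki's schema): an update carries the version it was based on and is rejected if the page has moved on in the meantime, rather than locking the page up front.

```javascript
function makePage(text) {
  return { version: 1, text };
}

// Save succeeds only if nobody else edited since `baseVersion` was read.
function trySave(page, baseVersion, newText) {
  if (page.version !== baseVersion) {
    return { ok: false, reason: "conflict" }; // concurrent edit detected
  }
  page.version += 1;
  page.text = newText;
  return { ok: true };
}

const p = makePage("hello");
const a = trySave(p, 1, "hello world"); // based on current version: succeeds
const b = trySave(p, 1, "hello there"); // stale base version: conflict
console.log(a.ok, b.ok); // → true false
```

On conflict the losing editor is shown a merge view instead of silently overwriting, which is the usual wiki resolution.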
> As far as I know, no one disputes the creation story of the first wiki as being by Ward Cunningham.
Nor that the first Model T Ford was created by Henry Ford :)
But imagine if you're trying to converge towards consensus about a historical event. Every historian has their own interpretation and different emphasis. The facts should be the same for all, but the reading of the event and its meaning isn't. And for that reason, support for tracking diverging views would be very useful.
* For various definitions of science
Yes. If everything operated on a pure consensus basis, progress would come to an end.
This doesn't even have to be at the fringe; for decades, just about every textbook claimed that normal human cells have 24 pairs of chromosomes. The correct number (or maybe I should say the currently-accepted number) is 23. This was even stated in textbooks that had accompanying photographs that clearly showed 23.
Consensus is a useful tool, but it's not infallible, and sometimes it goes spectacularly wrong.
'Without deviation from the norm, progress is not possible.'
I guess it's still not clear to me what the real use case is for Federated Wiki.