For the more pragmatic among you: you might want to look into Fossil SCM by Dr. Richard Hipp, the author of SQLite.
* Distributed VCS, tickets and wiki
* You can link artifacts together
* Single statically linked binary
* Works on Linux, Mac and Windows
* Not dependent on JavaScript
* Easily themeable, looks good (important: this means you can slap in the logo and colors from the company web site and avoid lots of questions)
* Easily hackable
(That said, I don't think the template system is aiming for any awards in the near future.)
That's quite an understatement; it freakin' runs on toasters (i.e. pretty much everywhere you can run SQLite; the only hard requirement I'm aware of is that there has to be a C compiler for the platform).
This is my favourite VCS. I can carry it on a USB stick. And it's a complete system, with its own server, ticketing system, wiki pages, and a very, very helpful timeline visualization[0]. And the entire program is a single file!
It seems like almost every project I work on could benefit from a general-purpose library for storing JSON documents along with a change history, including the change timestamp and author. Also the ability to merge two divergent branches and flag merges that need manual attention. I feel like this has been re-invented again and again. It's a bit like embedding git in your application, and maybe that's even the best way to go. It would sure be useful! So I wonder if this wiki project will have some way to extract just that bit of functionality. The distributed nature makes me think probably not, but who knows?
PouchDB uses the CouchDB sync protocol to sync to multiple servers (or other PouchDB databases). PouchDB is also planning on offering a lightweight server-side PouchDB that runs in Node.
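For the parent's use case, here's a minimal sketch of what that looks like in practice. The database names, document fields and the remote URL are just placeholders I made up; the calls themselves are standard PouchDB API, but treat the details as approximate:

    var PouchDB = require('pouchdb');   // or a <script> include in the browser

    var local  = new PouchDB('wiki-pages');                          // stored locally (IndexedDB/LevelDB)
    var remote = new PouchDB('http://example.com:5984/wiki-pages');  // any CouchDB-compatible server

    // Each update to a document creates a new revision (you supply the current _rev),
    // so you get a change history; timestamp and author are fields you add yourself.
    local.put({ _id: 'welcome', text: 'Hello', author: 'hilyan', updated: Date.now() })
         .catch(function (err) { console.error(err); });

    // Continuous two-way replication; concurrent edits on divergent copies surface as
    // conflicts you can detect (e.g. get() with {conflicts: true}) and resolve by hand.
    local.sync(remote, { live: true, retry: true })
         .on('change', function (info) { console.log('synced', info); })
         .on('error',  function (err)  { console.error(err); });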
I was hoping the next generation of wikis (especially as used by wikipedia) would introduce an underlying 'fact unit' [1] layer on which articles are built, effectively separating edit wars from verification wars.
[1] Not the same as a 'knol', which as defined by Google was not a unit. A real knol would be a single atomic fact.
The idea of "atomic" units of fact turns out to be really slippery. In the end you're just shifting the consensus problem to questions about what things really are verifiable facts, and then to what counts as a valid statement you can build on those facts.
Wittgenstein and Russell tried to build philosophical atomism and it fell apart.
You're making a common mistake, which is to think of Wikipedia as a collection of facts. It's a collection of verifiable statements from reliable sources, or reasonable summaries thereof. "Reliable" is a collective judgment call, and even "verifiable" can be tricky sometimes if it's not an online resource or if we're summarizing.
So what you really want is a kind of source tracing. You want to know that when one statement is put into question, everything that followed from that statement is also questionable.
I don't think you are contradicting the parent. It's useful to acknowledge the slipperiness in some form.
With encyclopedic content, the consensus 'fact' is useful, plus the level and strength of general agreement over time is useful, plus the alternative 'facts' are useful.
Wikipedia is too much a collection of the most obstinate authors' latest opinions. Its saving grace is the 'History' tab, which adds a lot of value, at least for me.
Try listing the atomic facts (ignoring, for simplicity's sake, their underlying facts, or any references to authoritative sources) for the following statement, as an exercise that may illuminate why that hasn't happened.
"37 minutes ago hilyan was hoping that the next generation of wikis (especially as used by wikipedia) would introduce an underlying 'fact unit' [1] layer on which articles are built, effectively separating edit wars from verification wars.
[1] Not the same as a 'knol', which as defined by Google was not a unit. A real knol would be a single atomic fact."
I will interpret a lack of responses as evidence for failure. ;)
Edit: So a couple of things about this exercise: yes, it took me more time (and space) than your phrase. On the other hand, I specified some extra information; some ambiguities in your phrase needed to be made explicit, so that's a win for a more formal language. There's also more effort involved, so I had more time to think about what I was saying. It would be great to have an interactive tool for thinking and writing in a more formal language, even if it is not meant to be compiled to assembler.
I have a side project that touches this tangentially (the code is made up, though!), but I can't find a lot of time for it :(
Does anybody know if something like this exists? Something like a wiki/REPL/mindmap sandbox for concepts.
I've been really interested in this area. Is your side project online? Even if it is just made up code, I'd love to see what you've been thinking about.
Cracking this problem could bring about an entirely new era of debate & communication.
For those of you trying to represent this, it will be important to explicitly represent context. A fact exists only in a context. The above statement is a good example. Another is, "Bill dreamed that he thought his dog was playing in the front yard." Which could be extended with "but it was actually in the back yard." and further extended with "When he woke up he found that his dog was licking his face."
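A rough sketch of what "explicitly represent context" could look like as data, using the dream example above. The field names here are invented purely for illustration, not taken from any existing schema:

    var fact = {
      claim:   { subject: 'dog', predicate: 'playing-in', object: 'front yard' },
      context: [                                   // outermost context first
        { frame: 'dream',  agent: 'Bill' },        // Bill dreamed that...
        { frame: 'belief', agent: 'Bill' }         // ...he thought that...
      ]
    };
    // The correction "it was actually in the back yard" is a separate claim with an
    // empty context list, so the two statements never actually conflict.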
Rather than creating more wiki, the solution to creating a better Wikipedia is to create a system with less wiki. A large part of Wikipedia’s problems stem directly from the “article” format of most wiki pages (and Federated Wiki maintains the same page metaphor). This causes presentation problems, in that it forces the web page to look like a book page, but more importantly, it allows any editor to veto any content on the page. This gives rise to real or imagined experts believing they “own” the page, and from that, groupthink, harassment and bias naturally follow. See my post: http://newslines.org/blog/wikipedias-13-deadly-sins/
Sites like Larry Sanger's infobitt.com attempt to break down news stories into their facts by letting contributors assess the factuality of each piece of information. However, I believe that fact-checking current news stories is over-rated. Why do we need to fact-check the old story, when a more accurate news story will come along anyway? Again this stems from the wiki formatting the page as an article, rather than, say, a stream of information.
There are other solutions that can mitigate problems that arise from using wikis, such as assigning anonymous editors to check content, and paying editors to contribute. I have implemented many of these on my site, Newslines (http://newslines.org) which crowdsources news, but uses each news event as the primary data type.
The absolutely vital importance of fact-checking current news stories, AKA "the old story", is in forming a measure of the reliability and veracity of the sources of this information, which can be factored in to future reporting.
I always have a slightly weird feeling about local storage. I can save stuff on my computer but I don't know where it is going, so I can't back it up, nor can I keep it if I format the computer.
To be clear, the use of local storage in this context is expressly intended to be temporary, to store a draft locally while editing, then you publish to a stable server.
But your comments about Local Storage are spot on. Use it as a cache, not a file system.
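In code, the "cache, not a file system" rule amounts to something like this. The key names and the publish endpoint are made up for illustration; the important part is that losing the local entry is always survivable:

    // Save the draft locally on every edit...
    function saveDraft(pageId, text) {
      try {
        localStorage.setItem('draft:' + pageId, JSON.stringify({ text: text, saved: Date.now() }));
      } catch (e) {
        // quota exceeded or storage disabled: fine, the server copy is still the real one
      }
    }

    // ...and throw it away once the page has been published to a stable server.
    function publishDraft(pageId) {
      var draft = JSON.parse(localStorage.getItem('draft:' + pageId) || 'null');
      if (!draft) return;
      fetch('/pages/' + pageId, { method: 'PUT', body: JSON.stringify(draft) })  // hypothetical endpoint
        .then(function () { localStorage.removeItem('draft:' + pageId); });
    }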
All browsers that I know of treat localStorage as part of your profile and store it inside your home directory somewhere - which is where all your other stuff is, so if you're not backing that up... I can't help you.
For example, Chrome stores it here:
on Windows: %LocalAppData%\Google\Chrome\User Data\Default\Local Storage
on Linux: ~/.config/google-chrome/Default/Local Storage/
Great project! The user experience on the page is quite nice - a bit like Trello crossed with GitHub, turning traditionally static wiki content into interactive content.
On a related note, I'm curious to hear what others think about the trend of sending data over-the-wire and rendering HTML on the client. I just started working with Meteor, coming from a traditional web development background, and I see major benefits regarding multi-device rendering and real-time updates. However, I'm still on the fence regarding this paradigm and continue building new projects with traditional server-side rendering and DOM manipulation.
Playing with federated wiki is very convincing that data over-the-wire is the future.
> I'm curious to hear what others think about the trend of sending data over-the-wire and rendering HTML on the client.
Twitter did this for a while. They still do, but they switched back from a 100% API-driven model to a server-side first render with subsequent API-driven updates. That allows them to render the page as quickly as possible (you don't have to wait for the DOM to be ready) while still allowing later partial renders.
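The hybrid pattern is easy to sketch: the server sends the first screenful as plain HTML, the client renders from that immediately, and only later updates go through the API. Everything below, including the endpoint and field names, is a made-up illustration rather than Twitter's actual code:

    <!-- the server-rendered response already contains the first screenful... -->
    <div id="timeline"> ...items rendered on the server... </div>
    <script>
      // ...plus a little state, so the client can pick up where the server left off.
      var state = { newestId: 12345 };

      setInterval(function () {
        fetch('/api/timeline?since=' + state.newestId)     // hypothetical endpoint
          .then(function (res) { return res.json(); })
          .then(function (items) {                         // assumed newest-first
            items.forEach(prependToTimeline);              // partial render, no page reload
            if (items.length) state.newestId = items[0].id;
          });
      }, 30000);

      function prependToTimeline(item) {
        var timeline = document.getElementById('timeline');
        var el = document.createElement('div');
        el.textContent = item.text;
        timeline.insertBefore(el, timeline.firstChild);
      }
    </script>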
I wonder if we used the same page. I just got a welcome page. When I input 'patterns' into the search box and hit return, nothing happened. When I clicked on 'Recent Changes' I see a gray bar reading 'activity,' with no contents.
It appears to be a broken test site to me.
> On a related note, I'm curious to hear what others think about the trend of sending data over-the-wire and rendering HTML on the client.
I think it's evil, given that it requires me to allow code execution in order to read content.
Nope, that just results in a vertically-divided blank white page, with the non-functional dark-grey bar below. It remains blank after minutes.
(This is probably because the wiki doesn't work without JavaScript, making it useless to anyone using lynx, links, elinks, emacs-w3m or NoScript, which is to say anyone concerned with security.)
Mostly it's a knee-jerk reaction: all popular things are usually just the tip of a huge iceberg of incremental improvements and other attempts at virtually the same stuff... But I did find something PARC-related:
The only thing that comes to mind is Douglas Engelbart's Mother of All Demos. It features collaborative editing of a distributed document. Other than that, it seems Cunningham was the sole soul behind wikis.
"A conventional wiki, says Mike Caulfield, is "a relentless consensus engine." A federated wiki may eventually yield consensus, but it promotes what Ward Cunningham calls a chorus of voices."
I don't see what's wrong with that, though. For the most part, relentless consensus (by Wikipedians, who usually are smart folks) is reasonable information.
In a field with some form of rigorous proof underlying it, converging towards consensus is good, though I still enjoy reading the outliers (not the cranks, the ones who could be the next Pauli).
But imagine if you're trying to converge towards consensus about a historical event. Every historian has their own interpretation and different emphasis. The facts should be the same for all, but the reading of the event and its meaning isn't. And for that reason, support for tracking diverging views would be very useful.
> I still enjoy reading the outliers (not the cranks, the ones who could be the next Pauli).
(emphasis added)
Yes. If everything operated on a pure consensus basis, progress would come to an end.
This doesn't even have to be at the fringe; for decades, just about every textbook claimed that normal human cells have 24 pairs of chromosomes. The correct number (or maybe I should say the currently-accepted number) is 23. This was even stated in textbooks that had accompanying photographs that clearly showed 23.
Consensus is a useful tool, but it's not infallible, and sometimes it goes spectacularly wrong.
Well, everyone can agree on the facts, and opinions are just facts about what people have said. So consensus can still be reached on what the opinions are.
But one of the big reasons wikis are unpopular (with those who aren't already fans) is the chorus of voices. The relentless-consensus part is seen as just a symptom of that larger disease.
I guess it's still not clear to me what the real use case is for Federated Wiki.