Ikiwiki is such a neat project that I'd like to use it, but I don't really have a need for a wiki, I mean an actual wiki where multiple people collaborate.
Because it is only me editing content, I use Markdown with Hugo + rsync, which is close enough.
I've been procrastinating setting up my own blog for years because I want it to be as transferable as possible. I don't want it tied down to a specific platform; I want it to be easy to save and restore elsewhere.
I'm currently setting up a tiddlywiki instance from which static pages can be exported.
I just built my own SSG using Markdown for source documents with JSON metadata as "front matter". If I ever need to switch to something else, it should be pretty easy to auto-convert everything. I tried existing SSG tools (Hugo, Grav) but didn't like them. They felt too complicated, with many features I don't need or want.
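The JSON-front-matter approach is easy to sketch. This isn't the commenter's actual tool, just a minimal illustration assuming the metadata sits as a single JSON object at the top of each file, with the Markdown body following:

```python
import json

def split_front_matter(text):
    """Split a source document into (metadata, markdown body).

    Assumes the file starts with a JSON object used as front matter,
    followed by the Markdown content.
    """
    decoder = json.JSONDecoder()
    meta, end = decoder.raw_decode(text)  # parse only the leading JSON object
    return meta, text[end:].lstrip("\n")

doc = '{"title": "Hello", "date": "2024-01-01"}\n\n# Hello\n\nBody text.'
meta, body = split_front_matter(doc)
```

One nice property of this layout: converting to another SSG's front matter (YAML, TOML) is a small script over `meta`, which is presumably why the commenter finds it easy to migrate.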
I have a Tiddlywiki too for personal notes and stuff, and while the portability is great, authoring content is annoying since you pretty much need a custom application or web server to handle changes. That kills a lot of the cool factor IMO.
But I think TW would be great for documentation in a project. Instead of requiring a static site generator in my git repo to compile the docs (or just expect users to browse them via text editor), a self-contained TW would work out of the box. Small edits/contributions should also be relatively easy even without a browser plugin/app, since you could just CTRL+S and send a PR with your changes.
EDIT: although I guess reviewing a TW diff wouldn't be easy, at least in the typical GitHub/GitLab UIs
TiddlyWiki's markup language makes portability a bit more difficult, so it's likely a good thing that you shifted to using Markdown exclusively. You can use Markdown in TiddlyWiki, but that adds an extra step to the authoring process: selecting the language for each tiddler.
I moved away from using TW for personal stuff because its authoring interface was too disconnected from where I primarily take and consume notes (mobile). I now mostly use Obsidian, plus an SSG for website generation.
If you use the Tiddlywiki NodeJS version, each tiddler is stored as a separate file, allowing them to be easily version controlled for a documentation project.
TiddlyWiki is a great project. I used it for personal/internal documentation at work via Tiddly Desktop.
What broke it for me is organizing things inside. ToC creation needs strict tagging, and there's no way to create a page hierarchy unless you create index pages yourself.
As the number of pages proliferated, the organizational overhead became accidentally quadratic, so I exported everything to Markdown and moved to Obsidian. It's closed source, but it works, so that's an acceptable trade-off for me.
Don't let this discourage you though. TiddlyWiki is great!
I’m using WordPress, and the idea is that you can export all your content to one big XML file and import it elsewhere. Because of WordPress’s dominance, there are many tools to convert this XML to different outputs.
However, even on a simple WordPress-to-WordPress migration I did last year, I somehow ‘lost’ around 20 images, which I had to restore manually.
(Also notice the whole industry of for-pay ‘simple/super/whatnot-backup4wp’ plugins existing)
I handled a migration from WordPress last year, and was unimpressed with its export format. Well, with WordPress as a whole, frankly.
The export format was very clearly a half-baked implementation for migrating from WordPress to WordPress.
It didn’t include all content. As far as I could easily tell, the missing content even seemed to be WordPress-shaped, rather than coming from a plugin that stored things in a separate database table (which I could better understand going missing, though the export should still have hooks that plugins can implement to avoid it; no idea whether it does).
You had to fetch all the media files completely separately.
And as far as non-WordPress interactions are concerned, the actual content markup format was abominable: it mixed old-style almost-HTML (where line breaks are awful magic) with new-style Gutenberg blocks, even within individual pages, and I believe the site wasn’t even three years old. Making the content suitable for importing into something else required, to begin with, copying/porting/applying WordPress’s wpautop function. It carries the innocuous description “Replaces double line breaks with paragraph elements.”, but it’s way worse than that: it’s trying to avoid damaging HTML, so you end up with a monstrosity whose fairly arbitrary interactions with HTML might almost be worse than Markdown’s, and it does it all with regular expressions, even for the new-style blocks. It mangles and destroys content, and yet not applying it also mangles and destroys content, beyond just missing paragraph breaks.
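To illustrate why that innocuous description hides so much, here is a toy sketch of the *naive* reading of it, in Python rather than PHP, and very much not the real wpautop: wrap double-newline-separated chunks in paragraph tags. The second example shows the exact failure mode (block-level HTML getting wrapped) that forces the real function into its pile of regex special cases:

```python
import re

def naive_autop(text):
    """Naive reading of wpautop's description: wrap chunks separated by
    double line breaks in <p> tags. Unlike the real wpautop, this makes
    no attempt to avoid damaging block-level HTML such as <ul> or
    <blockquote>, which is exactly where the complexity comes from.
    """
    chunks = re.split(r"\n{2,}", text.strip())
    return "\n".join(f"<p>{c}</p>" for c in chunks if c)

print(naive_autop("First paragraph.\n\nSecond paragraph."))
# Block-level HTML gets wrapped in <p> too, which real wpautop
# has to detect and avoid:
print(naive_autop("Intro\n\n<ul>\n<li>item</li>\n</ul>"))
```

Every case where this naive version is wrong is a regex carve-out in the real function, which is how you end up with the behavior described above.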
And then there are the URLs. The element that should be the URL (since the format extends RSS) actually isn’t the post’s URL. Sometimes it is, by coincidence. Other times it isn’t, but that URL still works under WordPress, by coincidence, and maybe is in use. (For that matter, other URLs may be in use for linking to the content, so unless you duplicate a lot more of WordPress, you risk breaking links.) Other times it isn’t and doesn’t work at all. Still other times it should be, but WordPress matches some other piece of content at that URL first, so the page is completely inaccessible (though if you know the post ID, which is normally not present when served, /?p=… may work), and all you can learn about it is the tantalising excerpt on a blog post list, because when you follow the link, it serves you a different page. I’ve seen this on at least two other WordPress installations. The actual answer is in the export, but as far as I can tell it’s completely undocumented and difficult to get at, requiring you to combine a few unrelated things.
But half of this stuff is really more flaws in WordPress itself than in its export model: the export format simply exposes a lot of the awfulness of WordPress, which is apparently a mountain of poor design decisions and technical debt. If I didn’t already detest it for its atrocious security model (I’ve had to help fix more than a few hacked WordPress sites over the years), this would have made me scorn WordPress utterly.
I felt the same way and ultimately settled on a simple platform, Hashnode. It's not self-hostable as far as I know, but it stores all your blog posts in Markdown and syncs them automatically to your GitHub. Perhaps one day I'll move to self-hosting.
I’m more and more in favor of Markdown every day. If you are writing for the sake of writing and don’t need any fancy formatting, Markdown is great. I prefer pure Markdown, the kind that works in most places.
I think the only weird thing is the buttons at the top, like "edit" and "recent changes". Apart from that, I wouldn't even notice it's a wiki and not a classic blog.
I used to have comments and a xapian search index/engine on my website, something I wish every website had. To share my experience: I eventually removed both.
The xapian index was faster, more accurate, and more up to date than any public search engine. Yet nobody besides myself ever used it; visitors kept arriving from a google site: search instead.
Things change if you use the wiki as a personal information repository and search it yourself, as the OP points out. I still have that, but I keep it private, since I also index other private stuff and can't separate the two.
And because I have a local mirror of the archive anyway, grepping is often faster than using the search (the archive just isn't big enough to need an index).
I also had fully open/anonymous comments. As you might expect, this attracts spam almost instantly nowadays. I moved them behind a login wall, but after a few years I realized that I myself wouldn't create an account on a random website just to post a comment. For anything more involved than a quick comment, I'd just drop an email to the author[1].
Right now I fear that under GDPR those public comments would just be a liability.
[1] assuming the author is kind enough to list an email address instead of just dropping a twitter handle...
It only worked for so long. Once my site was specifically targeted, spammers started posting with the hidden field included.
I then switched the field to a simple hashed time-based value, to which they responded by just fetching the page to get the value and posting it back.
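A scheme along these lines can be sketched roughly as follows (names and parameters are hypothetical, not the commenter's actual code): the hidden field carries a timestamp plus an HMAC of it, and the server accepts a submission only if the signature verifies and the timestamp is recent. As noted above, this only raises the bar, since a bot that fetches the form receives a perfectly valid token.

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # hypothetical; never sent to the client
MAX_AGE = 3600  # reject tokens older than an hour

def make_token(now=None):
    """Hidden-field value: a timestamp plus an HMAC-SHA256 of it."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def check_token(token, now=None):
    """Valid only if the signature matches and the timestamp is recent."""
    try:
        ts, sig = token.split(":", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return False
    age = (now if now is not None else time.time()) - int(ts)
    return 0 <= age <= MAX_AGE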
The posted content usually fell into two categories: links with stuffed keywords, or some common framework exploit (generally fetching a remote resource to test for exploitability).
While marginally entertaining in the beginning, it's just a waste of time unless you want to create some form of engagement within a blog.
You can get Ikiwiki hosting here: https://www.branchable.com/