Come back, c2.com, we still need you (c2.com)
350 points by joshuanapoli on May 15, 2023 | 145 comments



I fixed it (rebooted the server).

Anecdotes: Ward (my dad) and I recently moved c2 off a server in a colo where it had lived for over a decade, because the colo was closing. Now it lives at a cloud provider. It will live on. Eventually I envision it will get moved, like the Agile Manifesto, to a static site, preserved for posterity as a relic of the early internet.


Can I suggest making it a torrent? Good Internet relics deserve to live duplicated in random computers all over the place, just like the Internet itself.


Thank you; I'm glad that you're keeping c2 online.


Make sure it gets archived by an archive team, not just hosted. Thanks.


Any recommendation of who to work with?


Archive Team, #archiveteam on the hackint irc.


Agreed, Archive Team is great, and I'm sure they would appreciate working with the site owners for once, rather than in 'everything is on fire' mode.


I would also ask the Kiwix folk to make and host a ZIM file - https://youzim.it/ will make one, but they also host the more popular stuff themselves at https://download.kiwix.org/zim/.


Maybe collaborate with Jason Scott? His textfiles.com and c2 wiki do seem philosophically interlinked in a way.


Now I’m curious. What happened to federated wiki?

I owe Ward stuff.


Fed wiki is still the current project. This was entirely about the hosting for the original wiki.


Here is one of the last snapshots before the C2 wiki was replaced by the awful JS version: https://web.archive.org/web/2014/http://c2.com/cgi/wiki?Welc...

I'm linking to a bit earlier in time because on web.archive.org every hyperlink takes you a bit further into the future, eventually (sometime in 2016) ending up on a redirect to the JS version.

Edit: Here is a mirror that is now functional again: https://kidneybone.com/c2/wiki/WelcomeVisitors. In particular, it has a working search page: https://kidneybone.com/c2/wiki/FindPage

See https://news.ycombinator.com/item?id=35952055 if you want to create your own mirror.


Thanks, this is really handy. The JS version does not do what I expect, and it certainly doesn't do what I want.


Cool URLs don't change.

https://www.w3.org/Provider/Style/URI

If you simply must redesign a site, make sure all the existing URLs still work and take someone visiting to the same equivalent in your new site.

If you're going to drop a site before its redesign is ready... what the hell are you doing?


Shameless plug. I'm working on a service to solve this very problem. I hate 404s, they're so frequent even my mum knows what a 404 Page Not Found error is.

Right now I'm focusing on simple, automated link checking in a closed free alpha with a few users, but the killer feature I want to implement by the end of this month is notifying users if they redesign their website and forget to set redirects to old content. [1]

I would love to have more alpha testers, and the plan is to make it available for free to open-source projects and documentation sites, like c2.com. If you join the waitlist, I'll let you straight in.

https://bernard.app

--

1: I am redoing my personal website, trying different static generators, and I cringe at the thought that I am completely unaware if I break someone's bookmarks, or the RSS feed URL changes and I lose the couple followers I have. Bernard exists first and foremost to solve this problem for me, and I hope it might be useful to others that care about this issue.


>If you simply must redesign a site, make sure all the existing URLs still work and take someone visiting to the same equivalent in your new site.

looking at you, vmware


Personally, I consider what The Cybernetics Society pulled with their new homepage / the (in numerous ways) botched move of their old homepage to https://archive.cybsoc.org/ far more atrocious; with Microsoft and VMware it feels less tragic.


I love the c2 wiki and would be very sad to see it gone. I'm certain I can find some archives, but this still feels like a loss.


(I've mentioned this on another thread here)

Here's a copy of the entirety of C2 from https://archive.org/details/c2.com-wiki_201501

And an implementation of a server to run it (search indexing and such): https://github.com/adamnew123456/wiki-server


The Portland Pattern Repository was an invaluable resource for me when I was searching out programming material on the web in the early 2000s. Hopefully whatever this is is just a hiccup in their service.


Why do you love c2? If I may ask.


It is a frozen discussion from some of the people who we now hold in fairly high regard on a number of topics that are fairly core to software development.

It is like being able to go back and be a fly on the wall at https://en.wikipedia.org/wiki/Bohr–Einstein_debates# for physics (one can debate the degree of significance of the topics and individuals, but I'm trying to get at a "this is where things were happening at the time.")

As an aside, there was a community-fork of c2 where part of the community was more interested in communities rather than code. That site is still available - http://meatballwiki.org/wiki/ ( http://meatballwiki.org/wiki/MeatballBackgrounder ) which has a lot of the same charm as c2.

The thing with other great debates is that there aren't any transcripts of them. C2 is the best record we have of the early "trying to figure out what is now called agile."


I noticed that all the links in these sites are CapitalizedWithoutSpaces. Why is that?


In the world before markdown and not wanting to embed html in the page - how do you designate that "this text" is a link to a page with the same name? Btw, it would work best if it was a regex (since this was perl CGI back in the day).

    if ($word =~ m/^[[:upper:]][[:lower:]]+([[:upper:]][[:lower:]]+)+$/) {
        # e.g. "WelcomeVisitors": two or more Capitalized runs in one word
        $word = "<a href=\"${somePrefix}${word}\">${word}</a>";
    }
or something to that effect.
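
For comparison, here's a minimal sketch of the same WikiWord linkification in TypeScript (the `wiki?` prefix is illustrative, not c2's actual URL scheme):

    // Two or more Capitalized runs glued together, e.g. "WelcomeVisitors".
    const wikiWord = /\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b/g;

    function linkify(text: string): string {
      // Hypothetical link prefix; c2 used a Perl CGI URL instead.
      const somePrefix = "wiki?";
      return text.replace(wikiWord, (w) => `<a href="${somePrefix}${w}">${w}</a>`);
    }

    console.log(linkify("See WelcomeVisitors and FindPage for details."));
    // -> See <a href="wiki?WelcomeVisitors">WelcomeVisitors</a> and
    //    <a href="wiki?FindPage">FindPage</a> for details.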


For the record Clifford Adams and I invented the [[free link]] syntax that will plague humanity for a century. We did this on MeatballWiki specifically by request of and to solve the problem for Wikipedia when they were on UseModWiki.

It’s a small world. The more you know.

Also I have enjoyed listening to people complain about our [[creation]] for decades.


The camel-casing was offputting. I don't know if Wikipedia would be where it is today if you guys hadn't come up with the [[free link]] system that made everything far more readable. Thanks :)


What are the main complaints about your link syntax invention?


Why not [blah](bloo)? Why not __floo flah__? There’s always some other idea that people want to argue.

We had reasons. We already had clean syntax for [https://uri text description] and [https://uri] for numbered citations as footnotes and [#anchor] for named anchors that fit this format of square brackets for everything.

The double square brackets made it clear the inner tokens were not to be parsed at all but were the link itself. Plus they are fast to type and very easy to identify (they look like a [[button]]) and read cleanly as part of a sentence.
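
A hedged sketch of that square-bracket family in TypeScript, just to make the shapes concrete (the real UseModWiki grammar differed in details, and the `wiki?` target is an assumption):

    function renderBrackets(text: string): string {
      return text
        // [[free link]]: the inner tokens are the page name, never parsed.
        .replace(/\[\[([^\]]+)\]\]/g, '<a href="wiki?$1">$1</a>')
        // [https://uri text description]: external link with a label.
        .replace(/\[(https?:\/\/\S+) ([^\]]+)\]/g, '<a href="$1">$2</a>')
        // [https://uri]: a numbered citation; the numbering is elided here.
        .replace(/\[(https?:\/\/\S+)\]/g, '<a href="$1">[#]</a>');
    }

    console.log(renderBrackets("See [[front page]] or [https://example.com the docs]."));

Note the [[ ]] rule runs first, so the single-bracket rules never see the free-link form.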


> world before markdown and not wanting to embed html in the page

Wdym


Unrestricted html leads to all sorts of nasty problems with user generated content. Things like <script> bringing in Javascript you don't want, or <iframe> pulling in content you don't want (and messing up the page) or in the days of table layout people just putting in </table>, or people with <a href="goatse.cx"> scattered about in links.

So, you limit the text that user generated content is allowed to use. And yet, you still need to figure out a way to allow links between pages within the site - but not allow the person to use an <a> tag.

The way c2 did it was with MixedCaseWords that were easy for a regex to pick out and create link targets to.

MixedCaseWords required no additional special characters to be used or worry about unmatched character pairs.
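
In other words, the safe order of operations is: escape everything the user typed, then let the engine add the only links that will ever appear. A minimal sketch of the escaping half (the function name is illustrative):

    // Neutralize user-supplied markup so <script>, <iframe>, or a stray
    // </table> become inert text; the WikiWord pass sketched above then
    // adds the only <a> tags the page will ever contain.
    function escapeHtml(s: string): string {
      return s
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
    }

    console.log(escapeHtml('<script>alert(1)</script>'));
    // -> &lt;script&gt;alert(1)&lt;/script&gt;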


I liked CamelCase. The brackets are okay. They solve problems, but as you say (or at least imply) they create new problems that can make the code less elegant (IMO). As a user, the brackets are better, but as a programmer they can be a headache. We independently invented network hyperlinks before the WWW, and they were used in our lab; I don't remember how we did the links. The programmer is no longer with us, but I've messaged the person who had them done to see if he remembers. It only worked on our LAN, so it was amusing, but not as insanely compelling as the WWW. I'm curious to see whether either CamelCase or [[]] was as natural a mechanism.


CamelCase is easy with a regex and you don't need to parse it. It completely avoids problems like how to handle

    [[[a link] with some more text and [[something else]]
Getting into parsing means more complex code, and with that complexity comes the possibility of bugs.

This was also at the time of the Cambrian explosion of Web 2.0 and user content. Lots of different sites took different approaches to handling it. The CamelCase link approach reads kind of clunky but is likely elegantly simple in implementation.

C2 was from the very early days, and something that works, well, it works. Going to a full-on sanitizer and a larger set of allowed html leads to forever chasing bugs and implementing features.

Compare also HN and the rather limited feature set. It is better to have something that works and move on rather than forever implementing features (often with backwards compatibility breaking cases).


Now it makes sense, thanks for the explanation.


Google searches have landed me on it so many times that I ended up bookmarking it and searching it directly instead.


In the source:

   <noscript>
    <center>
      <b>notice</b>
      <p>javascript required to view this site</p>
      <b>why</b>
      <p>measured improvement in server performance</p>
      <p>awesome incremental search</p>
    </center>
  </noscript>


Indeed, that was a relatively recent change that I never understood. It had been working without JS for a long time before that. And page loads became slower, not faster, for me.


> And page loads became slower, not faster, for me.

They're not mutually exclusive: a smaller server load doesn't mean the experience is better for the client.

For instance, the server might just stop doing server-side template rendering. Instead it sends one static page, then the client requests and renders the JSON for the article, or maybe the page only contains the JSON for the article instead of the rendered one.

Then the template rendering cost stops showing up on the server, because it's now performed on the client, and serialising to JSON is almost certainly cheaper than rendering an ad-hoc and half-assed template format.

But now the client has to perform a bunch of extra requests (to fetch the javascript and possibly the json) and it pays the cost for the template rendering, and that might be even halfer-assed, and more expensive, because it's out of sight of the server loads / logs.

The result is that the server has gone from, say, 80ms CPU to 40ms CPU per page (which definitely qualifies as a "measured improvement in server performance"), but the client now needs an additional 300ms to reach loading completion.
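
As a hedged TypeScript sketch of the client half of that trade (the endpoint path echoes the /wiki/remodel/pages/ URLs mentioned elsewhere in this thread, but the field names are assumptions, not c2's actual API):

    // The server now ships a static shell; the client pays an extra request
    // plus the rendering cost that used to show up in the server's numbers.
    interface Page {
      name: string;
      text: string; // assumed field; the real JSON layout may differ
    }

    async function showPage(name: string): Promise<void> {
      const res = await fetch(`/wiki/remodel/pages/${name}`); // extra round-trip
      const page: Page = await res.json();
      // Client-side "template rendering": off the server's logs, on your CPU.
      document.body.innerHTML = `<h1>${page.name}</h1><pre>${page.text}</pre>`;
    }

    showPage("WelcomeVisitors");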


You are 100% right. The server was pegged at 100% and the family (me) was complaining about email not working (who runs their own mail server any more?). We needed to shed load. Some things went to GitHub Pages (https://github.com/WardCunningham/remodeling) and the rest turned into simple content loaded by that. There were some more complexities because of SSL and an old OS, but it works. Things have evolved since and will continue to evolve, but expect the original wiki to remain accessible in some form.


If the wiki is not being updated anymore (or is it?), then why not just host the original one (not the current JS one) as a whole on GitHub Pages? Won't this save server costs?


Good description. And the site owner may have decided that trade-off works well for them - they are not incompetent or evil just because a few niche but noisy users have a moan.


At the same time, the users are rightfully complaining, their experience has been made noticeably worse in multiple ways to no benefit. Worse, they essentially get taunted.


> the users are rightfully complaining [because] their experience has been made noticeably worse in multiple ways to no benefit

Depends on your definition of rightfully. This is an ad-free, cost-free website. If they want to reduce server load or prioritize something other than the non-JS experience, it's their prerogative.

Especially when the complaint is “it’s slower on my client” and not “I can no longer access the data without JavaScript”

> Worse, they essentially get taunted.

Please explain?


It would be great if there was a read-only version of the site that can be crawled.

Much easier to do that than to support both plain HTTP and JS for both read and write.


Sometimes I feel like all 7 people who have JS disabled have an HN account so they can tell people that their site doesn't work with JS disabled


It just violates the KISS principle too much when the site in question is mostly presenting documents. I can see JS being useful if you have some sort of SPA app. But essentially docs with JS are a total no-go. Why do I need to fire up a 50MB binary to read a simple document again? Ahh, because reasons, I see...


Whether or not something is 'simple' depends on what type of simplicity you're optimizing for and your use case/requirements. Front-end js frameworks could make development much simpler than relying on vanilla code or back-end frameworks to accomplish the same tasks. Personally, I don't see the benefit of defining simplicity by how much work the browser has to do unless there's a specific, tangible requirement that demands it.

Accessibility isn't any more of a problem than it is with HTML. For screen readers, et al it's still something that needs to be deliberately incorporated into the document structure, and modern frameworks are more than capable of handling it. What about older browsers with crappy JS implementations? Front-end frameworks have tools designed to extend support to much older browsers with limited JS support. The percentage of people who have access to a web browser but can't use javascript applications is vanishingly small. When I've worked with people making publicly-funded tools for which accessibility must accommodate people with very limited technical means-- unhoused people, for example-- developers tend to skip the web interface altogether and go with SMS combined with telephone or in-person service. JS is not the barrier.

Obviously that 50MB binary is hyperbole, but the baseline Vue include is 16k. It's certainly easier to make a front end JS-heavy and unreliable than plain static HTML, but it's pretty easy to avoid, and design that poor would probably fuck up plain-old HTML and CSS just as badly. The fact is that most users do appreciate the dynamic features and responsiveness that are only possible with JS. If you don't, and want to exclude yourself from modern web functionality, then be my guest. I think it's pretty strange that you'd expect other people to go out of their way to accommodate your rather uncommon preferences to make an experience most users consider worse and that doesn't work towards satisfying any tangible business needs.


I know this discussion is lost and the times have changed. However, two things I can't let stand. Using semantic HTML tags is all you need for proper accessibility of text documents; calling that out as being special attention for accessibility is false in my opinion. And yes, you just found one of those very rare people. I still spend 99% of my work day in tmux on a plain text console. I can, of course, fire up Firefox on a second machine I have next to me for just these problems, but it breaks my workflow not being able to read a text document with a text browser. Don't even bother telling me that this use case is no longer supported. I know that. That doesn't change my technical opinion that JS is way too much of a gun for documents.


> Calling that out as being special attention for accessibility is false in my opinion.

I didn't. I said that accessibility is no more of a problem for js-fueled pages than HTML pages, which was not always true.

> Don't even bother telling me that this use case is no longer supported

Then don't say people are making websites wrong when by common practice you're using the wrong tools to interact with them.

> I know that. That doesn't change my technical opinion that JS is way too much of a gun for documents.

Like I said, feel free to opt out of the modern internet. It's your life. It's just flatly absurd to be in your position and accuse developers of having poor practices when they provide an objectively better experience for probably 9999 out of 10000 other users by using widely accepted, standard, reliable practices that often require less development time.


I am blind. I do rely on accessibility to interact with a computer. Yes, you could accuse me of deliberately avoiding the modern web, but I have my reasons. The primary reason is performance. Even though I feel like you are talking down to me from a pretty high horse, I still don't wish for you to ever experience how sluggish it feels trying to use the "modern web" with a screen reader on something like Windows. Don't even make me start about the hellhole that is Linux GUI accessibility. It was a nice ride once, before GNOME 3 and the elimination of CORBA killed most of the good work done by good people. Fact is, I am too used to a system which reacts promptly when I press a key to be able to switch to a modern browser by default. That would kill all my productivity. Yes, it's a trade-off, but for now, having no JS engine by default is still way better than the alternatives.

Have a nice day, and enjoy your eyesight.


I apologize for saying your use case is incorrect-- clearly, someone using a screen reader would have a totally legit if still comparatively uncommon use case. I have used the modern web in screen readers because I've developed modern web pages to be accessible, which by my estimation means doing more than using a WCAG scanner.

I still think it's ridiculous to say that developers are doing something wrong by using modern practices just because it doesn't fit your use case. You can have your opinion all you want to and I can have my opinion about it.


There’s no reason to think SPAs are in any way objectively better in terms of either UX or DX.

I can’t count the number of SPAs that manage to break basic browser functionality like links, back/forward navigation and scrolling. It’s insane.


> There’s no reason to think SPAs are in any way objectively better in terms of either UX or DX.

No. Not without knowing the use case, the requirements, the users, and all of that. Sometimes I don't need anything beyond a text editor for something I make. Use the right tool for the job.

> I can’t count the number of SPAs that manage to break basic browser functionality like links, back/forward navigation and scrolling. It’s insane.

Yes. With more powerful tools you can fuck more things up, more thoroughly than with less powerful tools. That's not a problem with tools, that's a problem with bad development and design. Assuming that the person who fucked it up that badly would have made a better experience with less powerful tools is almost certainly wrong.


Well, there is obviously a conflict in UI experience. I can very well see how back/forward breaks the idea of a web "app", because what would backward mean in a classical app, except for maybe undo? I tend to put web addresses in roughly these two categories: those that try to be an app, and those which just present a document. True, inline editing may blur the line, but that's how I try to see it. IOW, I am not mad at someone killing my back/forward buttons if those just don't make any sense in the context of the app they are providing. OTOH, I am pretty pissed if someone steps outside of classic HTML when all they are doing is basically providing text/images.


Sure, if you’re building Figma, SPA all the way. If it’s a dashboard or a semi-static document, SPAs are misused and that’s when basic functionality gets replaced with JS, but typically in a broken fashion.


Have you tried https://www.brow.sh to preserve your workflow?


When you want to display a document, using a document format is clearly simpler than using a programming language.


Why use a web browser? Or even a web server? Opening up rcp so users could download it and view it locally would be much simpler.

Of course, some people might want additional functionality, and to facilitate that functionality, we have many technological tools at our disposal which make the process of implementing that functionality simpler than not using them. What you deem to be simple enough without being too simple is based on your use case and preferences.


What additional functionality is added by displaying a document using JavaScript?


What are the requirements and what do you plan on building into it? Annotations? Persistent highlighting? Foldable sections? In-line bookmarks? Citation generators for quotes? Content editing? Comments? Image carousels? Dynamic reading lists? Searching for other papers using selected text?


Makes sense if they don't need JavaScript for hn.


There are dozens of us! Dozens!

I mostly browse sites with JS disabled (with lots of exceptions of course) to get rid of those awful Euro cookie banners. Are those required in US now for some reason? My browser doesn't save the cookie that says they can save cookies, so they constantly prompt me.


uBlock Origin removes those successfully, and a lot of other annoyances.


There are actually 8 of us, not counting Ed Snowden, who once famously explained to Bart Gellman: "turn off the fucking scripts".

Almost all modern browser compromise is via JS.


FYI: There have been JS-based exploits in the past that were used to de-anonymize Tor users: https://lists.torproject.org/pipermail/tor-announce/2013-Aug...


uBlock Origin allows disabling JS by default; I use it on my phone to preserve battery life.


The C2 wiki was rewritten and reimplemented as a single-page app, currently at http://fed.wiki.org/

It is an interesting change, to a more federated style.

I ended up doing a small project inspired by this change, at https://github.com/dexen/tlb


Complete with unnecessary page transitions, cascading loaders and a ton of layout shifting. Classic SPA success story.


I don't 100% disagree with the sentiment, but I think in this case the push-in style transitions fit well with a knowledge base wiki in which you (or at least, I) often drill down and pop out of topics.

Though... that particular implementation seems to not handle unwinding the stack very well. And as the classic web2 adage goes, if you think your animation is "just right", knock another 3rd off it at least.


IMO TiddlyWiki[1] is a much better implementation of this UI idea of bite-sized, heavily linked text (card catalog?) with multiple simultaneously visible entries. (No federation and a bizarre storage approach though.)

[1] https://tiddlywiki.com/ (haven’t looked at the homepage in years, the current one seems kind of awful and not really bite-sized unfortunately).


> It is an interesting change, to a more federated style.

what does "federated" mean in this context? do you mean decentralized segmented "peer to peer" storage, or something?

unfederated wikipedia is not helpful in defining federated:

Federated content is digital media content that is designed to be self-managing to support reporting and rights management in a peer-to-peer (P2P) network. Ex: Audio stored in a digital rights management (DRM) file format. https://en.wikipedia.org/wiki/Federated_content


That’s certainly.... a definition of a thing. That I haven’t seen once in my life. In any case, the relevant page is https://en.wikipedia.org/wiki/Federation_(information_techno... .

The idea is essentially to follow the way email (and partly the Web) works: users aren’t tied to one server (centralized), but neither are they required to each run their own one (peer-to-peer), instead there’s a narrower group of server operators who host users, but those users can cause the server software to connect to a server it doesn’t know about if they mention it explicitly.

Of course, if the protocol is too weak in allowing the operator to control those connections (in either direction), it will evolve informal means for that which will make all but the largest servers extremely unreliable, much as email did. On the other hand, the experience of Mastodon shows the dangers of operators exercising too much control (where e.g. Mozilla seems eager to defederate with any server that has anyone post anything even slightly offensive or objectionable to anybody whose opinion Mozilla considers valid).

The federated wiki idea as promoted by Ward seems to be to have the federation (network of servers) be able to browse each other’s pages—so far so Web—but then to allow each user to clone and edit any page anywhere, storing the clone on their own server. The original page isn’t affected, except I think there’s a provision for some sort of backlinking (referral spam? what’s that?). It doesn’t sound unreasonable, but I’m not sure it can support anything interesting either—for a large pool of collaborators you’d need a Torvalds-like full-time merge BDFL, and I haven’t even seen a discussion of pull requests or anything similar.
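
For what it's worth, the clone-and-edit model can be sketched in a few lines of TypeScript (the data shapes here are illustrative, not the actual federated wiki JSON format):

    // A page lives on whichever server its author uses; cloning copies it
    // there and records where it came from. The original is untouched.
    interface WikiPage {
      title: string;
      story: string[];      // paragraphs of content
      forkedFrom?: string;  // origin server, if this page is a clone
    }

    function clonePage(page: WikiPage, originServer: string): WikiPage {
      return { ...page, forkedFrom: originServer };
    }

Merging divergent clones back together is exactly the part flagged above as needing a full-time merge BDFL.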


Works badly on mobile, with some weird transitions. Can't believe C2 has fallen into this trap as well.


> The C2 wiki was re-written and re-implemented as single page app, currently at http://fed.wiki.org/

It seems empty. Or am I not grasping what this is?


I don't believe C2 was moved here. The federated wiki seems to be Ward Cunningham's experiment - an answer to centralized wikis, also invented by Ward. It is interesting, but as far as I can tell it is not some kind of mirror of the content on C2. If you click "recent changes" a lot of stuff comes up, mostly about federated wiki.


Holy cow, this redesign is awful. I'm not against federated content, but this isn't a wiki at all.


But all the old content seems to be gone (or all links to it are broken)


Sadly, I've had bad experiences with some Single Page Apps. Here are some problems I've had:

Linking to a specific topic.

Archiving the site.


Also routing breaking if you trigger back/forward too fast (looking at you, GitHub)


This is completely broken and unusable on latest iOS Safari.


If you find inspiration in converting a raft of long-term usable URLs into a single-page usability clusterfuck, one questions your motives and your craft.


Is there any way to see a running demo of tlb (or a real-life website)?


...that is fucking terrible.

Do they not know the back button and tabs exist in browsers?


I'm pretty sure I browsed the JS version last week, or maybe even just 2-3 days ago.

Links to the JSON-formatted pages (thanks, fellow HNers!) don't seem to work either:

https://c2.com/wiki/remodel/pages/EgolessProgramming

https://proxy.c2.com/wiki/remodel/pages/

I really love(..d?) the "stream of consciousness" nature of the c2 discussions. It is easily among the greatest intellectual rabbit holes of oldschool internet. The extremely minimal, kind of robust formatting probably also contributed to why it was such a compelling read at times. Content over form, for sure.



The copy of the wiki on webarchive doesn't seem to work for me.

The C2 wiki is one of the foundational relics of the old web that need to be preserved for the future.


https://web.archive.org/web/20170430022841/http://wiki.c2.co...

I picked an arbitrary older date and it seems to be working for me (pre-redesign).


What you're linking to is already the "new" JS version. See https://news.ycombinator.com/item?id=35951027 for the last static version.


Never having heard of this site: what is/was it?


It's the original wiki, by Ward Cunningham. It has a lot of interesting discussions about software topics. I noticed the site was down because it has a page about the "Cobol Fallacy": that it is a misconception that software would be easier to create in natural language. I wanted to see how the (old) discussion on the topic compares with the present LLM mania/break-through.


There are people saying it was the original wiki, but let me spell out what that actually means.

Before the C2 WikiWikiWeb, few web sites had experimented with making it possible for their users to alter the site's contents. Granted, there were many sites with messaging forums you could post to, and there were places where you could add reviews or contribute new content entries, but nothing I can remember where you could edit the fabric of the site itself. Sites back then were 'published' by someone who owned them, and any contributions would go through a moderation process before they would be accepted and published, so there was no immediacy to such edits.

The C2 WikiWikiWeb allowed any user to immediately make an edit or create a new page, and the site relied upon its persistent history to roll back changes that the community deemed destructive. I remember feeling quite excited by the concept because it was so alien at the time -- that someone was willing to allow anonymous users to put stuff on a site they were ultimately publishing.

The C2 WikiWikiWeb experiment is what ultimately led to the creation of Wikipedia: an encyclopaedia that could be edited by the end users, hence the name. (In turn, the WikiWikiWeb was named from the Hawaiian word 'wikiwiki' meaning 'quick', which alluded to the lack of any moderation steps in its edits.)


Everything2 was another example of a user-driven site that allowed linking between pages. It's actually still alive today:

https://everything2.com/title/Y+combinator


There's a whole family of sites based on the same codebase as Everything2 (which was a cousin of the /. codebase).

https://everything2.com/title/Everything+Engine

Aside from E2, it is likely that PerlMonks is the other still active site - https://www.perlmonks.org ( https://en.wikipedia.org/wiki/PerlMonks )

The others seem to have fallen into disrepair if they are still up at all.


h2g2.com is another user-edited encyclopedia site which is almost, but not quite, entirely unlike e2. It launched about a year after e2, and about two years before Wikipedia.


I thought that the Interpedia concept was closer to an origin of the Wikipedia idea.

https://en.m.wikipedia.org/wiki/Interpedia

I remember participating in the discussion.


Among other things, the home of the first Wiki, where people talked about software design and development since the 90s.



Check out the HN submissions from this domain: https://news.ycombinator.com/from?site=c2.com


the original wiki


software development wisdom from a time before cookie-cutters


From the source:

  <center>
    <b>notice</b>
    <p>javascript required to view this site</p>
    <b>why</b>
    <p>measured improvement in server performance</p>
    <p>awesome incremental search</p>
  </center>
It does load faster now that it doesn't display anything.


It occurs to me again that I need to figure out how entire websites can be downloaded and archived. Like Archive.org, but local.



https://www.httrack.com/ is a good option


Most SPA websites today cannot be downloaded through HTTrack.


Yes, SPAs will be next to impossible for a tool like this; I'm not sure how any tool could archive such a site, tbh.


I use browsertrix-crawler[0] for crawling and it does well on JS heavy sites since it uses a real browser to request pages. Even has options to load browser profiles so you can crawl while being authenticated on sites.

[0] https://github.com/webrecorder/browsertrix-crawler


The entirety of the pre-rework archive can be found at https://archive.org/details/c2.com-wiki_201501


There was a mirror at https://kidneybone.com/c2/wiki/ without the SPA stuff, but that seems to be 503'd right now. (And even if it wasn't, I don't think it has a root page; you'd need to type in one of the page URLs manually.)


Huh, my other pages don't have the same issue. I guess the search indexer must've died - I'll restart it in an hour or two.

FWIW, the snapshot of c2 that this runs off of is somewhat dated (https://archive.org/details/c2.com-wiki_201501), so the last ~7 years of updates after the move to the federated wiki aren't present.

Does anyone know where these original pages ended up in fedwiki after the migration?


Oh, it works now. Thanks!

For those reading the comments here, https://kidneybone.com/c2/wiki/CategoryCategory is a great place to start browsing.


The instructions for spinning up a read only mirror: https://github.com/adamnew123456/wiki-server and in particular the C2 archive from https://archive.org/details/c2.com-wiki_201501


I thought c2.com was in the middle of a redesign. Is it actually gone or just a technical glitch?



Using an SPA to reduce server load, when you could generate this content and use edge caches and have zero server load?

I'm more upset with the new design choices than the architecture choice. The old site was ugly in a retro-cool way. The new site is just ugly.


This is what I get

    page does not exist


C2 was an incredible place at its peak - I was lucky to have started my career while the community was still (only just) functional. Looking back, it's hard to over-state how formative my time on C2 was - not only did I learn a lot about pattern languages, and coding, but the Wiki idea and the way the community operated is something I still think about today.

C2 was a utopian vision of well-informed, kind, co-operative people working together in a radically open and egalitarian way. And it really did work, for a while. Unfortunately, the reasons why C2 ultimately died have been obscured by a well-meaning process of pruning that I think is meant to remove the "bad stuff" and leave only the "good stuff" for posterity. This is a shame, because the truth really is instructive - a few very prolific, toxic, borderline delusional people started dominating the wiki to the extent that more reasonable contributors just moved on. The C2 community started with an assumption that everyone could be reasoned with, and tried to handle the situation kindly and rationally. It was amazing to see the damage a very small number of people - basically just two - could do to a whole community of hundreds of well-meaning but naive people. It got to the point where there were pages dedicated to trying to think about the problems these people posed, with endless discussions about the paradox of tolerance and handling things through openness and kindness, and small factions arguing for permanent bans. Ultimately I think both bad actors _were_ banned, but by then it was too late - all the air was sucked out of the community. Watching the death of C2 unfold really darkened my view about the prospects of truly open societies, and deeply informed work that I've done on building communities since.

Today, nearly all signs of the way this devastation played out have been erased from history. If you search for the names of the malignant characters there are a few mentions here and there, but there's no way to piece together the true sequence of events. I think an important part of C2's story, and one that is more relevant today than ever, has been lost as a consequence. I'm sure Ward has the full edit history of the wiki around, and I think he should publish it, complete and unvarnished, so we can study it and learn from it.


By now it only says, “page does not exist”. :(



Ooh, is that tiddlywiki, or at least inspired by it?


Not tiddlywiki, but instead a complete reworking of the original Wiki, work done by Ward Cunningham himself.


wow. New site is fucking awful. http://fed.wiki.org

What an abomination. JavaScript is ruining the internet.


Is this a deliberate shutdown or just an outage?


I have visited the website but understood nothing. Can you give me any good links from this source to at least understand what this wiki is about? I do not see a search field and I do not remember other websites with this kind of design.


It’s the original wiki (and coined the term):

https://en.m.wikipedia.org/wiki/WikiWikiWeb

Has a bit of a focus on coding, with pages/discussions for various programming patterns and terms, like:

https://c2.com/xp/YouArentGonnaNeedIt.html


c2 is a special place. It's one of the earliest Wikis and contains a wealth of information, thoughts and history on programming.

You might find http://wiki.c2.com/?WelcomeVisitors a good place to start if you are curious.


Frankly, I don't find it particularly useful - almost everything is written in such a way that you have to know what the page is about to be able to understand. It's more like a collection of links than a wiki proper.


this is literally the site that the word 'wiki' was invented to describe; it is the embodiment of the platonic essence of 'a wiki proper'


1. That's not how it works. The first car is not the platonic essence of a car... 2. I guess it's more correct to say that it's too wiki and not enough pedia. I.e. a collection of links for a small community. But nobody thought of quality standards or introduction for new people. I.e. what function does it serve?


... or the alpha version that subsequent experiments with the idea improved upon.


some other wikis are certainly quite wonderful


It was a wiki that didn't try to separate the content from the community that created it, so it was more like a forum where people could more or less agree on how to summarize a thread.


In a way it's like most pages in corporate wikis I've encountered. Of course the topics and content here are light-years better.


That's weird. I spent some time going through the wiki last night. I was actually trying to find the original source for WikiWikiWeb, but I couldn't find a download anywhere.


What happened?


The web happened


I don't like the pop-up page ui.


I just see a blank page.


We do need c2.com. I was just thinking about wiki because it looks like I'm going to need to train a few people, and I was wondering what Ward (not my dad) was running at c2.com. I built a bunch of pages there, and although perhaps the development ideas are not leading edge (or new?) any more, I at least like to look up the stuff that I wrote there. Also, I don't know who is using what out there, but crucial rules of thumb on that wiki are not followed, and the continued fragility of systems is a result. That community was, let's say, idiosyncratic, but all good communities are, IMO.

I am still very much hoping that c2.com comes live again, with improvements, and that stuff like federation gets hammered out. There are two things that I liked about wiki: 1) Its elegant, user-friendly design/philosophy/ethos. :) 2) Its workingness. :( I am so glad I stumbled on this site. Bookmarking. If anybody cares, the query that got me here was: What is c2.com running? I just wanted to know the OS in case I wanted to send code along.

Here is a design improvement I would hack in, just for the insanity of it: Add a Generative Pre-trained Transformer so you could type in ... aw nuts. You could get it to write something for you like this:

Automated content generation: You can issue commands to the GPT backend to write about specific topics and have it generate content for the wiki page. This can be useful for quickly populating or updating pages with relevant information.

Collaborative editing: With the GPT backend, you can allow visitors to temporarily rewrite the wiki page to their liking. This enables collaborative editing, where multiple users can contribute and modify the content while ensuring a history of changes is maintained.

Intelligent refutation: You mentioned the ability to issue a disagreement and instruct the system to refute the previous content. By integrating GPT, you can prompt it to generate counter-arguments or provide alternative perspectives to foster a balanced discussion.

Personalization and anticipation: Over time, the GPT backend can learn from user interactions and get to know individuals personally. This can enable it to anticipate their preferences, interests, or even the types of content they are likely to contribute. Such personalization can enhance the user experience.

Aesthetic improvements: You can instruct the GPT backend to optimize or rewrite its own code to make the wiki front end more aesthetically pleasant. This may involve generating CSS styles, layout modifications, or even interactive design elements based on user preferences.

Intelligent linking: By leveraging the GPT backend's language generation capabilities, you can automate the process of hyperlinking important terms, concepts, or keywords in the wiki content. The system can identify relevant sources, explanations, or further reading materials and dynamically add hyperlinks for easy access to additional information.


I am consistently shocked that anyone ever found c2 insightful or helpful. Maybe it’s one of those things that was better in ye olden days.

Every c2 link I have ever followed was just an incoherent blob of text arguing with itself.

The TvTropes of the programming community, and just as much of a waste of time too.


It's useful as a survey of (perhaps somewhat old) opinions about a topic; it's pretty useful as a collection of pros/cons, and imo interesting for its historical context besides.


> an incoherent blob of text arguing with itself.

That's exactly why I find it valuable. Little to no pretense of looking good and a bunch of perspectives in little space. An occasionally useful starting point to get the lay of the land.


It's also almost perfectly what any long-running or stickied thread on any forum ever was, and that was usually a good thing.


> The TvTropes of the programming community

Not sure what you were expecting, but that was very much what some of us were looking for.


> an incoherent blob of text arguing with itself.

Kinda like HackerNews but from before someone invented modern threading and quoting, yeah?


A time waster's not always a bad thing; I spent lots of time browsing it back when I was a student intern because it seemed "on topic" enough for me to not feel guilty.



