Docs as code (2017) (writethedocs.org)
111 points by sumnole 73 days ago | 99 comments



A subset of this idea is a hill I am willing to die on: the documentation for a codebase should live in the same repository as the codebase itself.

I'm talking about API documentation here - for both code-level APIs (how to use these functions and classes) as well as HTTP/JSON/GRPC/etc APIs that the codebase exposes to others.

If you keep the documentation in the same repo as the code you get so many benefits for free:

1. Automatic revision control. If you need to see documentation for a previous version it's right there in the repo history, visible under the release tag.

2. Documentation as part of code review: if a PR updates code but forgets to update the accompanying documentation you can catch that at review time.

3. You can run documentation unit tests - automated tests that check that the documentation at least mentions specific pieces of the code (discovered via introspection). I wrote about that a few years ago and it's been working great for me: https://simonwillison.net/2018/Jul/28/documentation-unit-tes...

4. Most important: your documentation can earn trust. Most documentation is out of date and everyone knows that, which means people default to not trusting documentation. If anyone who looks at the commit log can see that the documentation is being actively maintained alongside the code it documents they are far more likely to learn to trust it.
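A documentation unit test of the kind mentioned in point 3 can be sketched in a few lines of Python. The `Client` class and `DOCS` string below are stand-ins for a real package and its docs file; the introspection pattern is the point:

```python
import inspect

# Stand-in for the real code under test; in practice you'd import your package.
class Client:
    def fetch(self): ...
    def close(self): ...
    def _internal(self): ...

# Stand-in for docs read from e.g. docs/api.md in the same repository.
DOCS = """
## Client.fetch
Fetches a resource.

## Client.close
Closes the connection.
"""

def undocumented_methods(cls, docs):
    """Public methods of cls (found via introspection) the docs never mention."""
    public = [name for name, obj in inspect.getmembers(cls, inspect.isfunction)
              if not name.startswith("_")]
    return [name for name in public if f"{cls.__name__}.{name}" not in docs]

# The "unit test": fail if any public method is missing from the docs.
assert undocumented_methods(Client, DOCS) == []
```

The test doesn't check that the docs are *good*, only that they at least mention every public name, which is enough to catch the most common form of drift at review time.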

The exception to this rule for me is user-facing documentation describing how end users should use the features provided by the software. I'd ideally love to keep this in the repo too, but there are rational reasons not to - it might be maintained by the customer support team who may want to work in more of a CMS environment, for example.


Love your blog, but in this case I want to take a more nuanced, if not opposite, stance:

There are many things closely related to code, that shouldn't necessarily live in the same repository. First, we need a common understanding of what should live together in a repository. This is much like the discussion about mono vs. multi-repo. A good rule of thumb is that if it is branched together, it lives together.

Effective documentation is more than a strict API reference, and not something that can be generated from docstrings alone. It offers a high-level overview to understand the problem being solved, the architecture of the software, and a general roadmap of how it is developed. Effective documentation should cover both backwards and forwards revisions and how those migrations should be handled.

But this is also true on a reference level. Reading the documentation of a specific function, I want to know if something relevant happens to this function in the next revision. There is nothing worse than checking out the documentation for the current production revision 34.5 and following best practice there, only to discover I should have checked out revision 34.6 instead because best practice changed there. Specific revisions should be documented, but documentation should not be limited to a specific revision.

There is a scale of how closely other artifacts follow code revisions: Tests are mostly branched with code, and should probably live with it. Documentation can sometimes be branched with code; some should and some shouldn't live together with code. Deployment code and configuration management must be able to deploy old and new code from the same code base, and is even less likely to benefit from living with it. Then there's application state and test data, which is something else entirely.


If the deployment code needs to be able to ship different versions, I would keep that deployment code in a separate repository - with its documentation bundled there.

The other form of documentation that I am passionate about is documentation that lives in issues and is then linked to from commit messages.

The great thing about issues and issue comments is that they have a clear timestamp attached to them, and there is no expectation that they will be kept up-to-date in the future.

This makes them the ideal place to keep documentation about how the code evolved over time, and the design decisions that were made along the way.


That is also true. But I realize the above comment could be clearer, perhaps with an example.

A well-working project such as git has a Documentation directory in the same repository. That's good, but that documentation is far from enough. The most canonical documentation is the "Pro Git" book. That documentation describes not only how to use the software, but also how versions differ, how functionality has evolved, and what the internal data structures look like.

That documentation does not live in the git repository, and that's a good thing, as it is not versioned in the same way. That probably goes for a lot, if not most, of good documentation out there. Insisting on keeping documentation in the main code repository would go against that.


Sure, there's a whole world of documentation that can live outside of the repository - anything written by people outside of the core development team such as tutorials, books etc.

Of course, the problem with documentation like that is that it goes out of date almost by its very nature. The great thing about documentation in the official repo is that it can come with a guarantee to be maintained in the future - if that documentation gets out-of-date it's a bug, and should be fixed.

External tutorials and books carry no such expectation.


Yes, but that Pro Git is developed by people outside the core development team (whatever that might mean) is beside the point. The point is that it is documentation that does not move in lockstep with the software. And most good documentation doesn't!

Had Hamano or Torvalds written Pro Git, it would still have been worse off had it been forced into the release schedule of git itself. The most useful documentation describes all versions of the software, and should be only loosely coupled with it. The same can be said for software websites, which are also a type of documentation.

(This is, incidentally, also why over-reliance on docstrings and documentation testing makes good documentation hard. Certain examples need to be produced by older revisions of the software, especially when incompatibilities are what needs to be documented.)

Not all documentation is like that, of course, but when someone successfully insists on hard coupling documentation to code, that puts a hard limit on the type of documentation that will be written.

Despite how much having a common release process for code, documentation, and deployment code tickles our nerd fancy, we should consider the opposite, as there can be benefits from a looser coupling. Never let smart stand in the way of good.

As is perhaps obvious, I too have fought on this hill many times, but from another perspective. Docstrings are good. Documentation in the code repository is good. But that is only a small subset of all documentation. Blessing that subset as canonical, or insisting that it should be all there is, is a much too common mistake.


"Not all documentation is like that, of course, but when someone successfully insists on hard coupling documentation to code, that puts a hard limit on the type of documentation that will be written."

I don't think I've ever seen a project argue so passionately for "all documentation lives in the same repo as the code" that people were put off writing books or tutorials that didn't go in that repo.

I'm pretty sure we aren't actually disagreeing here. I'm fine with "unofficial" documentation - books, tutorials etc - that lives outside the repo. The official reference documentation that's updated to reflect changes made to the project should live alongside the project itself.


"Specific revisions should be documented, but documentation should not be limited to a specific revision."

It's unclear to me what this is trying to argue. So apologies if the below entirely misses your point.

Technical documentation that refers to a codebase should live and be maintained with it. Otherwise there will certainly be drift. Drift can obviously still happen regardless, but at least then it is provably something that shouldn't have.

Not maintaining accurate documents is like disabling tests because they don't pass. It's easy to do but not right.

A checked in codebase to me should be as current and correct as possible. That includes accurate documentation.

I've rarely seen documentation that isn't tied to the codebase being maintained/valued.


> "Specific revisions should be documented, but documentation should not be limited to a specific revision."

I think this boils down to “what do you do if you realize the documentation for v1.1 says that some feature does X when it actually does Y, but you’re already on version 2.2?”

If v1.1 docs are tied to the version tag in VCS, that incorrect statement cannot be fixed.

And it seems that fixing that forces you into backporting documentation even if you don’t release and maintain parallel versions of your software, which… kinda sucks.

To be honest, I much prefer docs in the repo, because it facilitates code review — a good patch touches some implementation, some tests, and some docs.

The downside when only the latest few versions of the software are supported and only the very latest docs are maintained is that historical docs will probably not be fixed.


> If v1.1 docs are tied to the version tag in VCS, that incorrect statement cannot be fixed.

It can be fixed: just check out the branch, git cherry-pick the updated changeset, or even write it by hand, then do a re-release. You should have a process for this anyway, as there might be a critical bug in that code that needs to be fixed and a re-release must happen.

Of course git's handling of branches leaves much to be desired (I want Mercurial to come back just for its branch handling), so developers often forget they can do this, even though it isn't really that hard. It is tedious though, and you will eventually have dual maintenance where you have to write the same change twice because the two branches have diverged - but that shouldn't be an excuse not to do it.
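The backport workflow described above, as a scratch-repo sketch (the history, filenames, and tag names are invented for illustration):

```shell
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev

printf 'feature does X\n' > docs.md              # the incorrect v1.1 docs
git add docs.md && git commit -qm 'release 1.1' && git tag v1.1

printf 'feature does Y\n' > docs.md              # fix made later, on main
git commit -qam 'docs: feature does Y, not X'
FIX=$(git rev-parse HEAD)

git checkout -q -b docs-fix-1.1 v1.1             # branch from the old tag
git cherry-pick "$FIX"                           # backport just the docs fix
git tag v1.1.1                                   # re-release corrected docs
```

The same branch-from-tag, cherry-pick, re-tag dance covers critical code fixes too, which is why having the process in place pays off twice.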


> a hill I am willing to die on: the documentation for a codebase should live in the same repository as the codebase itself.

I'm a big fan of this and treating documentation like a first class citizen.

There's also another benefit I think should be explicitly mentioned. It makes debugging, onboarding, and solving things much faster. We all know and have experienced the joke where you question who wrote this pile of garbage to find out that it was you all along. But at the core of this joke is the fact that we can't even remember what we ourselves did. So while things make sense at the time and might even seem obvious, that does not mean it'll continue to make sense nor that it'll be obvious to others. Especially to people who are onboarding into a new codebase.

Yes, documenting while you code takes "longer." But it only takes longer in the short run; it is much faster in the long run. The question you have to ask is whether you're doing a sprint or a marathon. But then again there's ill-advised and self-contradictory advice on well-known sites[0], and some companies perform back-to-back sprints. I don't think people realize we're the ones creating our own messes. As anyone with anxiety will tell you, when you are rushing around it becomes easy to overlook small mistakes that compound, leaving your anxiety worse than it was had you just slowed down in the first place - a vicious cycle where you only get more stressed and end up creating more problems than you solve.

There are times to move fast and break things, but if you don't also dedicate time to clean up, your house will be filled with garbage and inhabited by Lovecraftian entities made of spaghetti and duct tape.

[0] https://www.codecademy.com/resources/blog/what-is-a-sprint/


5. The documentation won't get lost in a botched wiki migration or something like that.

The documentation in the repo should not be restricted to relatively low-level stuff about APIs, it should also include design documents and cover the higher level concepts the developers use to make sense of the app and its APIs. I can't tell you how many times I've seen these concepts lost after the original developers move on, and then get violated in ways that make the app much harder to comprehend.


The "documentation" for Lemmy consists merely of an auto-generated JavaScript library API dump with no real explanation of what most of the endpoints do (many are named ambiguously), how the general flow of things is supposed to work, or even how to do common things like find a user's comments or posts (would you have guessed they're both under "/user"? Because they sure don't tell you that). Especially if you don't know JavaScript you're going to have a bad time trying to use that API. And the devs defend it if you tell them this, claiming "it defines everything perfectly, it's so easy."

One time my company purchased a $5k commercial license for x264 and was met with "the code is the documentation." That set us back literal weeks.


  > the documentation for a codebase should live in the same repository as the codebase itself
This! 100%. Emphasis - codebase documentation. Not user guides.

After doing this a couple times, it's a no brainer. The benefits are significant, the effort minimal. Just add a docs dir at the project root and go to town.

The docs dir has some very interesting stuff - how to run parts of the api locally, tricks to make auth bearable for local development, commands that get new team members going at hyperspeed, what parts talk to what parts, which files are important for what flows, why some refactoring was attempted but abandoned, high level limitations and benchmarks, history on how some monstrosity came to be with some jokes sprinkled about.

everything just one cmd+shift+f away.


It works for user-facing documentation too. There are actually pretty good reasons for this - e.g. you can use the tests to autogenerate up-to-date screenshots with Playwright to put in the documentation.

I'm pretty convinced that there should be a single source of truth for specifications, tests and documentation but I think the industry will take a while to catch up to this idea.

I built a testing library centered around this (same as my username) but it's hard to get people to stop writing unit tests :)
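A toy sketch of that single-source-of-truth idea - each step is executed (so the doc cannot silently rot) and also rendered into a markdown how-to. The dict-based "app" is purely illustrative, not the commenter's actual library:

```python
# Spec, test, and documentation share one source: a list of executable steps.
steps = []

def step(description, action, expected):
    """Run a documented step, assert its result, and record it for the docs."""
    result = action()
    assert result == expected, f"{description}: got {result!r}"
    steps.append(f"- {description} (returns `{expected!r}`)")

accounts = {}  # stand-in for the application under test
step("Create an account with `accounts['alice'] = 0`",
     lambda: accounts.update(alice=0), None)
step("Read the balance with `accounts['alice']`",
     lambda: accounts["alice"], 0)

# Render the same steps as user-facing markdown.
doc = "## Getting started\n" + "\n".join(steps)
print(doc)
```

If a step's behavior changes, the run fails and the generated how-to cannot be published until both are brought back in sync.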


I actually built my own Playwright screenshotting software with this idea in mind too: https://shot-scraper.datasette.io/ - I wrote about using that for my project documentation here: https://simonwillison.net/2022/Oct/14/automating-screenshots...

Really it comes down to the team you are working with. If you have user-facing documentation authors who are happy with Markdown and Git you can probably get this to work.


That's very cool.

I think screenshotting needs to be integrated into the tests though - if a scenario involves a wizard or something, later screenshots depend on the actions taken in earlier steps.


The thing I really want is automated short video demos, but I've not found a good path to those yet.


The skeleton test-generating-docs example I built to exhibit the framework actually does this. Here's an example:

https://github.com/hitchdev/hitchstory/blob/master/examples/...

It records a video while running the test and then at the end runs it through FFMPEG to make a smaller, slowed down GIF that can be embedded in the autogenerated docs.

It's quite rudimentary though. I've been meaning to try making something more sophisticated and even potentially do additional automated video editing to inject text from the steps or something.


I would be very happy if as much developer documentation as possible was actually executed as part of the code.

For example, a diagram of how different services interact can go out of date. It would be better if there was a config file describing which services can be called, and this config file was used to generate firewall rules (for the case where dependencies on services are missing) and alert rules (for the case where unnecessary dependencies are never removed). Another example might be OpenAPI docs that you use to validate requests and responses.

I think that when you enforce a common source of truth behind both your docs and the functionality of your system, those docs can never become outdated. If you just shove docs into git without using them for anything they can easily rot away.
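A minimal version of the service-dependency idea (the service names and rule syntax are invented for illustration, not any particular firewall's format):

```python
# Single source of truth: which service may call which. Used both to render
# documentation and to generate deny-by-default firewall allow-rules.
ALLOWED_CALLS = {
    "web":     ["auth", "orders"],
    "orders":  ["billing"],
    "auth":    [],
    "billing": [],
}

def firewall_rules(calls):
    """One allow-rule per documented edge; anything undocumented stays blocked."""
    return [f"allow {src} -> {dst}"
            for src, dsts in sorted(calls.items())
            for dst in dsts]

def undeclared_targets(calls):
    """Call targets with no entry of their own - i.e. the documented
    'diagram' and reality have drifted apart."""
    return sorted({d for dsts in calls.values() for d in dsts} - calls.keys())

print("\n".join(firewall_rules(ALLOWED_CALLS)))
assert undeclared_targets(ALLOWED_CALLS) == []
```

Because the firewall is generated from the same table the docs are, a missing edge breaks deployment rather than quietly becoming a stale diagram.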


I have often wondered why Android's javadoc is so awful ... and thought, maybe precisely because it's embedded in such a large codebase it doesn't get updated, for risk vs. perceived-benefit reasons (because of the proximity of the javadoc to the code). Of course, it could be cultural or other things ... Perhaps the tooling sees changed sources, false positives for code changes, and there is a desire to eliminate this to help downstream consumers, etc.?


True. Confluence or whatever corp shitware is where technical documentation goes to die.


I’ll happily die on that hill with you


This is exactly what we do for the Factorio modding API docs. The docs are embedded inside the codebase, alongside the classes and methods that implement the functionality the docs describe.

So they are written and adjusted as the functionality is implemented, and they can be reviewed alongside the code PRs. The CI builds the docs and makes sure there are no issues.

The format is a custom one, which is parsed and converted into JSON for language servers and into the API website. Not sure how you'd test the docs content, but this parser is tested for sure.

Works great for us in general.


Developers are too close to the code to write effective documentation for it. They will go into great detail about things that nobody else cares about, while skipping important parts because to them it is obvious.

While it is possible to do okay anyway, it only happens if there is effort over time.

I'm convinced the best thing to do is this: when someone asks you a question about your APIs, go write the answer that person needs (now that you are not so close to the code, you can do this better), and have them review it until they understand. You are not allowed to talk to that person except via new documentation, while they can pester you as much as they want until you make the documentation usable. It will still take some rounds, but if nobody is reading the documentation there is no point in writing it either.


Why not just put forth/use Literate Programming?

https://www-cs-faculty.stanford.edu/~knuth/lp.html


I've been working on a personal project for what I call "semiliterate programming" ;) because I think "Write a book about your code that happens to contain all of your code" is a bridge too far for nearly everyone.

So, I'm trying to find the place between Doxygen and full-blown literate programming. Encouraging disjoint prose documentation rather than parameter-by-parameter docs or chapter-by-chapter docs.

Doxygen Markdown Support made my system largely unnecessary. But, I still use mine because it has real-time preview, is based on https://casual-effects.com/markdeep/, and I personally don't care for classic Doxygen style documentation.

Meanwhile, this article sounds like it's about literate programming stuff. But, it's actually about using code-oriented tools to write documentation.


I'm finding that future-self really appreciates the effort past-self made to document things in book form:

https://github.com/WillAdams/gcodepreview/blob/main/gcodepre...

and

https://github.com/WillAdams/gcodepreview/blob/main/gcodepre...

for my current project.

Moreover, it seems to me that there would be a great deal of synergy in using Literate Programming techniques when:

>using code-oriented tools to write documentation.


Literate programming only works for small scripts and narrative documentation, not for e.g. API documentation.


not true:

"This book describes pbrt, a physically based rendering system based on the ray-tracing algorithm." ( https://www.pbr-book.org/3ed-2018/Introduction )

and:

"This book (including the chapter you’re reading now) is a long literate program." ( https://www.pbr-book.org/3ed-2018/Introduction/Literate_Prog... )


There is far more to the actual software of pbrt than what is described in the book, and the book is huge. All the mechanics of the architecture are really something you can only find out by looking at the actual C++ files.


https://www.pbr-book.org/

>A method known as literate programming combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension.

Where are these C++ files which are not included in the text?


> Where are these C++ files which are not included in the text?

https://github.com/mmp/pbrt-v2

https://github.com/mmp/pbrt-v3

https://github.com/mmp/pbrt-v4

Have you read these books and modified the source to pbrt? The books contain small fragments of source code and aren't close to a full program.


Interesting.

I guess that including those lines would have been problematic in terms of book length/page count?


And now imagine maintaining the code of PBR, and with more than 5 people working on it at the same time. Literate programming is great for reading, although I prefer having "real code" (which can normally be extracted, that's not a problem) to read too.

Btw. there is a newer - 2023 - edition available now: https://www.pbr-book.org/4ed/contents


I'm not saying it is for everyone, or for every project. My point was that "Literate programming only works for small scripts and narrative documentation" is far too wide-sweeping of a statement. There are plenty of non-trivial examples of its use, and it obviously "works" for some people.


It certainly does, but these "some" really are exceptions. Just try it for yourself, edit PBR and compare the experience to "just" editing the source code of PBR (yes, of course, not being used to something doesn't help). To be honest, I haven't looked at PBR for more than a decade so maybe I would succeed nowadays, but I doubt that.

I guess realising that literate programming is literally [sorry] writing a programming book should be enough of an argument to know that it isn't suitable for most people. And no, a white-paper is not a "programming book" suitable for most people to read ;)


OIC.

I guess that the books at:

https://www.goodreads.com/review/list/21394355-william-adams...

which include a typesetting system, a font design language, a 3D renderer, and an MP3 implementation qualify as "small scripts"? What is the threshold for such? TeX.web outputs some 20,619 lines of Pascal code for conversion to C and compiling.


"Doesn't work" does not mean that you can't write such books/programs/documentation. It means nobody (yes, yes, exceptions ...) can maintain such code. Look at Jupyter Notebooks - the most used literate programming environment nowadays - and their usual content. The main problem of documentation isn't solved by literate programming: how can you make sure that any relevant documentation has been updated, so that the docs are still in sync with the code.


I find that having the documentation in the same file as the code it interacts with, and having the ability to include formulae and diagrams, helps immeasurably in ensuring that the documentation is updated as the code changes.


This sounds logical, yet I have never seen that in my own code, the code of colleagues at work or other code. If the documentation is not somehow automatically tested against changes in the code, the documentation in the same file is almost always worse (as in "more wrong") than adding the changes in another document. I don't know why it's easier to remember or for others to check for updates in "special" files.


This is focused on people whose job it is to write documentation, but I think it applies generally. A previous company I worked at moved away from Read The Docs to Confluence, and it was terrible. This decision was resisted by much of engineering because we recognised that disconnecting documentation from code would make it worse - and it did.


did it happen because nontechnical stakeholders did not want to read the code?


I've seen pressure to move to Confluence in a different setting because some non-technical users did not want to use git, and (thanks to big company bureaucracy) some of them did not have access to GitHub at all.

That said, GitHub has okay(ish) ways to edit files right from the web UI now, so having to use git should not be a complete blocker any more.


I'm one of the co-founders of Mintlify and we're building a developer-centric documentation platform. The content is written in MDX and all managed through GitHub. Lately we've been building a web UI in conjunction with a GitHub integration so that non-technical folks can contribute easily - I think it's the best of both worlds (but I'm also biased). I do think docs-as-code would be hard if not for these more user-friendly UIs. Although we frequently chat with companies who initially say that our GitHub/code-centric setup is a blocker and they end up onboarding anyway.


An entertaining outcome here is that LLMs may render the docs vs code debate largely moot: as LLM coding capabilities increase and the cost per token plummets, it becomes increasingly possible to simply stop writing code at all, and instead write docs which are 'compiled' each time by a LLM to code which is then compiled normally and the code thrown away. The code can never get out of sync with the docs because it is always generated from the docs, in a way that previous brittle fragile complicated 'generate code from docs' approaches could only vaguely dream of.

To do bug fixes, one simply updates the docs to explain the new behavior and intentions, and perhaps include an example (ie. unit test) or a property. This is then reflected in the new version of the codebase - the codebase as a whole, not simply one function or module. So the global refactoring or rewrites happen automatically, simply from conditioning on the new docs as a whole.

This might sound breathtakingly inefficient and expensive, but it's just the next step in the long progression from the raw machine ops to assembler to low-level languages like C or LLVM to high-level languages to docs/specifications... I'm sure at each step, the masters of the lower stage were horrified by the profligacy and waste of just throwing away the lower stage each time and redoing everything from scratch.


Reacting to this and the previous comment, I've pursued the idea of iterative human-computer interaction via GPT as perhaps not the ultimate solution, but an extant solution to the problem posed by Knuth in literate programming before we had the magic to do it in HCI where the spectrum extends from humans to machines and assumes that the humans are as tolerant of the machines as the machines are of the human (Postel's law), in a frame that Peter Pirolli described here:

https://www.efsa.europa.eu/sites/default/files/event/180918-...

Which is to say that with an iterative, human-computer interaction (HCI), that is back-ended by a GPT (API) algorithm which can learn from the conversation and perhaps be enhanced by RAG (retrieval augmented generation) of code AND documentation (AKA prompt engineering), results beyond the average intern-engineer pair are not easily achievable, but increasingly probable given how both humans and computers are learning iteratively as we interact with emergent technology.

The key is realizing that the computer can generate code, but that code is going to be frequently bad, if not hallucinatory in its compilability and perhaps computability, and therefore the human MUST play a DevOps, SRE, or tech-writer role, pairing with the computer to produce better code, faster and cheaper.

Subtract either the computer or the human and you wind up with the same old, same old. I think what we want is GPT-backed metaprogramming that produces white box tests, precisely because it can see into the design and prove the code works before the code is shared with the human.

I don't know about you, but I'd trust AI a lot further if anything it generated was provable BEFORE it reached my cursor, not after.

The same is true here today.

Why doesn't every GPT interaction on the planet, when it generates code, simply generate white box tests proving that the code "works" and produces "expected results" to reach consensus with the human in its "pairing"?

I'm still guessing. I've posed this question to every team I've interacted with since this emerged, which includes many names you'd recognize.

Not trivial, but increasingly straightforward given the tools and the talent.


> Why doesn't every GPT interaction on the planet, when it generates code, simply generate white box tests proving that the code "works" and produces "expected results" to reach consensus with the human in its "pairing"? I'm still guessing. I've posed this question to every team I've interacted with since this emerged, which includes many names you'd recognize.

My guess would be that it's simply rare in the training data to have white box tests right there next to the new snippet of code, rather than a lack of capability. Even when code does have tests, it's usually in other modules or source code files, written in separate passes, and not the next logical thing to write at any given point in an interactive chatbot assistant session. (Although Claude-3.5-sonnet seems to be getting there with its mania for refactoring & improvement...)

When I ask GPT-4 or Claude-3 to write down a bunch of examples and unit-test them and think of edge-cases, they are usually happy to oblige. For example, my latex2unicode.py mega-prompt is composed almost 100% of edge cases that GPT-4 came up with when I asked it to think of any confusing or uncertain LaTeX constructs: https://github.com/gwern/gwern.net/blob/f5a215157504008ddbc8... There's no reason they couldn't do this themselves and come up with autonomous test cases, run it in an environment, change the test cases and/or code, come up with a finalized test suite, and add that to existing code to enrich the sample. They just haven't yet.


I'd think that some combination of user-facing documentation (for the outside of the software) and requirements specs (for the inside of the software) oughta do the trick.


Not sure why you got downvoted, this is basically the logical conclusion of programming in some sense. Sure, generating code from docs via an LLM will be riddled with bugs, but it's not like the sloppy Python code some postdoc in a biology lab writes is much better. A lot of their code gets to be correct via trial and error anyway.

"Professional" programmers won't rely on this level of abstraction, but that's similar in principle to how professional programmers don't spend their time doing data analysis with Python & pandas. i.e. the programming is an incidental inconvenience for the research analyst or data scientist or whatever and being able to generate code by just writing english docs and specs makes it much easier.

The real issue is debuggability, and in particular knowing your code is "generally" correct and not overfit on whatever specs you provided. But we are discussing a tractable problem at this point.


I believe I remember reading that for merging branches to the Postgres project, you need to update the docs too in order to pass code review. A nice way of doing it, I thought. Pg has some great docs that I have been reading for some years.


This is such an ignorantly engineering centric perspective.

There is value in the larger organization being able to consume documentation and commenting on it and contributing to it.

There is conceptual value in some of these things, but I find it to be overstated and the downsides entirely ignored.

Most documentation systems have a version history.

And most documentation systems are far easier adopted by people other than engineers.

This is the equivalent of pointing out that Figma has x, y, and z benefits and designers are fluent in it, so we should be using that for documentation.


> This is such an ignorantly engineering centric perspective.

I gather this is for technical documentation. For people who either are engineers or who work closely with engineers.

> There is value in the larger organization being able to consume documentation and commenting on it and contributing to it.

Agreed! One benefit of "docs as code" as this person calls it is that you can pile tools and metadata on top of it. People have created excellent tools to comment on and make suggestions to Git pull requests, for instance.

> And most documentation systems are far easier adopted by people other than engineers.

That really will depend. And no matter how good the software is, you're likely going to be locked into one corporate service provider. If you instead treat documentation like you do code, you'll have access to a wide variety of wholly interoperable UI alternatives with no threat of lock-in.


> And most documentation systems are far easier adopted by people other than engineers.

Whew, gonna have to have a hard disagree with you there. DaC is several times - nay, orders of magnitude - less complicated than standing up an S1000D, a DITA, or even a DocBook publishing system. For anyone.

Count the layers of configuration.

S1000D, you have to worry about issue (which has zero compatibility, and the Technical Steering says they have zero intention of releasing any guide to matching the different issues up), you have to worry about BREX, then you have to worry about bespoke DMC schemes, and then you have all the many ways the PDF or IETM build can get built out to Custom Solution X, since the TS/SGs offer absolutely bupkiss for guidance in that department (it's a publication specification that doesn't specify the publication, what can I say?).

The DITA side's not a lot better: you have multiple DITA schemas, DTD customization, specialization, and you have a very very very diverse batch of DITA-OT versions to pick from; then on top of that you have the wide wide world of XSL interpreters, again with very little interplay.

DocBook is probably the sanest of the bunch, here, but we're still going to be wrestling with external entities, profiles, XSL, and whether we're doing 4.X or 5 or whatever is in DBNG.

Not to mention all of this stuff costs money. Sometimes a whole lot of it. Last time I shopped round, just the reviewer per seat licenses for the S1000D system were 13k per seat per year, the writer seats were over 50k per year.

DaC, on the other hand, I want to get re-use and conditionals, so I get Visual Studio Code. I get Asciidoc. I get some extensions. I get gitlab, set up whatever actions I want to use, set up the build machine if I want one, and if I'm feeling adventurous, Antora. I'm literally writing an hour later. I'll probably spend more time explaining to the reviewers what a Pull Request is.


You might be interested in this project I came across a few months ago. This person is trying to build a S1000D style system based on Asciidoc.

https://github.com/lopsotronic/Ascii1000D


One of the main bullet points on the page is automated tests.

How do you write automated tests for documentation? Somehow require that blocks of code have documentation linked to them?


>> How do you write automated tests for documentation? Somehow require that blocks of code have documentation linked to them?

It could be tests to ensure documentation "builds" into all of the desired formats (e.g. web, pdf, ebooks, etc.) correctly.

Some programming languages have the idea of "documentation tests". In Rust, tests that are part of the documentation will run as part of the documentation build:

https://doc.rust-lang.org/rustdoc/write-documentation/docume...


If we treat specifications written in gherkin syntax [0] as documentation, then the cucumber framework can match a line or stanza of gherkin to a test function [1].

I admit that, while I write instructions for how to test specific functionality in gherkin, our company would not countenance publishing a non-narrative description of the system's behavior to our client's employees.

[0] https://www.manning.com/books/writing-great-specifications

  Given a work order xx
  and xx isExpedite
  When an operator prints the jobcard
  Then expect a label in the footer that says Expedite
[1] https://cucumber.io/docs/cucumber/step-definitions/?lang=jav...
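To make the matching concrete, here's a toy sketch (deliberately not the real Cucumber API - the registry, decorator, and step names are all invented for illustration) of how a framework like this maps a line of Gherkin to a registered step function via a regex:

```python
import re

# Toy step registry mimicking how Cucumber-style frameworks
# match a line of Gherkin to a test function via a pattern.
STEPS = []

def step(pattern):
    """Register a step definition under a regex pattern."""
    def decorator(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return decorator

@step(r"a work order (\S+)")
def given_work_order(order_id):
    # Build the scenario context for later steps to use
    return {"id": order_id, "expedite": False}

def run_step(line):
    """Dispatch a Gherkin line to the first matching step definition."""
    for pattern, fn in STEPS:
        match = pattern.search(line)
        if match:
            return fn(*match.groups())
    raise LookupError(f"no step definition matches {line!r}")
```

The real frameworks add scenario state, tables, and reporting on top, but the pattern-to-function dispatch is the core idea.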


Loads of things!

- Making sure example snippets still compile

- Checking if links are dead

- Check for standardized/proper formatting

Basically anything you'd want to enforce manually, try to enforce with CI.
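The first two of those are easy to sketch. Here's a rough, hypothetical CI helper (function names and the backtick-fence convention are assumptions, not any particular tool's API) that compiles the Python snippets in a Markdown file and collects its external links for a dead-link probe:

```python
import re

TICKS = "`" * 3  # built indirectly to avoid literal fence characters here

# Hypothetical checks a CI job might run over a Markdown docs tree.
FENCE = re.compile(TICKS + r"python\n(.*?)" + TICKS, re.DOTALL)
LINK = re.compile(r"\[[^\]]*\]\((https?://[^)]+)\)")

def check_snippets(markdown_text):
    """Return (index, message) for every Python fence that fails to compile."""
    errors = []
    for i, snippet in enumerate(FENCE.findall(markdown_text)):
        try:
            compile(snippet, f"<snippet {i}>", "exec")
        except SyntaxError as exc:
            errors.append((i, str(exc)))
    return errors

def extract_links(markdown_text):
    """Collect external URLs so CI can probe each one for dead links."""
    return LINK.findall(markdown_text)
```

A real setup would also execute the snippets, not just compile them, but even this much catches a surprising amount of rot.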


I believe Rust and Python have the ability to run tests defined in docstrings.


Yes there are certain libraries that can handle this. Essentially asserting that functions documented are valid / return the proper results.

See https://docs.python.org/3/library/doctest.html#module-doctes... as an example.
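A minimal example of the idea (the function here is made up for illustration) - the examples in the docstring run as real tests:

```python
import re

def slugify(title):
    """Convert a page title to a URL slug.

    The examples below are executed by the doctest module:

    >>> slugify("Docs as Code")
    'docs-as-code'
    >>> slugify("  Hello,  World!  ")
    'hello-world'
    """
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # fails loudly if the examples drift from the code
```

If someone changes `slugify` without updating the docstring, `doctest.testmod()` flags the stale example, which is exactly the "docs earn trust" property people are after.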


I wrote about my way of doing that here: https://simonwillison.net/2018/Jul/28/documentation-unit-tes...

Short version: have tests that use introspection (listing functions and classes in a module, iterating over JSON API endpoints in the codebase etc) and then run regular expressions against your documentation searching for relevant headings or other pre-determined structures.
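A bare-bones sketch of that shape of test (not the linked article's actual code, just the general technique):

```python
import inspect
import re

def assert_docs_mention_every_function(module, docs_text):
    """Fail with the list of public functions the docs never mention.

    Introspect the module, then grep the docs for each discovered name.
    """
    missing = []
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # private helpers need not be documented
        if not re.search(rf"\b{re.escape(name)}\b", docs_text):
            missing.append(name)
    assert not missing, f"documentation never mentions: {missing}"
```

It doesn't prove the docs are *good*, only that every public symbol is at least acknowledged - which is usually the part that silently rots.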


In MATLAB, where functions are "always" defined in their own file, there is a tool that checks whether a function's documentation has all the headers conventionally expected by MATLAB's documentation system (e.g. header, usage, examples, see-also links, etc.). So this would be one example, I guess.

But I too would be interested to hear other people's insights who subscribe to this Docs as Code model.


Linters like Vale are pretty common for docs(-as-code).


> Somehow require that blocks of code have documentation linked to them

The Symfony (PHP) framework now does this. Code and config examples in the docs have automated regression tests.


Or just require file/function-level comments. Requiring them to actually be helpful can be managed interpersonally, the same way you'd handle someone slacking off (they are).


Yeah that’s one way. And you can test that docs don’t link to code that does not exist.

Here are some other Good Ideas in a blog post I stumbled upon the other week: https://azdavis.net/posts/test-repo/
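The "docs don't link to code that does not exist" check is easy to sketch too. Assuming a (hypothetical) convention where docs reference code as backtick-quoted dotted paths like `os.path.join`, you can resolve each reference by import:

```python
import importlib
import re

# Assumed convention: docs cite code as `dotted.module.name` in backticks.
REF = re.compile(r"`([a-z_][\w.]+\.\w+)`")

def broken_references(docs_text):
    """Return every cited dotted path that fails to import or resolve."""
    broken = []
    for ref in REF.findall(docs_text):
        module_path, _, attr = ref.rpartition(".")
        try:
            module = importlib.import_module(module_path)
        except ImportError:
            broken.append(ref)
            continue
        if not hasattr(module, attr):
            broken.append(ref)
    return broken
```

Run it over your docs tree in CI and renamed functions can no longer leave dangling references behind.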


Personally I'm a fan of writing your first draft of documentation before writing the first line of code.



Or from the 1990s, "User manual as spec" - https://archive.org/details/rapiddevelopment00mcco/page/324/...

For example, the Excel Basic spec: https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...

> Then I sat down to write the Excel Basic spec, a huge document that grew to hundreds of pages. I think it was 500 pages by the time it was done. (“Waterfall,” you snicker; yeah yeah shut up.)

On the page above "user manual as spec" is "point of departure spec", which would be more like the iterative prototyping style.


I'll often do similar if I'm exposing a library... I usually want to work out the semantics and API for how to use the library before actually writing the interface.


Interestingly enough, my personal philosophy is to write all backend code as if it is a library for my future self.

That is to say, I want to be able to forget everything about a project and still have the resources I need to use the project code as if it were a black box consumable library.


This is a really interesting topic, and it has complexity I didn't consider until I became deeply involved in some similar systems.

For example, in code, you can generally use feature flags, A/B testing, etc. to show different things to different people quite flexibly, but (depending on how the documentation is actually published) you might have very different capabilities.


Lots of DaC shops use feature flags for their conditional content. "Conditional Content" is a huge hobbyhorse in component content, because you need conditionals to re-use chunks. How else could the chunk be made applicable to multiple people? In doculandia, it's more common to run into conditional handling that's inline with the document markup - ifdef/ifeval/ifndef in Asciidoc, some stuff in Jekyll, S1000D applic, DocBook profiles, DITA class/ditaval - but I'm not one hundred percent sold that's a solid practice. Moving conditionals into the document layer might have been a mistake. I dunno! I'd love to kick off a conversation.
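For anyone who hasn't seen document-layer conditionals, here's a minimal Asciidoc sketch (the `internal` attribute and the included filename are made-up examples): content inside `ifdef` only survives the build when the attribute is set, e.g. with `-a internal`.

```asciidoc
// Rendered only when building with -a internal
ifdef::internal[]
This troubleshooting section is for support staff, not end users.
endif::internal[]

// Re-use one chunk across manuals; the condition lives in the document layer
include::shared/safety-warning.adoc[]
```

Whether that condition belongs in the document markup or out in the build configuration is exactly the open question above.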


MUI has always done this (since 2014), but goes one or two steps further than the bullet list at the beginning of the article. Most significantly, API documentation is generated from the code of the components being documented, so is always accurate and up to date.

https://mui.com


Love the concept, hate the article. The article doesn't actually say anything other than "store your docs in git" which... yeah, obviously. You don't need anyone to tell you that being able go to a snapshot of the docs as they were at the time of the commit/release you're looking at is a powerful feature.

But that's not really treating your docs as code, more like "storing your docs in the same place as your code." A system like Sphinx with autosummary and autodoc where the docs are generated from your code and human-readable details like examples are pulled from the relevant docstrings is very much docs as code. Same with FastAPI's automatic OpenAPI generation and automatic Swagger. Pulling the examples section for your functions directly from your tests, now that's docs as code.


Love this approach - DBC, Doc Before Code. Very useful when working with junior developers.


For me, this should be the end goal of the AI pair programming. I write documentation for APIs, data structures, etc., hand it to the AI, and it should crank out functional code that meets the requirements spelled out in the documentation.

We are close, but it's not there yet. I will always need to run a validation test against the code and eyeball to ensure it's not insane. But today, it's clear to me that if ChatGPT/copilot doesn't generate correct code quickly and easily from what I wrote, I didn't understand the problem and couldn't express it clearly.


Well...

I agree with having a shared doc repo so anyone can commit changes/patches to docs, which is, well... not much different from what most wikis offer already - and while useful, wikis prove that's not enough to have good docs with little to no garbage in them...

But I will NEVER "host my docs" on someone else's platform, depending on their services (if you host code/docs as a mere repo, GH and the like are just mirrors of something most devs already have; if you use their features, your workflow becomes heavily dependent on them), and I also never use Markdown as my default choice.


I've thought about this problem set for years, I've written many docs and technical books.

Version control is the best for documentation.

But maintaining it is hard - lots of great comments here.

For anyone interested, I'm working on https://hyperlint.com/ (disclaimer: bootstrapped founder) to help automate the toil around documentation.


From my limited interactions with document-intensive sectors (i.e. legal), I think they’re sorely lacking something like this.

When the same document is edited by two separate individuals and diverges, it is a nightmare to reconcile the two.

I truly wish (i.) Microsoft Word was a nicer format for VCS, or (ii.) Markdown was more suitable for “formal” legal texts and specifications — probably in that order (!)


In recomputer[0] I put the docs sources directly next to the relevant implementation, and also tested the examples.

For this to work well (not just like an API reference), the implementation itself had to be structured well.

[0] https://github.com/xixixao/recomputer


The landing page doesn't really explain anything, except a tangential quickstart into Github hosting.


Simonw[0] and 10 minutes later ChrisArchitect[1] mentioned another HN thread which it looks like dang __just__ merged. But that other post has a different link that is probably the intended one[2].

Though it is quite interesting to see how many comments are responding to (presumably) the title (and thus their interpretation of the title) and also didn't read each other's comments. Because when I hit reply, there were at least a dozen and I began writing this before dang merged.

[0] https://news.ycombinator.com/item?id=40920767

[1] https://news.ycombinator.com/item?id=40920876

[2] https://www.writethedocs.org/guide/docs-as-code/


I think regardless of where the original link linked, it's a very weird choice for a landing page.

"Follow these quickstart instructions!"

"Why, what are we quickstarting?"

"First, make a new repo!"


Yeah I thought it odd too. There's so little context. Feels like you're on the second page of some instruction set for who knows what.


Similar for the PlotAPI docs [1] which are all Jupyter notebooks!

[1] https://plotapi.com/docs/


The DaC debates grow increasingly grim as the overall employment situation worsens across industries. It's pretty hard to get people to react authentically, rather than see the discussion as an attack on how they do their jobs[0].

I'm going to head all this off at the pass, and say instead that DaC[1] is a technological tool for a limited number of business use cases. It's not a panacea, no more than XML publishing in a CCMS (component content management system) was seen as the Alpha and the Omega (and indeed still is by a whoooooole lot of people). I say this as a heartfelt believer in the DaC approach vs a big heavy XML approach.

Your first question - really, this should always be your first question - is, "how do people do their jobs today?". If you work in a broom factory, and the CAD guy reads Word documents, the pubs guys use Framemaker, the reviews are in PDF, and the final delivery is a handful of PDF documents....well, using DaC is going to be a jump.

Now, is that jump worth it? Well, it might be. Your CAD guy might know his way around gitlens, your pubs folks probably have some experience with more complex publishing build systems, and, most important of all, you might have a change tempo that really recommends the faster-moving flows of DaC. If you're going the Asciidoc route, you could even try out some re-use via the `include` and `conditional` directives. But it also could be a disaster, with no one using VCS, no one planning out re-use properly[2], people passing reviews around in whatever format, and PDF builds hand-tooled each time. It's not something you dive into because it's what the cool kids are doing. Some places, maybe even most places legacy industry wise, it's just not going to work. Your task - if your job is consulting about such things - is to be able to read the room real fast, and recognize where it's a good fit, and where you might need to back off and point to a heavier solution.

[0] Big traditional XML publishing systems are also in the crosshairs, as they're quite frankly usuriously expensive, also writer teams have started noticing the annoying tendency of vendors to sell a big CCMS and then - once the content's migrated - completely disappearing, knowing that the costs of migration will keep you paying the bill basically forever.

[1] DaC defined as : lightweight markup (adoc, md, rst, etc), written/reviewed with a general-purpose text editor, where change/review/publish is handled on generic version control (git, hg, svn, etc), and the consumable "documents" are produced as part of a build system.

[2] Which crashes ANY CCMS, regardless of how expensive or how DaC-y it is.


Perhaps there's a market for a WYSIWYG markdown editor that reads/saves to git for non techies so they can keep README.md and similar files updated.


Also a market for putting a more doc-focused UI on git, integrating that too[1]. Pull Requests are basically gold for doc review, but the process of getting to the PR is something that always seems to need a bit of training. Nothing like the training that's needed to grok the basic graph-based change model, and how it's going to work for natural language (ish) documents, but that's a whole other kettle of fish.

[1] GitLens comes pretty close to this, however.


Bonus points for adding a WYSIWYG HTML editor that can work with rendered Markdown and then write its edits back out as Markdown, to Markdown (altho maybe in the worst cases falling back to embedding simple HTML).


I felt myself agreeing hard with this until I read it!

I thought it was gonna be all about ensuring your api documentation is closely coupled with your code. But it's more about using code tools to write docs.

I'm kinda two ways on it, doesn't it depend on what "docs" actually are? (I couldn't find a definition on the page). Wikipedia is a kind of documentation, but tying it to version control tools would massively restrict the number of people contributing and therefore the quality of the docs.

I dunno, maybe I'm missing the point.


As a pretty die-hard enthusiast for this approach - even for legacy, hard industries - let's take a close look at some of the limitations of this approach.

First, code is formal language, and docs are natural language. That's a lot of jargon; what does it mean? It means that the chunks inside of a piece of code are consistently significant; a method is a method, a function is a function. Chunks in a document are, woo boy, good luck with that one. XML doesn't even have line breaks normalized. Again, no matter what the XML priesthood natters about, it's natural language.

A consequence of this is that the units of change are much, much smaller in a repo of code vs a corpus of documents. This, well, can be OK, but it also means that a PR in a docs-as-code arrangement can be frickin' terrifying. What this means is that you have to have a pretty good handle on controlling the scope of change. Don't branch based on doc revisions, but rather on much more incremental change, like an engineering change order or a ticket number.

Your third problem is that the review format will never - can never - be completely equivalent to the deliverable. The build process will always stand in the way, because doing a full doc build for every read is too much overhead for basically any previewer or system on the planet. This is a hard stop for a lot of DaC adopters, as many crusty managers insist that the review format has to be IDENTICAL to the format as it's delivered. Of course, that means when you use things like CIRs (common information repositories) that you end up reviewing hundreds of thousands of books because an acronym changed....but I call 'em "crusty" for a reason. They're idiots.


It can be intimidating. And it probably isn't worth the investment for many projects. Especially not small ones. But https://www.amazon.com/gp/product/1541259335/ is a very compelling example of something in this vein.


Why hoard random sentences. Let go. Your time is more valuable.


I hoard "random sentences" because I see my time as valuable. Instead of processing the same thoughts over and over and concluding the same thing (or worse, the wrong thing and failing as I previously have), I just write things down. Recalling notes on my computer takes seconds at most, where I may have to think about something for minutes or hours to come to the same conclusion.


Why have a door? Remove it. You're going to enter anyway, your time is more valuable.

But seriously; what do you write to have this opinion? Just random, pointless drivel fit for Twitter?

Having some—any—kind of history has saved my ass a lot of work, and time in the process, by simply having either a restore point or earlier reference. Notes that were removed, but helped me remember something relevant or useful at the time, that I couldn't directly remember, but remembered having written at least something about.

Heck, even Office's history in documents has helped with restoring from errors caused by collaboration, or whatever else. And sure, I don't like Microsoft, and a lot of it is their fault for just shitty in-document synchronization, but a lot of it hasn't been, too.


tl;dr: commit index.md to github repo and use github pages to host it.


Is documentation that important? Even when I think it's excellent, like Postgres's, I've only ever needed a few pages of it. Which leads me to think: who's reading the other thousands (?) of pages?

I think the amount of effort you should put into documentation varies wildly on the scope of the project.



