Hi HN! I’m Omar from Mutable.ai. We want to introduce Auto Wiki (https://wiki.mutable.ai/), which generates a wiki-style website documenting your codebase. Citations link to the code itself, with clickable references to each line being discussed. Here are some examples of popular projects:
React: https://wiki.mutable.ai/facebook/react
Ollama: https://wiki.mutable.ai/jmorganca/ollama
D3: https://wiki.mutable.ai/d3/d3
Terraform: https://wiki.mutable.ai/hashicorp/terraform
Bitcoin: https://wiki.mutable.ai/bitcoin/bitcoin
Mastodon: https://wiki.mutable.ai/mastodon/mastodon
Auto Wiki makes it easy to see at a high level what a codebase is doing and how the work is divided. In some cases it has even surfaced obsolete parts of a codebase: a wiki section described code that turned out to no longer matter. Auto Wiki relies on our citation system, which cuts back on hallucinations: each citation links to a precise reference or definition, so the generated text is grounded in the code being cited rather than produced free-form.
We’ve run Auto Wiki on the most popular 1,000 repos on GitHub. If you want us to generate a wiki of a public repo for you, just comment in this thread! The wikis take time to generate as we are still ramping up our capacity, but I’ll reply that we’ve launched the process and then come back with a link to your wiki when it’s ready.
For private repos, you can use our app (https://app.mutable.ai) to generate wikis. We also offer private deployments with our own model for enterprise customers; ping us at info@mutable.ai. Anyone who already has access to a repo through GitHub will be able to view its wiki; only the person generating the wikis needs to pay to create them. Pricing starts at $4 and ramps up in $2 increments depending on how large your repo is.
In an upcoming version of Auto Wiki, we’ll include other sources of information relevant to your code and generate architectural diagrams.
Please check out Auto Wiki and let us know your thoughts! Thank you!
> This provides a register-based virtual machine that executes the bytecode through simple opcodes.
Python's VM is stack-based, not register-based.
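This is easy to verify with the standard `dis` module: compiled bytecode is a sequence of stack operations (push operands, pop-and-combine), with no register operands anywhere in the instruction stream. Exact opcode names vary by Python version:

```python
import dis

def add(a, b):
    return a + b

# Disassembly shows stack ops like LOAD_FAST (push a local onto the
# value stack) and a binary-add opcode that pops two values and pushes
# the result. No instruction names a register.
dis.dis(add)
```

Run this on any CPython and look for `LOAD_FAST`: operands are always pushed onto the value stack before an operation consumes them.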
> The tiered interpreter in …/ceval.c can compile bytecode sequences into "traces" of optimized microoperations.
No such functionality exists in CPython, as far as I know.
> The dispatch loop switches on opcodes, calling functions to manipulate the operand stack. It implements stack manipulation with macros.
No it doesn't. If you look at the bytecode interpreter, it's full of plain old statements like `stack_pointer += 1;`.
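For readers unfamiliar with the shape of such an interpreter: a stack-based dispatch loop that switches on opcodes and bumps a stack pointer with plain statements can be sketched in a few lines. This is a toy in Python, not CPython's actual C code, and the opcode names are illustrative only:

```python
# Toy stack-machine dispatch loop (illustrative, NOT CPython source).
# Each opcode manipulates an explicit operand stack via plain
# pointer arithmetic, the way a switch-based C interpreter would.
def run(bytecode, consts):
    stack, sp, pc = [None] * 16, 0, 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        pc += 1
        if op == "LOAD_CONST":
            stack[sp] = consts[arg]
            sp += 1                       # plain old `sp += 1`, no macro
        elif op == "BINARY_ADD":
            sp -= 1
            stack[sp - 1] = stack[sp - 1] + stack[sp]
        elif op == "RETURN_VALUE":
            sp -= 1
            return stack[sp]

result = run(
    [("LOAD_CONST", 0), ("LOAD_CONST", 1),
     ("BINARY_ADD", None), ("RETURN_VALUE", None)],
    consts=[2, 3],
)
print(result)  # 5
```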
> The tiered interpreter is entered from a label. It compiles the bytecode sequence into a trace of "micro-operations" stored in the code object. These micro-ops are then executed in a tight loop in the trace for faster interpretation.
As mentioned above, this seems to be a complete hallucination.
> During initialization, …/pylifecycle.c performs several important steps: [...] It creates the main interpreter object and thread
No, the code in this file creates an internal thread state object, corresponding to the already-running thread that calls it.
> References: Python/clinic/import.c.h The module implements finding and loading modules from the file system and cached bytecode.
This is kinda sorta technically correct, but the description never mentions the crucial fact that most of this C code only exists to bootstrap and support the real import machinery, which is written in Python, not C. (Also, the listed source file is the wrong one: it just contains auto-generated function wrappers, not the actual implementations.)
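You can see the Python-side machinery at runtime: the finders that implement `import` sit on `sys.meta_path`, and on a stock CPython they are Python classes frozen from `importlib._bootstrap`, not C functions:

```python
import sys
import importlib.machinery

# On a default CPython these are Python classes such as
# _frozen_importlib.BuiltinImporter and
# _frozen_importlib_external.PathFinder, frozen into the binary
# from importlib's Python source.
for finder in sys.meta_path:
    print(finder)

# The same machinery is exposed under importlib.machinery:
print(importlib.machinery.PathFinder)
```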
> Core data structure modules like …/arraymodule.c provide efficient implementations of homogeneous multidimensional arrays
Python's built-in array module provides only one-dimensional arrays.
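A quick demonstration: `array.array` stores one flat, homogeneous sequence, and nested input is rejected outright (multidimensional arrays are NumPy's territory, not the stdlib's):

```python
import array

# One flat, typed sequence of C ints -- the only shape array supports.
a = array.array("i", [1, 2, 3])
print(a.tolist())  # [1, 2, 3]

# Nested lists are not ints, so a "2-D" initializer fails:
try:
    array.array("i", [[1, 2], [3, 4]])
except TypeError:
    print("nested input rejected")
```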
And so on.