Cog: Use Python in your source files to generate pieces of code (2021) (nedbatchelder.com)
158 points by goranmoomin 5 months ago | hide | past | favorite | 83 comments



Cog is great. I use it all the time in my personal projects. It really shines in C++.

Some uses I've found for it:

* Generate long literal lists, with less tedium and fewer bugs:

  snprintf(buf, sizeof buf,
      "%f %f %f %f\n"
      "%f %f %f %f\n"
      "%f %f %f %f\n"
      "%f %f %f %f\n"
      "%f %f %f %f\n",
      /* [[[cog
      for i in range(20):
          cog.outl(f"matrix[{i}]" + (',' if i != 19 else ''))
      ]]] */
      matrix[0],
      matrix[1],
      (...)
* Automatically instantiate classes from a set of files:

  /* [[[cog
    import glob, os
    for path in sorted(glob.glob('src/editor/Panel*.cc')):
      name = os.path.splitext(os.path.basename(path))[0]
      instance = name[0].lower() + name[1:]
      cog.outl(f'{name} {instance};')
  ]]] */
      PanelInspector inspector;
      PanelGame game;
      PanelTextures textures;
      PanelModules modules;
      PanelProject project;
      PanelInput input;
      PanelPerformance performance;
  // [[[end]]]
* Given a list of Python strings, generate a C++ enum, and functions to convert it to and from a string.

* Parse a C++ source file for struct definitions, and generate code to debug-dump them, inspect them in a GUI, save/load them... alright, Clang might be a better tool for this, but Cog will do in a pinch.
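The enum bullet above is easy to sketch: the body of a cog block that, given a list of Python strings, emits a C++ enum plus a to-string helper. This is a minimal illustration, not the commenter's actual code; the names (Color, to_string) are made up.

```python
def generate_enum(name, values):
    """Emit a C++ enum class and a switch-based to-string function."""
    lines = [f"enum class {name} {{"]
    for v in values:
        lines.append(f"    {v},")
    lines.append("};")
    lines.append("")
    lines.append(f"const char *to_string({name} v) {{")
    lines.append("    switch (v) {")
    for v in values:
        lines.append(f'        case {name}::{v}: return "{v}";')
    lines.append("    }")
    lines.append('    return "?";')
    lines.append("}")
    return "\n".join(lines)

print(generate_enum("Color", ["Red", "Green", "Blue"]))
```

Inside a cog block you would call cog.outl() for each line instead of returning a string, but the generation logic is the same.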

All this is stuff that a more expressive language could do without external tools, but even in such a language, you won't usually see what a macro is being expanded to without some sort of tooling. Cog puts it right there in the file for you, which I often appreciate.


Great examples.

Is source generation becoming "trendy"? Microsoft have recently added more of it to C#, in order to support their AOT efforts. It's a bit infrastructure-heavy as it seems to be intended for point support for specific things, rather than a general purpose "write code to generate code" tool. But I wonder if there would be a market for such a thing. The architecture currently does not allow you to edit user source code, just add more code. I.e. it's not a preprocessor, it just has readonly access to the AST.

e.g. the regex generator https://learn.microsoft.com/en-us/dotnet/standard/base-types...

(Reference blog post https://andrewlock.net/creating-a-source-generator-part-1-cr... )


I expect wider convergence on better build tools like CMake has made it relatively easier to perform custom build time logic like code generation without disrupting consumers of projects by changing or otherwise adding complexity to the build process.


In contrast to this, "comptime" is one of Zig's innovations which completely removes the need for preprocessors and code generators without leaving the language you're programming in.


I look over at Zig with envy every time I have to write and, inevitably, look up how to write a Rust macro.


Will Rust ever get Zig-style comptime support? Is that something that’s in the pipeline?


Surprised it's not more well-known but Visual Studio (MSBuild really) has had support for this kind of code generation for a while, it's called T4 (probably in reference to the GNU M4 utility).

It can be used to generate files on save as well, so as the template is edited, it updates the output files automatically.

[0] https://learn.microsoft.com/en-us/visualstudio/modeling/desi...


Just FYI, m4 was in the original Unix in 1977, years before the GNU project began. (It was invented by Kernighan and Ritchie.)

Thanks for the VS tip.


I needed code generation for C# and looked at T4, decided it was less capable than M4 (and obviously less portable, a big plus for Microsoft in tying you into their environment) and went with python instead. That was a while back, maybe T4 has improved but I still wouldn't use it.


I get it but I don't get it. I have seen Cog before and respect Ned, so I assume there are sound uses, but the demo versions are trivial and I struggle to see how I could use it for more complex cases.

My thoughts are that the Python code will be most effective if it can read config or access chunks of data outside the source (i.e. auto-generating getter/setter style stuff). I am trying to see how much is dangerous and how much is trivial or better handled pre-commit.


Having written something similar ( https://pypi.org/project/gcgen/ ), it is sometimes hard to provide examples without being overly specific.

However, some "meta" examples:

* If you have an authoritative source - a protobuf/OpenAPI/some other structured specification - then this is helpful (if there isn't a specific generator already).

* Whenever you're knee-deep in meta-classes in Python, generating code from annotations in Java or similar - then this might actually be better - the code will likely be simpler and therefore easier for teammates to maintain as well.


As a concrete example, I would definitely use Cog for generating this kind of code: https://github.com/libjxl/libjxl/blob/main/lib/jxl/dct_scale...

Note that the current source code already contains the Python code used to generate them. The code is there for future use, but it would be too much to have an additional build step and Python dependency just for that particular file, considering that it will rarely change. Cog would be very useful in such situations.


It would be too much to have an additional build step and Python dependency for a file that never changes, but not too much to add cog and Python as dependencies and then slow down the build for all users?


Cog can be run on demand, unlike build steps. Only some subset of developers that would touch those files need to have Python installed.


> My thoughts are the python code will be most effective if it can read config or access chunks of data outside the source

Agreed. I've used jinja2 quite a bit for templating C with structured data. It is very effective. Ned writes that he also considered templating engines but was not satisfied when trying to implement more complex code generation inline. I feel like I understand Ned's thought process here but I strongly disagree with his conclusion unless it's for personal use only. Cog is subject to the lisp curse.


For my work I often run a lot of queries with a similar structure, but not repetitive enough that I can just have one query. What I normally do is paste the query template in a Python REPL and write some code to generate all my queries, which I then output to a file that I can call before ending work and leave running through the night. This would allow me to do the same thing, but in a more reproducible way that I can, for instance, share with my teammates.
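The pattern described above can be sketched in a few lines: fill a query template from a list of parameters and collect the batch. The table and column names here are hypothetical, purely for illustration.

```python
# Hypothetical query template; region/day values stand in for whatever
# varies between the otherwise-similar queries.
template = "SELECT count(*) FROM events WHERE region = '{region}' AND day = '{day}';"

queries = [
    template.format(region=r, day=d)
    for r in ("us", "eu")
    for d in ("2024-01-01", "2024-01-02")
]

# Write the batch to a file to run overnight.
print("\n".join(queries))
```

Wrapped in a cog block, the same loop would live next to the queries it generates, so teammates can see and rerun it.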


I have a

  target docs:
      ${VENV_PYTHON} -m cog -r ./README.md
and README.md has cog statements in it which fully exercise my CLI's help documentation and every subcommand's help, and print them out as source blocks.


Related:

Cog: Use pieces of Python code as generators in your source files - https://news.ycombinator.com/item?id=19566384 - April 2019 (72 comments)


> If dedent is True, then common initial white space is removed from the lines in sOut before adding them to the output. If trimblanklines is True, then an initial and trailing blank line are removed from sOut before adding them to the output. Together, these option arguments make it easier to use multi-line strings, and they only are useful for multi-line strings

I've recently learned Julia, and learned how multiline strings work there: a normal " allows multiline strings, while """ automatically removes the first blank line and all indentation.
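Python's triple-quoted strings keep indentation, which is why cog's dedent and trimblanklines options exist; the stdlib equivalent looks like this (a sketch, not cog's actual implementation):

```python
import textwrap

raw = """
    line one
    line two
"""
# dedent removes the common leading indentation; strip() drops the
# surrounding blank lines, much like cog's trimblanklines option.
clean = textwrap.dedent(raw).strip()
print(clean)
```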


It's quite common to use scripts to generate code.

My experience is that this is best kept as standalone scripts executed only when needed with the resulting generated code committed to source control (scripts are also of course under source control).

Code generated on the fly during the build process and never committed to source control is a pain, IMHO.


I really don't want any generated code committed to my source control!

By its nature, this generated code is repetitive. People will go and manually change a thing in the generated code in ten thousand places. Then, when I have to make a change, I have to:

1. Either manually change my thing in ten thousand places (which is a pain)

2. Or use the code generation for my change and also make sure I don't break the previous peoples' manual changes. I don't think this is possible.

> Code generated on the fly during the build process and never committed to source control is a pain, IMHO.

Maybe a pain to debug, but much less of a pain to maintain/change!


Why would you allow manual changes to autogenerated code? Isn't the whole point not to manually change it?

Autogenerated code is not a pain to maintain and change as long as you have a sensible process...


How do you prevent people from manually changing any code which is checked in? Have a CI step run the auto-generation and fail if there's a diff?


CI step, code review. Ideally those steps gate submission to master repo.

But even just a big telling-off and a revert after the fact works if those good practices are not followed (as said, those files should have a big warning as a header, so there is no excuse). Commits should at the very least be monitored manually, and they are obviously visible anyway.


Cog can add a checksum to the generated output: https://cog.readthedocs.io/en/latest/running.html#checksumme...


I see some replies pushing back against committing generated source code, but I generally agree with you. There's a principle of avoiding committing generated code, but sometimes it makes sense.

A few examples I had recently:

- Parts of a website are generated by converting Markdown to HTML using a Rust script. The Markdown side rarely changes; committing the generated HTML allows designers to clone the website and work on it without having to install a Rust toolchain.

- API Clients/type declarations for a given language are generated. Having them committed allows to send GitHub/GitLab links to the source when discussing code; it also allows to view a diff of the output when updating the code generator.

- Flash files used for an emulator test suite are generated, but also committed, since setting up a compiler locally is pretty hard (similar to the first point).

So my use cases mostly arise when I need to reduce friction for people joining.

For more restricted projects, I avoid committing generated code; but I still tend to reach to code generation over simpler preprocessors occasionally. The main reason is that dumping the generated code to a file instead of doing it all in memory provides better support for IDEs and debuggers. It depends on the language of course, in my case it was Haxe where you get way better support with code gen rather than built-in macros.

Of course there are some downsides such as keeping track of dependencies/refreshing appropriately and it requires more tooling; but sometimes it makes sense. Preventing people from editing generated code is a non-issue in practice: there are warnings indicating that the code was generated, where it was generated from, and how to generate it yourself if needed.


Committed generated code can also be a pain. Now you invite manual changes to the generated code, which is probably a pain to maintain if you also want to maintain and reuse the generation script in parallel.


No, committed generated code does not invite manual changes. This should obviously be prevented and part of your standard software development process and guidelines. That's why autogenerated source files usually have a big warning in their headers, and you would kill any manual changes in code review with a telling off of the dev, anyway. Very simple to enforce and maintain.

At least you have the whole compiled code base under source control, you can guarantee that you have access to the exact source files used for previous builds, and debugging is generally easier. You have the generated code subject to your normal code review process and everyone can easily inspect it. You also have a separation of concerns and source files; embedding Python scripts into, say, your C files complicates everything for little, if any, benefit.


You can use the "cog --check" command in CI to prevent this from happening.

On every commit, your tests can rerun the code generator and confirm that it makes no changes to the code. If it does, the tests fail.

That way if anyone manually edits generated code (which they should not be doing) it will be caught by CI.


I never expected to see this here! I used this in my earlier programming days. A great idea that sadly never caught on enough to allow for tooling integration with popular editors/IDEs. I still love the concept and the source code is super readable. Makes me want to revisit it…


I've read technical books that say the code samples are actually executed to avoid typos. I wonder if you could use cog for that. Probably there are LaTeX packages that can do it, but what if I'm writing in markdown?


This is a perfect use case for org[1] and org-babel[2] and actually one of the reasons I've switched from vim to emacs.

I wonder whether there's something similar that executes code fences in markdown? After a cursory search I found markdown-exec[3], which seems to be a more editor-agnostic solution, although I don't know whether it can compete with org-babel in terms of language support and being maintained.

[1] https://orgmode.org/

[2] https://orgmode.org/worg/org-contrib/babel/

[3] https://github.com/pawamoy/markdown-exec


> I wonder whether there's something similar that executes code fences in markdown?

I wrote a script to do this called panpipe: https://github.com/warbo/panpipe

It uses Pandoc, so it supports many formats (I've personally used it with Markdown code fences and with HTML <code> elements). The idea is pretty simple and powerful: if a code block has an attribute/annotation like `pipe="foo"` then that value is executed as a shell command, and the contents of the code block gets piped through it and replaced by the stdout.

This supports running basically all programming languages, by giving their interpreter as the value of `pipe`. I use this for my web site ( chriswarbo.net ) to execute code in PHP, Haskell, Racket, Bash, Coq, etc. It can also run things indirectly, e.g. if you want to compile and run C you could do something like this:

  Here is my C program:

  ```{pipe="tee myfile.c"}
  int main() etc etc.
  ```

  Here is its output:

  ```{pipe="sh"}
  gcc -o myfile myfile.c 1>&2 || exit 1
  ./myfile
  ```
A more scalable alternative, for documents which will run many such snippets, is to write a helper script to use as our command, e.g.

  ```{pipe="cat > runC && chmod +x runC"}
  #!/usr/bin/env bash
  set -e
  cat > myfile.c
  gcc -o myfile myfile.c 1>&2
  ./myfile
  ```

  Here is the output of my C program:

  ```{pipe="./runC"}
  int main() etc. etc.
  ```
I've also written a companion script called panhandle: https://github.com/warbo/panhandle

Panhandle allows the contents of a code block (annotated with class `unwrap`) to be spliced into its containing document. This is useless on its own (since we could just write that part of the document directly, rather than putting it inside a code block); but it allows us to generate markup using panpipe, and insert it into the document. For example, the following is a HTML page containing a <p> element and a <code> element; panpipe can execute the contents of the <code> element, which generates HTML for an <img> element, but that HTML is stuck as the text of a <code> element. Panhandle lets us "unwrap" it, to become a real <img> element in the document:

  <p>Here is a checkerboard:</p>

  <code class="unwrap" pipe="sh | pandoc -f html -t json">
   printf '&lt;img src="data:image/png;base64,'
   {
    echo "P1 101 100"
    for N in $(seq 10100); do echo $(( N % 2 )); done
   } > checker.pbm
   convert checker.pbm checker.png
   base64 -w0 < checker.png
   printf '" /&gt;'
  </code>
Note that panhandle needs to parse the content of a code block in order to unwrap it; to keep things simple and unambiguous it only supports Pandoc JSON format (a serialisation of Pandoc's internal AST). Since the above shell code generates HTML, we pipe it through `pandoc -f html -t json` to get the required JSON format.

This whole approach is described at http://www.chriswarbo.net/projects/activecode


Oh wow that looks really neat. I'm using Pandoc a lot anyways, so this might fit nicely into my workload. Thanks for sharing!


Pandoc even poses codeblock execution as an exercise in their custom-filter documentation: https://pandoc.org/filters.html The python package panflute is also really nice if you don't want to play around with haskell or the AST JSON directly.


This approach is sometimes referred to as literate programming.

Emacs's org-babel package implements it for a variety of languages but only within org-mode files (.org).

You might be able to find a 3rd party package/script that works for you or you might find it easy to write/modify existing scripts depending on your particular use case.


I've created Codebraid (https://codebraid.org/) for writing in Markdown. It makes inline code and code blocks executable, and includes support for Jupyter kernels. It uses Pandoc internally to parse the Markdown. There's also a VS Code extension that provides a live preview with scroll sync.

For the LaTeX case, I created PythonTeX (https://github.com/gpoore/pythontex) years ago, but haven't done much with it recently.


I don't think that would fall within the scope of a utility like cog.

I wrote a minimal tool long back which I used to extract parts of code from my test suite as JSON, which I could then use within my documentation generator to embed these snippets as examples.

The examples then are guaranteed to be correct because they are quite literally pulled from the project's test suite. The util is written in TS but should work with any language that supports C- or Python-style comments.

https://github.com/lorefnon/snippet-collector


I discovered Cog a couple of years ago and I use it on dozens of my own projects now. It's a really neat thing to have around.

One trick I use a lot is including the output of the "--help" command in the README files for CLI tools that I release.

Here's an example: https://github.com/simonw/ospeak/blob/main/README.md#ospeak-...

View source on that file and you'll see this:

    ## ospeak --help

    <!-- [[[cog
    import cog
    from ospeak import cli
    from click.testing import CliRunner
    runner = CliRunner()
    result = runner.invoke(cli.cli, ["--help"])
    help = result.output.replace("Usage: cli", "Usage: ospeak")
    cog.out(
        "```\n{}\n```".format(help)
    )
    ]]] -->
    ```
    Usage: ospeak [OPTIONS] [TEXT]

      CLI tool for running text through OpenAI Text to speech

      Set the OPENAI_API_KEY environment variable to your OpenAI API key to avoid
      using the --token option every time.

      Example usage:

          ospeak "Everyone deserves a pelican" --voice alloy -x 1.5

    Options:
      --version                       Show the version and exit.
      -v, --voice [alloy|echo|fable|onyx|nova|shimmer|all]
                                      Voice to use
      -o, --output FILE               Save audio to this file on disk
      -x, --speed FLOAT RANGE         Speed of the voice  [0.25<=x<=4.0]
      -s, --speak                     Speak the text even when saving to a file
      --token TEXT                    OpenAI API key
      --help                          Show this message and exit.

    ```
    <!-- [[[end]]] -->
I can now run "cog -r README.md" any time I want to update that block of text in the README.

I also have this line in my GitHub Actions test.yml workflow: https://github.com/simonw/ospeak/blob/73a449d3f006737e72e7a0...

    cog --check README.md
That way my tests will fail if I make a change to the help and forget to update the README.

I like reflecting my help output in my documentation mainly because it's a useful way to review if the help makes sense without having to actually run the command.

I maintain these much larger pages using Cog in a similar way:

- https://docs.datasette.io/en/stable/cli-reference.html

- https://sqlite-utils.datasette.io/en/stable/cli-reference.ht...

- https://llm.datasette.io/en/stable/help.html

I wrote a bit more about this pattern here: https://til.simonwillison.net/python/cog-to-update-help-in-r...


+1 for enumerating CLI options in markdown files. I commonly forget to update my READMEs when making changes to the interface and cog has been a really helpful utility to take care of this for me. I include this pattern in pretty much every tool I make nowadays, honestly probably inspired by your blog post on the topic :)

This year I also realized I could use it for my Advent of Code repository to sync my yearly progress tracker with a summary table in the base directory's README: https://github.com/sco1/adventofcode/commit/9b14483fc3d9ef58... .


Ok, just wanted to say I had no idea click's CliRunner was a thing - going to try that today for my own test suite

but this shows me a style I did wonder about - cog is importing other Python code (inside the package?) to do its thing. Makes more sense than attaching to C++ code, I think.


I built/use something similar using S7 Scheme, inspired by Naughty Dog's use of PLT Scheme/Racket.

I really like that it's easy to embed S7 into a single binary that's easy to build and distribute.

Sadly, I mostly built it for my own usage and haven't documented it well.

https://github.com/ScatteredRay/s7pp


I used to do something similar with Jinja2 templates and Ansible. This combo was really good for augmenting bash scripts. Also, the fact that you can create Ansible modules in Python is perfect for when you need more flexibility for data transformation without resorting to obscure j2 incantations.


Wrote something similar, but while you can mix generated and hand-written code, the actual python code for that is moved out of the file.

https://jwdevantier.github.io/gcgen/


this looks nice, but I feel like mixing the python into the source is actually the reason I like cog so much. It feels weird at first, but having it all inline and checking it in together is simpler to work with IMO


I used Spyce for a while years ago (https://spyce.sourceforge.net/). It stopped being maintained and I abandoned it, but it had a number of ideas which I think work well. It changed Python so that semicolons and square brackets were used instead of indentation; one could also use straight Python. It generated HTML but I'm sure it could be adapted easily.


I'm not sure if I'm missing anything, but if you ever need something like this, it generally means that you have a serious issue in your design.

I can understand templating languages (e.g. jinja2) which add dynamic logic using a high-level language (e.g. Python) to a dumb markup language (e.g. HTML), but I don't see the need to generate code for another high-level language.


It's mainly useful for automating chunks of documentation.

Markdown files on GitHub for example don't support any form of templating language - but Cog is a template language which can be hidden away in <!-- comments --> in your Markdown, and then executed using a separate command:

    cog -r README.md
So it's a templating language for files that don't usually support templating languages, but that do support comments.
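As a minimal sketch of that pattern (the tool name "mytool" is hypothetical; the cog block would invoke whatever produces the text you want embedded):

    <!-- [[[cog
    import cog, subprocess
    result = subprocess.run(["mytool", "--help"], capture_output=True, text=True)
    cog.out("```\n{}```".format(result.stdout))
    ]]] -->
    ```
    ...generated help text appears here...
    ```
    <!-- [[[end]]] -->

Running "cog -r README.md" replaces everything between the markers with fresh output, while the comments (and the generating code) stay in the file.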


A metaprogramming technique like source-code generation is not a serious issue in any design. It's especially helpful if the source of truth in your design is external to the current code base. You can generate:

- documentation from source-code files, some CLI frameworks are offering this (eg. oclif[0])

- types from schema files, and schemas from types (eg. quicktype[1])

- API clients and servers from specifications (eg. openapi-generator[2])

...these are all repetitive tasks that can be automated.

That being said, I wouldn't personally generate it in the build step, but rather introduce an additional generate step for it and sync the generated output with the VCS of choice. I recently wrote about it a bit here[3].

[0]: https://github.com/oclif/oclif#oclif-readme

[1]: https://app.quicktype.io/

[2]: https://github.com/OpenAPITools/openapi-generator

[3]: https://blog.whilenot.dev/posts/typescript-source-code-gener...


Exactly. I've used something like this before when writing a lexer and a parser and it was really helpful.


Before the introduction of generics quite a few projects used source code generation in Go projects, to make them manageable.

(Lisps, OTOH, have macro capabilities that do the same thing elegantly behind the scenes.)


It is not for dynamic logic; it is more like a code generation tool. The inlined output means that you don't have to rerun the embedded code every time, which is useful in many cases where templating as part of the build process would be inadequate or cumbersome.


I would often prefer this kind of code generation to magic reflection or dynamism at runtime, especially if the runtime magic comes with a performance cost. For example, in JavaScript you can intercept any method call or object access at runtime using Proxy, or in Ruby using method_missing, but such code is often hard to maintain and understand.

If you statically generate a bunch of methods from a schema, it’s very easy to just read the output source, and the JIT is much happier with the code.
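The static-generation idea above can be sketched in a few lines: emit plain accessor methods from a schema at build time instead of intercepting attribute access at runtime. The schema and class names here are made up for illustration.

```python
# Hypothetical schema mapping field names to type annotations.
schema = {"name": "str", "age": "int"}

def generate_accessors(cls_name, fields):
    """Emit a class with one plain getter per schema field."""
    out = [f"class {cls_name}:"]
    for field, typ in fields.items():
        out.append(f"    def get_{field}(self) -> {typ}:")
        out.append(f"        return self._{field}")
    return "\n".join(out)

print(generate_accessors("Person", schema))
```

The generated output is ordinary code: easy to read, easy to step through in a debugger, and free of any runtime interception machinery.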


Sometimes it's easier to just codegen than to figure out the performant way to do something like read some spec file and generate an API from it. Especially if the generation target is something like Java or C++.

Code generation is also nice for things like generating Typescript definitions for your frontend to interact with a backend API. The value is partly derived from Python being a language with great introspection tools, so the codegen code itself is easy.

It's not so much that this is the principled way of doing things, but it might be a way that is good enough and gets you where you need to be, in a slightly reproducible fashion.


then you have not gone high enough in abstraction/interoperability within some code food chain.

I had Python code that generated other Python code, which then generated C (and C++) for wrapping CORBA... 20 years ago.

Like 1000 LOC generated 5k, which generated about 100K of C. No way to make that by hand and keep it consistent, unless one employs a large number of people just for the sake of it.


Can you see the need for it when the high level language is overly verbose or requires boilerplate? Or simply very limited in terms of generics etc? Or maybe you're mixing languages and you want to generate interfaces between them.

We use something like this (our own version, not in Python) for the above cases. We generate VHDL and C. It's incredibly powerful.


There’s definitely use cases for example API client generation.


The most common use I can think of is to align your classes to the schema of your database. Entity Framework uses that (the C# equivalent is called T4, the Text Template Transformation Toolkit).


Or one’s been programming in C++ where they use an ungodly amount of preprocessor and templates.. cog solves that elegantly.


At least with preprocessor and templates the compiler errors point to your actual source files, not to some auto-generated mess.


Cog generates inline code, the compiler will still point to the code unlike the preprocessor.


I used m4 to that effect to generate some set-like and map-like strongly-typed classes for a j2me project. Must have been 15 years ago.

For C and C++ in particular, X macros are another technique that can be used to generate boilerplate nicely. I use that quite a bit to this day, even in C++, where it's not needed as much due to templates.


Interesting. I know IDE snippets are not the same, but I wish they could do more than token replacement and had features that enable us to do the same.

Neovim has a few snippet plugins that are powerful enough (after setup) to do similar code-gen, but I think LSP servers might be more suitable for doing this (based on a DSL, maybe) in an IDE-agnostic way.


Wow. Cog is almost 20 years old now. Still cool today. Perhaps we should try putting a GPT prompt into the Cog comment these days.


I wrote a thing like this once, only it would find any matching multiline regex, send it to any external command you wanted, and substitute the result. That way you could replace any text with any kind of processing program depending on your needs. Too bad I wrote it at work, was probably ditched when I left.


I built a feature like that which works using the Python AST recently:

https://github.com/simonw/symbex/blob/main/README.md#replaci...


Was it only a temporary fix? Then no, nothing is more permanent than a temporary fix.


Feels like one of those 'neat on paper, bad in practice' kinds of ideas. I can imagine it being pretty bad for code readability if not used judiciously, and it adds an extra layer of complexity and troubleshooting when things go wrong.

Maybe I'm just a purist.


Have you read the other comments that show examples like inserting help output into your docs?


Something in the article I don't understand:

> Cog lets you use the full power of Python for text generation, without a templating system dumbing down your tools for you.

I would have said that Cog is a templating engine. What distinction is the author drawing here?


Cog seems to be the type that just evals normal Python code, while many other template engines replicate code structures. For example, in Jinja2 you would write something like "{% for i in data %}{{i}}{% endfor %}", which looks kinda like Python, and utilizes Python objects, but is not real Python code.


Good code at the expense of "syntactic noise" in templates: a suitable tradeoff for the typical use case of short and precise templates with advanced logic backing them (e.g. the DCT coefficient tables linked in another comment) rather than large outputs with simple logic to keep the templates readable (e.g. a dynamic web page, the typical use of Jinja).


I think the difference is that some templating engines have a DSL that lets you do things like loop over arrays, and that Cog is different from those.


Many if not most templating engines do not allow arbitrary code because of security implications.


I use shell in the same way for static site generation: https://mkws.sh/docs


I think people really just want a better metaprogramming system


So it's a pre-processor? Any reason they don't call it that?


How is this different from literally any other templating language?


See comment here: https://news.ycombinator.com/item?id=38623182

The clever thing about Cog is that it's a templating language which is designed to be hidden in comments inside files (such as Markdown) that don't usually support an embedded templating language.


Normally code generation via templates is part of the build process, but this is not always ideal. For example, some templates are simply too slow to run, or need a specialized dependency which is otherwise not required. Or a particular piece of code can be written manually, but you want to document how it could have been generated automatically. Cog templates are intended to persist, so the output can safely replace the input and be checked directly into the repository without any additional dependency, making them useful in such situations.

(This concept is not that unusual, by the way. CPython itself has a preprocessor called the Argument Clinic [1] which was in fact inspired by Cog.)

[1] https://devguide.python.org/development-tools/clinic/


So basically PHP, but it preserves the code used to generate the template. Which is completely fine; good idea, well executed!


I would want this for C, C within C. You can just write C code in a C file that runs at "compile time" to create more C code.

And I want it to be built-in to Prettier, so all this happens when I press Ctrl+S to save the file.


From what I hear, perhaps Zig could be something for you? Of course, that's no use if you just want to keep writing C but perhaps worth a try?



