Creative Code Management (bit-101.com)
62 points by ingve 51 days ago | 19 comments



With canvas-sketch[1] you hit a keystroke when you see a generation you like, and it runs git commit, captures a print-resolution output, and exports it as a file tagged with the git hash and an optional PRNG seed suffix. This way you can reproduce a generative artwork from years ago exactly, without the hassle of manual git/shell commands, and you can browse and find these outputs later with any file/image viewer.

[1] - https://github.com/mattdesl/canvas-sketch/blob/master/docs/e...
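The naming scheme is easy to approximate outside canvas-sketch too. A rough sketch (the function name and .png extension are my own, not the library's):

```shell
#!/usr/bin/env bash
# Build an export filename tagged with the current short git hash and
# an optional PRNG seed, e.g. "a1b2c3d-42.png". Illustrative only.
export_name() {
    local seed="${1:-}"
    local rev
    rev="$(git rev-parse --short HEAD)"
    printf '%s%s.png\n' "$rev" "${seed:+-$seed}"
}
```

With the hash in the filename, `git checkout <hash>` gets you back to the exact code that produced any given image.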


That's much the same as adding the git hash to a Docker image's metadata. You can always check out that exact version, and you can easily diff it against the current one.


Hi Matt, nice seeing you here. I attended your talk for the MIT Media Lab; I enjoyed it. (:


I did not know what creative coding was and would have appreciated some context-setting in the blog post.

If anyone else is unclear on what creative coding is: https://en.wikipedia.org/wiki/Creative_coding

Whenever I have done anything similar to creative coding (e.g. art generation with neural networks) there are three kinds of information that I need to store - code, parameters, and results.

Results could be anything from utility function values to S3 prefixes under which generated images were stored. They aren't always suitable to commit to git - do you really want to commit large, generated images/videos to your git repo?

Git commits are a great way to refer to different points of the code base, but I like to have parameters (and corresponding results) stored somewhat orthogonally to git commits, because the same commit of the code base might apply to multiple different runs.

The solution I've converged on is to commit a CSV file alongside the code with the following structure for its rows: <git_commit>,<parameter_1>,...,<parameter_m>,<result_1>,...,<result_n>

The only tricky part is running "CSV migrations" when you add new parameters. But if your rerun script only uses parameter names (specified in the first row of the CSV), this isn't a real problem.
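As a sketch of that layout (the file name and the example columns are made up for illustration, not a fixed schema):

```shell
#!/usr/bin/env bash
# Append one run record to a CSV log whose first row names the columns:
# <git_commit>,<parameter_1>,...,<result_n>
log_run() {
    local file="$1"; shift
    if [ ! -f "$file" ]; then
        echo "git_commit,seed,width,s3_prefix" > "$file"  # example header
    fi
    local IFS=','
    echo "$(git rev-parse --short HEAD),$*" >> "$file"
}
```

A rerun script that looks columns up by name from the header row is what makes adding a parameter later a cheap "migration".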


Maybe it makes sense to integrate the editor’s auto-save with git, so that every file system save call is paired with a git commit (referenced by time stamp). Whenever the user manually snapshots a commit with a message/tag (or after some period), the intervening commits could be compacted. This also has the advantage of enabling a seamless undo experience between the small changes (using something like tree undo) and big changes (using the git commit tree).
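One way to sketch that outside any particular editor (the hook name, commit messages, and tag-based compaction are all invented for illustration):

```shell
#!/usr/bin/env bash
# Hypothetical editor save hook: commit every file-system save with a
# timestamp message, then compact the intervening commits later.
autosave() {
    git add -- "$1"
    git commit -q -m "autosave: $(date '+%Y-%m-%d %H:%M:%S')"
}

# Squash everything since <ref> into a single commit with message <msg>.
compact_since() {
    git reset --soft "$1" && git commit -q -m "$2"
}
```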


Honestly I'd really like something like this even for regular application development. I think it would be a good idea though to keep a backup branch for compactions in case you want the previous history right after compacting.


That was similar to my thought. In Vim, I have persistent undo and Undotree set up so that I can go back to the state of a file at any point in my process, so long as it hasn't been changed by any other program. I wonder if there's a way to tie that and git together? Maybe checking in the undo file?


A less robust but more improvisational approach I go for is using the Vim undo tree [0] and making sure to set an undodir and undofile so that the history persists. The Gundo plugin [1] gets an honorable mention.

[0] https://vim.help/32-the-undo-tree

[1] https://docs.stevelosh.com/gundo.vim/
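For anyone setting this up, the relevant vimrc lines look roughly like this (the undodir path is just an example):

```vim
" keep undo history on disk so it survives restarts
set undofile
set undodir=~/.vim/undodir   " directory must exist; create it first
```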


Clear writing, nice tight shell script to streamline git tags on the command line, I can use this.


This shell script presents a number of relatively serious safety issues; e.g.,

* no quoting of the arguments

* no error handling; if any of the git commands fails, the script will drive on and probably do the wrong thing; should at least use the "errexit" option
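A hardened skeleton along those lines (the snapshot function is my own stand-in, not the article's actual script):

```shell
#!/usr/bin/env bash
# Stop on the first failed command, treat unset variables as errors,
# and quote every argument that came from the user.
set -euo pipefail

snapshot() {
    local msg="$1"
    git add -A
    git commit -m "$msg"
    git tag -a "snap-$(date +%s)" -m "$msg"
}
```

With errexit set, a failed `git commit` (say, nothing to commit) stops the script before it tags the wrong state.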


That's an interesting approach. When I care to save every iteration of something I'm working on, I usually just build this functionality into the render script itself. For example, something I was working on last year produced renders that were around 20 or 30 minutes long (I primarily work with audio), and I decided I wanted to save every permutation of the script as I worked on it, so I just added a few lines at the bottom to copy the script next to the render. A few hundred lines of Python isn't a big deal to save when I'm already saving hours of audio renders.

I don't usually do this though... after 15+ years of doing generative audio work saving every iteration feels a little like hoarding.


I agree, saving every iteration is overkill. I've done it on projects before too and it's too much. I just want an easy way to say, "I like this and I might want to go back to it later"


I have this problem too. Another aspect of creative coding that often pops up is how binary/static assets get managed in version control. I've resolved that by setting .gitattributes so that mp3/wav/wv files are treated as binary, so the diffing doesn't get wild, but it's still suboptimal.
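For reference, the .gitattributes lines for that look like (extensions taken from the comment above):

```
*.mp3 binary
*.wav binary
*.wv  binary
```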


One thing you can also try is keeping the number of versions stored in git small by using rebase, particularly for WIP assets.


I'm not at all knowledgeable about how rebase works. Can you elaborate on this approach so I can research it in more depth myself?


So if you do an interactive rebase, it's pretty easy to review the past few commits and either keep each commit or squash it in with the previous one.
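For a scripted, non-interactive version of the same squash, you can rewrite the rebase todo list automatically (GNU sed syntax; on macOS, `sed -i` needs an empty '' argument):

```shell
#!/usr/bin/env bash
# Squash the newest commit into its parent by rewriting the rebase
# todo list: line 2's "pick" becomes "squash". GIT_EDITOR=true accepts
# the combined commit message without opening an editor.
squash_last_two() {
    GIT_SEQUENCE_EDITOR='sed -i -e "2s/^pick/squash/"' \
    GIT_EDITOR=true \
    git rebase -i HEAD~2
}
```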


Dumb question: doesn't Git save the entire binary rather than just the changed parts? Is there a better VCS for binaries?


Generally, you'd ignore the binary or any compilation artifacts in your .gitignore. You might also want to ignore the images/animations you create and save them elsewhere. I think there's another comment here that talks about strategies for checking in binary files.


Wow, I didn’t even know to use ‘git tag’. Have to go change my commit hook now



