Hacker News | new | past | comments | ask | show | jobs | submit | clx75's comments

1. Langsam - https://github.com/cellux/langsam

This is an AST-walking interpreter for my personal LISP dialect, written in C. Once it's ready, I would use it to implement a low-level, statically typed language (Schnell) as a Langsam library. The goal is to gain the ability to JIT-compile Schnell code (sexps of a statically typed language) from Langsam. Once this works, I would rewrite Langsam in Schnell so that it becomes a fast bytecode interpreter. With the faster Langsam (and the Schnell built into it) I could build a little OS called "Oben". The OS would first run on top of Linux, then I would attempt to bootstrap the entire stack on bare metal. I already have a Forth dialect implemented in assembly language (Grund/Boden). The idea is to implement Langsam in Grund and then bootstrap the entire Grund -> Langsam -> Schnell -> Oben chain on something like the QEMU q35 machine, later on a Raspberry Pi Zero 2W and maybe even my own hardware (i.e. an FPGA board like what Wirth et al. created for Project Oberon).

2. MTrak - https://github.com/cellux/mtrak

This is a TUI MIDI tracker written in Go. Not too user-friendly: one has to enter raw MIDI messages in hex into the tracks. Can be connected to synths like Fluidsynth or Surge XT via JACK MIDI. Unfortunately it takes a lot of CPU time, probably due to the use of BubbleTea (and no time spent on optimization).

3. Mixtape - https://github.com/cellux/mixtape

Beginnings of a programmable, non-realtime audio sample generator/manipulator written in Go with an OpenGL GUI. I was thinking about how people in the old days cut up the magnetic tape which contained the sound bites and rearranged the pieces to build something new. What if I implemented a data type called "tape" - basically a piece of sound - and then provided operators in a Forth-like language to create and manipulate such tapes? Each tape could be a sound, and these could be stitched together to form songs. Who knows, maybe an entire song could be represented as a hierarchy of these tapes. Each sound or song section could be its own file (*.tape), these could be loaded from each other, maybe even caching the WAV generated from the code of a tape to speed things up when there is a huge hierarchy of tapes in a project. Lots of interesting ideas are brewing in this one.
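A minimal Python sketch of what I have in mind (all names - Tape, splice, loop, reverse - are invented for illustration; the real thing would be Forth-like and operate on actual sample data):

```python
# Hypothetical sketch of the "tape" idea: a tape is an immutable chunk of
# samples, and operators build new tapes out of existing ones, the way you
# would cut and rearrange pieces of magnetic tape.

class Tape:
    def __init__(self, samples):
        self.samples = list(samples)

    def reverse(self):
        # play the tape backwards
        return Tape(reversed(self.samples))

    def loop(self, times):
        # repeat the tape N times
        return Tape(self.samples * times)

    def splice(self, other):
        # stitch two tapes together end to end
        return Tape(self.samples + other.samples)

kick = Tape([1.0, 0.0, -1.0, 0.0])
hat = Tape([0.2, -0.2])
bar = kick.splice(hat.loop(2))  # a tiny "song section" built from tapes
print(len(bar.samples))         # 4 + 2*2 = 8
```

A whole song would then be the top of a hierarchy of such tapes, each potentially loaded from its own *.tape file.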


I am fascinated by the idea of building something like the Lisp Machines or Smalltalk 80 from scratch. Build a Forth in assembly, build a Lisp in Forth, build an OS and computing environment in Lisp. AOT-compile only the Forth interpreter, load and compile the rest from source during system boot, maybe with later stages optimizing the previous stages as the system is assembling itself.

I imagine two languages - Langsam and Schnell - intertwined in some sort of yin-yang fashion. Langsam is slow, dynamic, interpreted; Schnell is fast, static, compiled. Both would be LISPs. Schnell would be implemented as a library in Langsam. If you said (define (add x y) (+ x y)) in Langsam, you would get a Langsam function. If you said (s:define (add (x int) (y int)) (+ x y)) in Langsam, you would get a Langsam function which is a wrapper over a JIT-compiled Schnell function. If you invoked it, the wrapper would take care of the FFI and execution would happen at C speed. Most of the complexity typical of a low-level compiled language could be moved into Langsam. I could have sophisticated type systems and C++-template-like code generation implemented in a comfortable high-level language.

I managed to partially implement this latter part in Clojure (via LLVM) and it works; it would just be too much effort to get it completed.


> Build a Forth in assembly, build a Lisp in Forth

You might already know it, but Dusk OS[1], which is a Forth, has a Lisp implementation[2] which includes a native code compiler for i386, amd64, ARM, RISC-V and m68k. You might consider it a good starting point for your project.

[1]: http://duskos.org/

[2]: https://git.sr.ht/~vdupras/duskos/tree/master/item/fs/doc/co...



I don’t like the language names but I get why you chose them.


At work we are using Metacontroller to implement our "operators". Quoted because these are not real operators but rather Metacontroller plugins, written in Python. All the watch and update logic - plus the resource caching - is outsourced to Metacontroller (which is written in Go). We define - via its CompositeController or DecoratorController CRDs - what kind of resources it should watch and which web service it should call into when it detects a change. The web service speaks plain HTTP (or HTTPS if you want).

In the case of a CompositeController, the web service gets the created/updated/deleted parent resource and any already existing child resources (initially none). The web service analyzes the parent and the existing children, then responds with the list of child resources whose existence and state Metacontroller should ensure in the cluster. If something is left out of the response compared to a previous response, it is deleted.
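Roughly, a sync hook is just a function from observed state to desired children. A minimal Python sketch (the Project parent, the child resources and the field names are invented for illustration; a real plugin sits behind an HTTP endpoint that Metacontroller POSTs the request to):

```python
# Sketch of a CompositeController sync hook: the request carries the parent
# CR plus the observed children; the response lists the desired children.
# Metacontroller reconciles the cluster toward that list - anything returned
# previously but omitted now gets deleted.

def sync(request):
    parent = request["parent"]
    name = parent["metadata"]["name"]

    # Desired state: one namespace and one service account per parent.
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": name},
    }
    service_account = {
        "apiVersion": "v1",
        "kind": "ServiceAccount",
        "metadata": {"name": f"{name}-sa", "namespace": name},
    }

    return {"status": {"ready": True},
            "children": [namespace, service_account]}
```

Everything stateful (watches, caching, diffing against the cluster) stays on the Metacontroller side; the plugin only computes this pure-ish mapping.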

Things we implemented using this pattern:

- Project: declarative description of a company project, child resources include a namespace, service account, IAM role, SMB/S3/FSX PVs and PVCs generated for project volumes (defined under spec.volumes in the Project CR), ingresses for a set of standard apps

- Job: high-level description of a DAG of containers, the web service works as a compiler which translates this high-level description into an Argo Workflow (this will be the child)

- Container: defines a dev container, expands into a pod running an sshd and a Contour HTTPProxy (TCP proxy) which forwards TLS-wrapped SSH traffic to the sshd service

- KeycloakClient: here the web service is not pure - it talks to the Keycloak Admin REST API and creates/updates a client in Keycloak whose parameters are given by the CRD spec

So far this works pretty well and makes writing controllers a breeze - at least compared to the standard kubebuilder approach.

https://metacontroller.github.io/metacontroller/intro.html


As other sibling comments suggest these use cases are better solved with a generator.

The rendered manifest pattern is a simpler alternative. Holos [1] is an implementation of the pattern using well typed CUE to wrap Helm and Kustomize in one unified solution.

It too supports Projects; they're completely defined by the end user and result in the underlying resource configurations being fully rendered and version controlled. This allows for nice diffs, for example - something difficult to achieve with plain ArgoCD and Helm.

[1]: https://holos.run/docs/overview/


The rendered manifests pattern is a great read by itself: https://akuity.io/blog/the-rendered-manifests-pattern


Curious why you're using a controller for these aspects versus generating the K8s objects as part of your deployment pipeline and just applying them? The latter gives you versioned artifacts you can roll forward and back, plus independent deployment of these supporting pieces with each app.

Is there runtime dynamism that you need the control loop to handle beyond what the built-in primitives can handle?


Some of the resources are short-lived, including jobs and dev containers. The corresponding CRs are created/updated/deleted directly in the cluster by the project users through a REST API. For these, expansion of the CR into child resources must happen dynamically.

Other CRs are realized through imperative commands executed against a REST API. Prime examples are KeycloakRealm and KeycloakClient, which translate into API calls to Keycloak, or FSXFileSystem, which needs Boto3 to talk to AWS (at least for now, until FSXFileSystem is also implemented in ACK).

For long-lived resources, up-front (compile-time?) expansion would be possible, we just don't know where to put the expansion code. Currently long-lived resource CRs are stored in Git and deployment is handled with Flux. When projects want an extra resource, we just commit it to Git under their project-resources folder. I guess we could somehow add an extra step here - running a script? - which would do the expansion and store the children in Git before merging desired state into the nonprod/prod branches, I'm just not clear on how to do this in a way that feels nice.

Currently the entire stack can be run on a developer's laptop, thanks to the magic of Tilt. In local dev it comes in really handy that you can just change a CR and the children are synced immediately.

Drawbacks we identified so far:

If we change the expansion logic, child resources of existing parents are (eventually) regenerated using the new logic. This can be a bad thing - for example jobs (which expand into Argo Workflows) should not change while they are running. Currently the only idea we have to mitigate this problem is storing the initial expansion into a ConfigMap and returning the original expansion from this "expansion cache" if it exists at later syncs.
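A rough Python sketch of the cache idea (load_cached/save_cache stand in for ConfigMap reads/writes and are invented; the point is that the first expansion is frozen and replayed on later syncs):

```python
# Sketch of the "expansion cache": compute the children once, persist them
# next to the parent, and return the cached copy on later syncs so a running
# job never changes under a new version of the expansion logic.

import json

_fake_store = {}  # stands in for a per-parent ConfigMap


def load_cached(parent_uid):
    return _fake_store.get(parent_uid)


def save_cache(parent_uid, children):
    _fake_store[parent_uid] = json.dumps(children)


def sync_with_cache(parent, expand):
    uid = parent["metadata"]["uid"]
    cached = load_cached(uid)
    if cached is not None:
        return json.loads(cached)  # frozen at first expansion
    children = expand(parent)
    save_cache(uid, children)
    return children
```

Garbage collection of the cache entry would ride on the parent's deletion (e.g. via an owner reference on the ConfigMap).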

Sometimes the Metacontroller plugin cannot be a pure function and executing the side effects introduces latency into the sync. This didn't cause any problems so far but maybe will as it goes against the Metacontroller design expressed in the docs.

Python is a memory hog: our biggest controllers can take ~200 MB.


We've used an artifact store like Artifactory to store the generated / expanded K8s YAML files, ending up with three pieces: 1) a versioned and packaged config generation system that your DevOps team owns - you'd have test and production versions of this that all applications use in their CI pipeline; 2) a templated input configuration that describes the unique bits per service (this configuration file is owned by each application team); 3) the output of #1 applied to #2, versioned in an artifact store and generated by the CI pipeline.

And finally, a Kustomize step can be added at the end to support configuration that isn't supported by #1 and #2, without requiring teams to generate all the K8s config pieces by hand.


The choice is always between a controller and a generator.

The advantage of a controller is that it can react to external conditions, for example nodes/pods failing. This is great for e.g. a database where you need to fail over and update EndpointSlices. The advantage of a generator is that it is easier to test, it can be dry-run, and it is much simpler.

All of your examples seem to me like use cases that would be better implemented with a generator (e.g. Helm, or any custom script outputting YAML) than a controller. Any reason you wrote these as controllers anyway?


I've seen different approaches to controllers; sometimes it should have been a generator instead, but the problem with generators is that they don't allow (in the same sense) for abstractions at the same level as controllers.

E.g. at one company I worked at, they made a manifest to deploy apps that, in v1, was very close to a Deployment. It felt overkill. As they iterated, suddenly you got ACLs that changed NetworkPolicy in Calico (yes, can be done with a generator), then they added Istio manifests, then they added app authorizations for Entra ID - which again provisioned an Entra ID client and injected a certificate into pods. All I did was add "this app, in this namespace, can talk to me" and I got all of this for free. They code in the open, so some of the documentation is here: https://docs.nais.io/explanations/nais/

One day, they decided to change from Istio to Linkerd. We users changed nothing. The point is, the controller was two things: 1) a golden path for us users, and 2) an abstraction over some features of Kubernetes for the platform team themselves. Although I do see that it might be easy to make poor abstractions as well: e.g. just because you don't create a Deployment (it's done for you), you still have to own that Deployment and all the other kube constructs.

I'm currently in an org that does not have this and I keep missing it every single day.


Even if a controller is necessary, wouldn't you still want to have a generator for the easy stuff?

Kinda like "functional core, imperative shell"?


At work we are using nolar/kopf for writing controllers that provision/manage our Kubernetes clusters. This also includes managing any infrastructure-related apps that we deploy on them.

We were using whitebox-controller at the start, which is also like Metacontroller in that it runs your scripts on Kubernetes events. That was easy to write. However, not having full control over the lifecycle of the controller code gets in the way from time to time.

Considering you are also writing Python, did you review Kopf before deciding on Metacontroller?


Yes, we started with Kopf.

As we understood it, Kopf lets you build an entire operator in Python, with the watch/update/cache/expansion logic all implemented in Python. But the first operator we wrote with it just didn't feel right. We had to talk to the K8s API from Python to do all the expansions. It was too complex. We also had aesthetic issues with the Kopf API.

Metacontroller gave us a small Go binary which takes care of all the complex parts (watch/update/cache). Having to write only the expansion part in Python felt like a great simplification - especially now that we have Pydantic.


Based on my personal experience with psychedelics I used to think that these medicines could be a tool to wake people up. Actually, after seeing what else has been tried and failed, I came to the conclusion that psychedelics are the _only_ reliable tool in this arena because of their power. I sincerely thought that psychedelics could make people reconnect to the reality of how things really are. That if they wake up to reality, they would 1.) freak out about our situation and then have an incentive to restrain from doing the "bad" things and 2.) rediscover this deep connection with Nature which is the only viable basis for a harmonious relationship with it.

Unfortunately, over the decades I lost my faith even in this. Apparently, to have this kind of experience on a psychedelic it's not enough to just dose someone. Of 1000 people taking psychedelics, maybe 1 gets such an experience. And although the scientists at Johns Hopkins are trying to figure out how to increase that percentage, it's still not enough.

Nowadays I tend to think that all of this is a God's dream and this God for some reason does not want to wake up too soon from its dream.


Or if laws were executable descriptions of how to make certain changes to world state.

There could be some sort of VM (state machine?) which executes these laws (programs). A queue accepts change requests from the edges; e.g. citizen1 wanting to sell her car to citizen2 would generate a change request. The queue processor validates the change request, finds the appropriate reducer (= law) for it, applies the reducer to the current world state + change request, and derives the new world state.
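A toy Python sketch of the reducer idea (the sell_car law, the request shape and the world-state layout are all invented for illustration):

```python
# Toy sketch of "laws as reducers": each law is a pure function
# (state, change_request) -> new_state, selected by request kind.

def sell_car(state, req):
    cars = dict(state["cars"])
    if cars.get(req["car"]) != req["seller"]:
        raise ValueError("seller does not own this car")  # validation step
    cars[req["car"]] = req["buyer"]
    return {**state, "cars": cars}

LAWS = {"sell_car": sell_car}  # laws could themselves live in the state


def process(state, queue):
    for req in queue:
        # find the matching reducer and derive the new world state
        state = LAWS[req["kind"]](state, req)
    return state

world = {"cars": {"ABC-123": "citizen1"}}
world = process(world, [{"kind": "sell_car", "car": "ABC-123",
                         "seller": "citizen1", "buyer": "citizen2"}])
print(world["cars"]["ABC-123"])  # citizen2
```

Since the reducers are pure, replaying the queue from any snapshot gives the same state, which is also what makes the edge/offline case at least conceivable.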

The programs (laws) themselves could also be objects of the world state. Changing the law is then another change request (self-modifying code).

One big issue with this is that the reducers should probably also work at the edges. Paying with a credit card in a shop should work even if the Internet is down. Diverging state histories should be the norm, not the exception. We should figure out how to ensure we can always resolve the merge conflicts.


Bitcoin.


I'm really fond of the idea of writing music like this.

Of all available implementations of the idea, I probably like Extempore (https://github.com/digego/extempore) the most. Extempore provides a low-level C-like language (xtlang) which compiles to native code via LLVM and can be meta-programmed from a variant of Scheme (TinyScheme, I believe). This arrangement makes it possible to generate the code for the audio graph from Scheme, compile/optimize it via LLVM, then drive it in a live-coding fashion from Emacs. Best of both worlds (high and low).

My personal, much simpler attempt in this space is Cowbells (https://github.com/omkamra/cowbells) - with this one you can live-code FluidSynth (MIDI soundfonts) from Clojure + CIDER + Emacs, representing musical phrases either via Clojure data structures or an alternative text-based syntax (which is translated into the former by a compiler).

