Google App Engine originally used a custom containment strategy.
Back in the day, I was on the team that added Python 2.7 support to App Engine and we were experimenting with a different containment approach.
But Python is a complex language to support - you need to support WSGI, dynamic loading (for C extensions), a reasonably performant file system (Python calls `stat` about a billion times before actually importing a file), etc.
So our original runtime was actually Brainf#ck. At one point, if you had guessed that Google supported it, you could have written your (simple) webapp in Brainf#ck and Google would have scaled it up to hundreds of machines if needed ;-)
Aw darn! Around that time, I was at Google, and there was a thread on eng-misc (or possibly eng-misc-mtv; it's been lost to the sands of time) with people contributing programs that would flip a virtual coin 50 times and output the sequence of heads and tails. I contributed one in BF because I had some spare time waiting for a deploy job, and I've been joking ever since that I've "written BF code professionally".
And now you tell me that that could have been for real. :-(
I was also really confused by this comment until I re-read it later.
At first I thought OP meant “we wrote our container runtime in Brainfuck,” but what I think they meant was “we first deployed Brainfuck support in order to test our new runtime on something simpler than Python.”
great news! the business is delighted by the scalability and resilience demonstrated by this proof-of-concept and alignment with cloud-native strategy. the business has decided to use this as the platform to deliver the new "hello $customer_firstname $customer_lastname" product to production.
since we'll be executing brainfuck programs containing customer PII, the design may need a few standard technical enhancements to implement an acceptable level of controls to reduce the risk of customer data breaches.
each edge in the design must use transport layer security (either using built-in service capability or, preferably, by addition of a service mesh to the architecture).
use of plaintext database passwords in deployment configuration is unacceptable. fresh secrets need to be generated and the deployment configuration needs to be modified to read secrets from our existing secret management platform. please follow least privilege principles and define minimal roles & access to secrets for each service in the cluster.
since the brainfuck machine will be processing workloads for multiple customers, in addition to coarse-grain service-level access control it will also need to enforce finer-grain access controls to restrict which parties can read from or write to which ranges of brainfuck memory cells containing information of distinct customers. this must follow our established standard by adding authentication and authorisation to each service based on stateless access tokens. please update the design to integrate with the existing token issuer.
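to make that last control concrete, here's a minimal sketch of what a cell-range guard might look like on the interpreter side. everything in it is hypothetical - the claim names, the `verify_token` placeholder, the 8-bit cells - and the real integration would of course depend on what the existing token issuer actually provides.

```python
# toy sketch only: range-scoped access control for brainfuck memory cells.
# verify_token() and the claim layout are hypothetical stand-ins for the
# existing token issuer; nothing here is a real design.

class AccessDenied(Exception):
    pass

def verify_token(token):
    """Placeholder: verify the stateless access token's signature and return
    its claims, e.g. {"sub": "svc-hello", "cell_start": 0, "cell_end": 1023}."""
    raise NotImplementedError("integrate with the existing token issuer here")

class GuardedTape:
    """A brainfuck tape that only permits reads/writes inside the cell range
    granted by the caller's token claims."""

    def __init__(self, size, claims):
        self.cells = [0] * size
        self.lo, self.hi = claims["cell_start"], claims["cell_end"]

    def _check(self, i):
        if not (self.lo <= i <= self.hi):
            raise AccessDenied(f"cell {i} outside granted range [{self.lo}, {self.hi}]")

    def read(self, i):
        self._check(i)
        return self.cells[i]

    def write(self, i, value):
        self._check(i)
        self.cells[i] = value % 256  # 8-bit cells, as in most implementations

# usage (hypothetical): claims = verify_token(request_token)
#                       tape = GuardedTape(30000, claims)
```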
I can't name my supplier (a very well known company) that is pushing something like this, and I'm already the black sheep in this area of the business for being against it. Since our top directors have already bought the dream, we will get it.
This is gold. It reflects too many real engineering projects so well. Sometimes I'm the guy to point this out, but too often I catch myself arguing for adding unnecessary complexity (unfortunately usually in hindsight).
There should always be one engineer playing devil's advocate and at least trying to argue why a new system/service isn't necessary.
I came onto a team to support such an overengineered project. On the plus side, it keeps me busy and teaches me new skills. On the negative side, my job is to keep the plates spinning on a Rube Goldberg machine.
Our Brainfuck cluster is invaluable to our research at Synergistic Associates. This is an endorsement from our umbrella of companies, since I own them all and have placed my Amazon Echo robots on the boards (after I downloaded my AI written in Brainfuck into each of them!).
Our support incidents dropped to ZERO when we switched to using Brainfuck on the backend. Prior to Brainfuck, we had used Rust, Go, Python, and even Bash on the backend to implement our proprietary secret sauce.
Truth be told, we actually rewrote our entire backend infrastructure in C from that earlier sticky mess of glue languages holding together the object oriented stuff. C gave us a lot of flexibility, but I started to receive mountains of sales calls and emails from people wanting to sell me security products and internal trainings for how to write secure software. That was sort of okay, but the calls about my expired warranty were just too much to handle.
Somehow, those security research companies knew we had started using C!
Well... Let me tell you, once I switched over to a few one-liners of Brainfuck on the backend, I deleted all the other servers and software. We no longer needed them. Brainfuck really has solved all our problems, IaaS is no longer needed, SaaS is a moot point, DaaS is gone since my users now don't even get remote CLIs, etc. Everything is good for us with Brainfuck!
Oh yeah - the money is great, since I fired everyone else. Only me and my army of AI workers written in Brainfuck, downloaded to Amazon Echo devices, remain.
That's not enough, you should include a cache layer whose key is the neighboring memory state of length 2k (give or take) plus the instruction pointer. You can then simulate at most k instructions at once on a hit, a steady improvement over the non-cached version!
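For the curious, here's roughly what that might look like: a toy interpreter where the cache key is the instruction pointer plus the window of ~2k cells around the data pointer, and a hit replays the net effect of up to k instructions in one go. This is just a back-of-the-envelope sketch (fixed tape, 8-bit cells, `,` treated as a no-op), not anyone's production runtime.

```python
# Back-of-the-envelope sketch of the proposed cache layer for a Brainfuck
# interpreter. Cache key: (instruction pointer, window of ~2k cells around
# the data pointer). Cache value: the net effect of up to k instructions run
# from that state. Toy assumptions: fixed tape, 8-bit cells, ',' is a no-op,
# and cached runs never let the data pointer leave the window.

def run(program, tape_size=30000, k=16):
    # Precompute matching brackets so '[' / ']' jumps are O(1).
    jump, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i

    tape = [0] * tape_size
    ip, dp, out = 0, 0, []
    cache = {}

    def step(ip, dp):
        """Execute one instruction; return the new (ip, dp)."""
        c = program[ip]
        if c == '+': tape[dp] = (tape[dp] + 1) % 256
        elif c == '-': tape[dp] = (tape[dp] - 1) % 256
        elif c == '>': dp += 1
        elif c == '<': dp -= 1
        elif c == '.': out.append(chr(tape[dp]))
        elif c == '[' and tape[dp] == 0: ip = jump[ip]
        elif c == ']' and tape[dp] != 0: ip = jump[ip]
        return ip + 1, dp

    while ip < len(program):
        lo, hi = dp - k, dp + k
        window = tuple(tape[max(lo, 0):hi + 1])
        key = (ip, lo, window)
        hit = cache.get(key)
        if hit is not None:
            # Hit: replay the effect of up to k instructions at once.
            new_ip, shift, new_window, cached_out = hit
            tape[max(lo, 0):hi + 1] = new_window
            out.append(cached_out)
            ip, dp = new_ip, dp + shift
            continue
        # Miss: simulate up to k steps, stopping before the data pointer
        # could touch the window edge, so the effect depends only on the key.
        start_dp, out_mark, steps = dp, len(out), 0
        while steps < k and ip < len(program) and lo < dp < hi:
            ip, dp = step(ip, dp)
            steps += 1
        cache[key] = (ip, dp - start_dp,
                      list(tape[max(lo, 0):hi + 1]), ''.join(out[out_mark:]))

    return ''.join(out)
```

Quick smoke test: `run("++++++++[>++++++++<-]>+.")` should return `"A"`. Whether hashing a 33-cell window ever pays for itself is left as an exercise for the business.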
I think this can be legitimately called a Clusterf#ck.