Private companies are required to do a good job and balance spending to stay alive. Government agencies can do the most horrible job for enormous amounts of money and just keep going.
First of all, that isn't evidence of anything, and it explains how we got into this pickle to begin with... people making shit up.
Secondly, private companies don't have to do a good job. They have to turn a profit, which can be exploitative and not necessarily legal.
Third, government agencies have budgets and audits like many public companies but unlike most private companies. If a government agency exceeds its budget, it is essentially bankrupt.
Finally, if a government agency fails an audit, people can be fined, fired, or even prosecuted. Government agencies are always audited, unlike many commercial companies, which are only audited when regulation requires it or when an investigation gives cause. Many of the people recently fired by DOGE are auditors who were preventing the very kind of behavior you attribute to government.
Not to be annoying, but maybe one of the most useful things git does for me outside of the usual SCM stuff is git-bisect. It's saved me many hours of debugging.
If you ever run into a case where something is broken (that you can measure, like a test or broken build) but it’s not obvious what change caused the fault, first go to a commit where you know the fault is present.
$ git bisect start
$ git bisect bad
Then go to a commit where you know the fault is NOT present. Go back far if you can! This can be in another branch as long as your start and end spots are connected somehow.
$ git checkout abc123
$ git bisect good
And after that bisect command, your HEAD will get moved to the midpoint between the good and bad commits. Check if the fault is still there, and run "git bisect bad" if it is, or "git bisect good" if it isn't.
It will keep jumping you to the midpoint between the nearest bad and good commits until you end up at the one precise commit that introduced your fault.
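If the fault check is scriptable, you can also let bisect drive itself. Here check.sh is just a placeholder for whatever command exits 0 when the fault is absent and non-zero when it's present (exit 125 to skip a commit); when you're done, reset puts HEAD back where you started:
$ git bisect run ./check.sh
$ git bisect reset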
This works extremely well for configuration changes specifically, where maybe it doesn't break in CI but it does on your local dev machine. You can also use this for non-text files like images, to find out when one part of an image was changed, for example.
—
Also if you just want to make normal SCM stuff easier,
Thanks for sharing your workflow - nice and simple! And it sounds like you've got a rhythm down with those core commands, which I know is the case for many Git users.
One of the things I'm trying to explore with these visual and gamified tools is how to help newer Git folks or even users who mostly live in that commit/push/pull flow get a clearer mental model of what's actually happening under the hood.
Git has a really wide breadth of functionality that is kind of interesting on its own merit, but also useful for a plethora of different tasks. For better or worse even Git experts can always find ways to expand their knowledge :)
I'm sorry, but it's been ten years and you haven't learned how to handle other git situations? And you're still dependent on Google? Is there a reason why you haven't sat down to learn some more?
Serverless just means that a hosting company routes your domain to one or more servers that the hosting company owns and puts your code on, and that the hosting company can spin up more or fewer servers based on traffic. TL;DR: serverless uses many, many servers, just none that you own.
More specifically: no instances that you maintain or manage. You don't care which machine your code runs on, or whether all of your code is even on the same machine.
Compute availability is lumped into one gigantic pool, and all of the concerns below the execution of your code are managed for you.
Interesting article. 16-24 bytes per allocation is pretty big and sounds like it would be a bit too much overhead; an actual demo would be nice.
My approach with Valk is to allocate per type size and in incrementally sized blocks. These blocks work a lot like tiny bump allocators and can be reset with one line of code without losing the objects in them that are still in use.
If overall memory usage has increased by a certain percentage, we trace the stack, clean long-lived objects out of the blocks, and free blocks that are completely empty.
My benchmark performs 2x faster than Rust and 4x faster than Go. We use 8 bytes per allocation.
Other nice people in this comment thread pointed out that I can bring it down to 8; I had just missed some simple tricks I could do (like overwriting the object data with the forwarding pointer, since that data is garbage anyway).
So 8 bytes per allocation sounds about right.
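For anyone who wants to picture the mechanics, here's a rough C sketch of the general idea; it's my own illustration, not Valk's actual code, and the names (Block, block_reset, set_forward) are made up:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* One block holds fixed-size slots for a single type. */
typedef struct Block {
    uint8_t *mem;   /* backing storage */
    size_t   slot;  /* size of one object of this type */
    size_t   cap;   /* number of slots */
    size_t   used;  /* bump index */
} Block;

static Block block_init(size_t slot_size, size_t slots) {
    Block b = { malloc(slot_size * slots), slot_size, slots, 0 };
    return b;
}

/* Bump allocation: just advance an index. */
static void *block_alloc(Block *b) {
    if (b->used == b->cap) return NULL;       /* block full; caller grabs a new block */
    return b->mem + (b->used++) * b->slot;
}

/* The "reset with one line" part: forget everything in the block. */
static void block_reset(Block *b) { b->used = 0; }

/* When a live object is moved out of a block, its old copy's payload is
   garbage, so the forwarding pointer can overwrite the payload itself
   instead of needing a separate header field; that's how per-object
   overhead can stay at a single 8-byte word. */
static void set_forward(void *old_copy, void *new_location) {
    *(void **)old_copy = new_location;
}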
How do you deal with variable size types like arrays?
We don't have static arrays for GC types. Creating multiple objects is always one by one, and we would put them in a dynamic array. We do support non-GC structs, which allow you to create static arrays in a single allocation. I haven't spent much time on this problem yet. I might figure it out in the future, but it's kinda low-priority for me.
I haven't used htmx yet, but it sounds like a breath of fresh air: no npm library with 1000 dependencies and slow build times, just a plain, simple JS library like in the good old days ^_^
These kinds of tactics work for simple examples. In real-world HTTP servers you'll retain memory across requests (caches) and you'll need a way to handle blocking IO. That's why we most commonly use GC'd/ownership languages for this, plus things like goroutines/tokio/etc.; web devs don't want to deal with memory themselves.
It scales to complex examples as well. Retained memory would be handled with its own allocator: for a large data structure like an LRU cache, one would initialize it with a pointer to the allocator, and use that internally to manage the memory.
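As a rough illustration of that split (not httpz or any real API; Allocator, LruCache, and the function names here are made up), the long-lived structure just keeps the allocator it was handed and routes every internal allocation through it, while per-request scratch goes through a different, short-lived allocator:

#include <stddef.h>

/* A minimal allocator interface; Zig's std.mem.Allocator gets passed around
   explicitly in much the same spirit. */
typedef struct Allocator {
    void *(*alloc)(struct Allocator *self, size_t n);
    void  (*free)(struct Allocator *self, void *p);
} Allocator;

typedef struct LruCache {
    Allocator *allocator;  /* long-lived allocator, not the per-request arena */
    /* ... buckets, intrusive list, etc. ... */
} LruCache;

void lru_init(LruCache *c, Allocator *allocator) {
    c->allocator = allocator;
}

void *lru_insert(LruCache *c, size_t entry_size) {
    /* every internal allocation goes through the cache's own allocator */
    return c->allocator->alloc(c->allocator, entry_size);
}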
Blocking (or rather, non-blocking, which is clearly what you meant) IO is a different story. Zig had an async system, but it had problems and got removed a couple point releases ago. There's libxev[0] for evented programs, from Mitchell Hashimoto. It's not mature yet but it offers a good solution to single-threaded concurrency and non-blocking IO.
I don't think Zig is the best choice for multithreaded programs, however, unless they're carefully engineered to share little to no memory (using message passing, for instance). You'd have to take care of locking and atomic ops manually, and unlike memory bugs, Zig doesn't have a lot of built-in support for catching problems with that.
A language with manual memory allocation isn't going to be the language of choice for writing web servers, for pretty obvious reasons. But for an application like squeezing the best performance out of a resource-constrained environment, the tradeoffs start to make sense.
Off the top of my head, I was wondering... for software like web services, isn't it easier and faster to use a bump allocator per request, and release the whole block at the end of it? Assuming the number of concurrent requests/memory usage is known and you don't expect any massive spike.
I am working on an actor language kernel and was thinking of adopting the same strategy, i.e. using a very naive bump allocator per actor, with the idea that many actors die pretty quickly so you don't have to pay the cost of GC most of the time. You can run the GC once memory usage crosses a certain threshold.
The problem _somebody_ between the hardware and your webapp has to deal with is fragmentation, and it's especially annoying with requests which don't evenly consume RAM. Your OS can map pages around that problem, but it's cheaper to have a contiguous right-sized allocation which you never re-initialize.
Assuming the number of concurrent requests is known and they have bounded memory usage (the latter is application-dependent, the former can be emulated by 503-erroring excess requests, or something trickier if clients handle that poorly), yeah, just splay a bunch of bump allocators evenly throughout RAM, and don't worry about the details. It's not much faster though. The steady state for reset-arenas is that they're all right-sized contiguous bump allocators. Using that strategy, arenas are a negligible contribution to the costs of a 200k QPS/core service.
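For what it's worth, here's a minimal C sketch of one such right-sized bump allocator, assuming a fixed per-request budget (the 1 MiB figure is an arbitrary placeholder): each worker owns one region, bumps into it while serving a request, and the end-of-request "free everything" is a single store:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define ARENA_SIZE (1u << 20)  /* per-request budget, application-dependent */

typedef struct Arena {
    uint8_t *base;
    size_t   used;
} Arena;

static Arena arena_init(void) {
    Arena a = { malloc(ARENA_SIZE), 0 };
    return a;
}

static void *arena_alloc(Arena *a, size_t n) {
    n = (n + 7) & ~(size_t)7;                   /* keep allocations 8-byte aligned */
    if (a->used + n > ARENA_SIZE) return NULL;  /* over budget: fail this request */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_reset(Arena *a) { a->used = 0; }

/* One arena per worker/coroutine, reused for the life of the process. */
static void handle_request(Arena *a /* , request, response */) {
    void *scratch = arena_alloc(a, 256);  /* headers, parsed params, body buffers... */
    (void)scratch;
    /* ... build and send the response ... */
    arena_reset(a);                       /* all per-request memory released at once */
}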
If you never cache any data, sure, you can use a bump allocator. Otherwise it gets tricky. I haven't really worked with actors, but from the looks of it, they would create a lot of bottlenecks compared to coroutines, and that would probably throw all your bump-allocator performance benefits out the window. As for the GC thing: you can't 'just' call a GC. Either you use a bump allocator or you use a GC. Your GC can't steal objects from your bump allocator. It can copy them... but then the reference changes, and that's a big problem.
I think this comment assumes that you're using one allocator, but it's probably normal in Zig to use one allocator for your caches, and another allocator for your per-request state, with one instance of the latter sort of allocator for each execution context that handles requests (probably coroutines). So you can just have both, and the stuff that can go in the bump allocator does, and concurrent requests don't step on each other's toes.
Have you looked at how Erlang does memory management within its processes? You definitely can "get away" with a lot of things when you have actors you can reasonably expect will be small scale, if you are absolutely sure their data dies with them.
The trick to Erlang's memory management is that data is immutable and never shared, so all the complication and contention around GC and atomic locks just disappear.
The key thing (as I understand it) is that each process naturally has a relatively small private set of data, so Erlang can use a stop-the-process semispace copying collection strategy and it's fast enough to work out fine.
Since nothing can be writing to it during collection anyway, I'm not sure the language-level immutability makes a lot of difference to the GC itself.
This example came from a real world http server. Admittedly, Zig's "web dev" community is small, but we're trying :) I'm sure a lot could be improved in httpz, but it's filling a gap.
You can use these patterns for per-request resources that persist across I/O calls by using async if you are on an old version of Zig, or zigcoro while you wait for the feature to return to the language. zigcoro's API is designed to make the eventual transition back to language-level async easy.
If you change the language, you have no ecosystem. You can't say it has a big ecosystem "if everyone ports their code"; by that logic every language has a big ecosystem. Anyhow, JS has a lot of unexpected, strange behaviour; I really would not recommend such a language in 2024.
I stopped reading news in the last 3 years and I haven't had one "damn, I didn't know about that" moment. News is truly unimportant.
As for "nowadays there's no difference between those two things": I think the "nowadays" part is wrong. People have been like this since the beginning of mankind. Just look at medieval witch hunts or WW2 Germany.
However, I think it's an interesting story. It kind of revolves around the philosophy of what is "true". I can't answer that, but it's interesting to think about.