
Does it support zero-downtime deploys?

Why not? Install Traefik or any other load balancer, set up two services, and restart them one after the other.

https://kamal-deploy.org/docs/configuration/proxy/

I think GP's point was that Kamal has all of these things already, so you don't have to set them up.


Precisely. I implemented a kind of blue-green deployment myself with both systemd and dockerd, but it was an imperfect and incomplete solution. Kamal has put much more effort into this, and it seems more convenient and reliable (though I haven't tried it in production yet).

Ah yes, my favourite thing to have to do: rolling my own deploys and rollbacks.

It’s stuff like this, a thousand papercuts, that dissuades me from using these “simpler” tools. By the time you’ve rebuilt by hand what you need, you’ve just created a worse version of the “more complex” solution.

I get it if your workload is so simple or low-requirement that zero-downtime deploys, rollbacks, health/liveness checks, automatic volumes, monitoring, etc. are features you don’t want or need, but “it’s just as good, just DIY all the things” doesn’t make it a viable alternative in my mind.


Sure, but Kamal gaining all those features means it strays close to Kubernetes in complexity, and it quickly becomes "Why not Kubernetes? At least that is massively popular, with a ton of support."

Kamal is doing most of this, but on a single node. This is the limitation that differentiates it from k8s, but also makes it much simpler.

I disagree. An opinionated tool can be as powerful as, but much simpler than a generic tool.

I'm curious to hear why most practitioners would call Zig's allocators memory-safe. Do you mean std.heap.GeneralPurposeAllocator, which protects against use-after-free when building in Debug and ReleaseSafe modes (not ReleaseFast)?

I must say that I really appreciate your patience in addressing these comments. If the possibility of race conditions leading to UB and the lack of sum types in Go are so bad for security, then it shouldn't be difficult to observe exploitable vulnerabilities in real-world Go code bases.

In a specific scenario, I have made use of interface values and type switches as a form of "tagged union" / "sum type".

All it requires is that the interface and the structs implementing it are defined in the same package, and that the interface requires an unexported method.

I used that for a message type, passing instances of it over a channel of the interface type and demuxing with a type switch on the message.

One could use a similar scheme for function return values; for the simple error/no-error case it would not be idiomatic, but that should not prevent one from doing so if desired.


Yes, that’s a possible pattern to emulate a sum type in Go.

Agreed. Zig has all the advantages of Go's explicit errors as values without the drawbacks.
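A minimal sketch of what that looks like in Zig (illustrative names, 0.13-era syntax): the error set is part of the return type, and the compiler forces callers to either propagate or handle it.

    const std = @import("std");

    const ParseError = error{Empty};

    // The error set is part of the return type; callers can't silently ignore it.
    fn firstByte(s: []const u8) ParseError!u8 {
        if (s.len == 0) return error.Empty;
        return s[0];
    }

    pub fn main() !void {
        const a = try firstByte("zig"); // `try` propagates the error upward
        const b = firstByte("") catch '?'; // `catch` handles it inline with a default
        std.debug.print("{c} {c}\n", .{ a, b });
    }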

I really like Zig, and I’ve been thinking about using it for developing servers offering APIs over HTTP, using arenas bound to request lifetime. I think I would be comfortable developing like this myself. But all the devs in my org have only ever used managed-memory languages such as Java, C#, Python or JavaScript, which makes me hesitant, as I’m wondering about the learning curve, and of course the risk of use-after-free. Not something I would do anyway before Zig reaches 1.0.

I wrote substantial amounts of C, and Pascal/Delphi before that, before learning Zig, so you and I wouldn't see the same learning curve. That said, I found it straightforward to take up. Andrew Kelley places a great emphasis on simplicity in the sense Rich Hickey uses the term, so Zig has a small collection of complete solutions which compose well.

Now is a great time to pick up the language, but I would say that production is not the right place to do that for a programmer learning memory management for the first time. Right now we're late in the release cycle, so I'd download a nightly rather than use 0.13, if you wanted to try it out. Advent of Code is coming up, so that's an option.

Using a manually memory-managed language means you need to design a memory policy for the code. Zig's GeneralPurposeAllocator will catch use-after-free and double-free in debug mode, but that can only create confidence in memory-handling code if and when you can be sure that there aren't latent bugs waiting to trigger in production.
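A small illustration of what the GPA catches (hypothetical snippet, 0.13-era API):

    const std = @import("std");

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit(); // in Debug mode, also reports leaks here

        const a = gpa.allocator();
        const p = try a.create(u32);
        a.destroy(p);
        // a.destroy(p); // uncommenting this double free panics with a stack trace in Debug/ReleaseSafe
    }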

Arenas help with that a lot, because they reduce N allocations and frees to 1, for any given set of allocations. But one still has to make sure that the lifetime of allocations within the arena doesn't outlast the round, and you can only get that by design in Zig: lifetimes and ownership aren't part of the type system like they are in Rust. In practice, or I should say with practice, this is readily achievable.
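Something like this, as a sketch of the per-request arena pattern (handleRequest and the formatting call are just illustrative):

    const std = @import("std");

    fn handleRequest(gpa: std.mem.Allocator) !void {
        // One arena per request: every allocation below shares its lifetime.
        var arena = std.heap.ArenaAllocator.init(gpa);
        defer arena.deinit(); // N allocations, one free, when the request ends

        const alloc = arena.allocator();
        const body = try std.fmt.allocPrint(alloc, "hello {s}", .{"world"});
        _ = body; // write the response; nothing allocated here may outlive the arena
    }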

At current levels of language maturity, small teams of experienced Zig developers can and do put servers into production with good results. But it's probably not time for larger teams to learn as they go and try the same thing.


I started programming in Pascal, C and C++, so personally I’m fine with manual memory management, especially with a language like Zig. I actually find it quite refreshing. I’m just wondering if it’s possible to “scale” this approach to a team of developers who may not have that past experience (having only worked with GCed languages) without ending up with a code base littered with use-after-free errors.

There was no backlash against Java, Swift, Kotlin and TypeScript because they all manage memory automatically. You don’t need to learn much to use them, and they don’t constrain how you write your code. Rust is very different in that regard: you need to learn about memory ownership and borrowing, which requires effort and also constrains your design. I think that explains the (limited but real) backlash.


I'll admit that there are certain patterns/data structures/etc. that are awkward to implement in Rust, but to write good C++ you are essentially required to think about ownership and lifetimes to some degree anyway (despite the greater lenience afforded compared to Rust, and the lack of static checking). For anyone familiar with modern C++, I don't think the shift in mental model should be too huge, so the degree of backlash surprises me somewhat.


When Swift first came out, there was a subset of Objective-C developers who hated Swift’s take on Optional. In Objective-C you can freely send messages to nil and it just returns nil. In Swift you have to explicitly mark whether a value is optional and handle it being nil if so, and even though this often means adding just a single character (foo?.bar), there were people who thought Swift was just making things more complicated for no benefit.


Happily, they have been slowly replacing C and C++ in GUI development and distributed computing systems for the last 20 years.

And yes, there has been plenty of anti-GC hate and FUD from those who think all automatic resource management systems are born alike.


You’re right, I forgot about the skepticism GCed languages initially faced.


I don’t remember this.


Exactly! Stacked diffs/changes are the way, with one commit per PR. https://www.stacking.dev/


> The write api is sync, but it has a hidden async await: when you do your next output with a response, if the write fails the runtime will replace the response with a http failure. This allows the runtime to auto-batch writes and optimistically assume they will succeed, without the user explicitly handling the errors or awaits.

It reminds me of PostgreSQL's commit_delay, even though it's not exactly the same principle: https://www.postgresql.org/docs/current/runtime-config-wal.h...

Litestream, mentioned in the post, also suggests a similar technique.


You can scale quite far using SQLite. That's what Basecamp is doing with their new self-hosted chat app, ONCE Campfire, which is designed to scale to hundreds or even thousands of concurrent users with the right hardware: https://once.com/campfire.


I wonder why it needs 2 GB of RAM even for a low number of users, though.


It ships as a Docker container. Docker recommends a minimum of 2 GB of RAM to run the Linux version of the Docker Engine, before adding the constraints imposed by the apps it runs.


Ruby on Rails is not known for being very RAM-efficient, but this is just me speculating.


Awesome, thank you for the information.



