Forgejo Actions is what Zig has migrated to. It's very similar to GitHub Actions; the downside is that you inherit some questionable design choices, but the big upside is that migration is super easy. While they don't target 1:1 compatibility, things are similar enough that you basically only need to tweak workflow files very slightly. Our experience so far is that it fixes most of our serious problems with GitHub Actions; in particular, their runner software is significantly easier to deploy and configure, has much better target support (GitHub's runner is essentially impossible to use outside of x86_64/aarch64 Linux/Windows/macOS; we tried to patch it to support riscv64-linux and got stuck on some nonsensical problems on GitHub's side!), and actually accepts contributions & responds to issues. My issues with the GitHub Actions backend & web interface (of which I have many) are pretty much all gone too, with no new issues taking their place.
I don't like Azure DevOps or its pipelines (the YAML ones, not the classic drag-and-drop ones that are now disabled by default), but I like it a lot more than GitHub Actions. I doubt we'd ever really use GitHub Actions for security reasons, but I do prefer the explicit behaviors and structured templating in Azure compared to how GitHub Actions tries to solve change management. I can totally see why GitHub Actions would make more sense if you don't have an enterprise-organisation-type AD/Entra + Azure setup, though.
Wasn't aware that there was noticeably higher latency between availability zones in the same AWS region. Kinda thought the whole point was to run replicas of your application in multiple zones to achieve higher availability.
They also charge you something like 1c/GB for traffic egress between the zones. To top it off, there are issues with AWS load balancers in multi-zone setups. Ultimately I've come to the conclusion that large multi-zone clusters are a mistake. Run several single-zone disposable clusters if you want zone redundancy.
At $WORK, traffic between zones ($REGION-DataTransfer-Regional-Bytes) is the second largest cost on our AWS bill, more than our EC2/EKS cost. It adds up to mid six figures each year. We try to minimize it where that's easy to do: for example, our EKS pods perform reads only against RDS read replicas in the same AZ, but you're out of luck for writes to the primary instance. Reducing it in any significant way can eat up a lot of time, and for us the cost is enough to be painful but not enough to dedicate an engineer to fixing.
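For illustration, the AZ-local read routing boils down to something like this (simplified sketch, not our actual code; it assumes the pod can reach IMDSv2 and has rds:DescribeDBInstances permission, and `source_db` plus the function names are made up for the example):

    # Sketch: prefer the RDS read replica in our own AZ, fall back to any replica.
    import urllib.request
    import boto3

    def current_az() -> str:
        # IMDSv2: grab a session token, then ask for this instance's AZ.
        token_req = urllib.request.Request(
            "http://169.254.169.254/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        token = urllib.request.urlopen(token_req, timeout=2).read().decode()
        az_req = urllib.request.Request(
            "http://169.254.169.254/latest/meta-data/placement/availability-zone",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(az_req, timeout=2).read().decode()

    def reader_endpoint(source_db: str) -> str:
        # List read replicas of the primary and prefer one in our own AZ.
        rds = boto3.client("rds")
        replicas = [
            db for db in rds.describe_db_instances()["DBInstances"]
            if db.get("ReadReplicaSourceDBInstanceIdentifier") == source_db
        ]
        local = [db for db in replicas if db["AvailabilityZone"] == current_az()]
        chosen = (local or replicas)[0]  # raises IndexError if there are no replicas
        return chosen["Endpoint"]["Address"]

In practice you'd cache the resolved endpoint and re-resolve on failover rather than hitting the RDS API per request, but that's the gist of keeping reads inside the AZ.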
This is precisely how Amazon's bread is buttered. An outage affecting an entire AZ is rare enough that I would feel pretty happy making all our clusters single-AZ, but it would be a fool's errand for me to convince management to go against Amazon's official recommendations.
I would LOVE to pitch something else I'm working on that solves exactly this problem in EKS: cross-zone data transfer.
It's a plugin that enables traffic redirection for any service that is using an IP in a given VPC. If you have, say, multiple RDS reader instances, it will attempt to use local-AZ instances first, but the other instances remain available if the local ones are non-functional. So you do not lose HA or failover features.
The plugin does not require any reconfiguration of your apps. It works similarly to Topology Aware Routing (https://kubernetes.io/docs/concepts/services-networking/topo...) in Kubernetes, but it covers services outside of Kubernetes, and it even works for non-Kubernetes setups as well.
This AZP solution is fine for services that have one IP or a primary instance, like the RDS writer instance. It does not work for anything that is "stateless" and multi-AZ, like RDS read-only instances or ALBs.
That was the origin of this solution. A client app had to issue millions of small SQL queries where the first query had to complete before the second could be made. Millions of ms add up.
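To put rough numbers on it (illustrative figures, not measured from that app): if each cross-AZ round trip costs ~1 ms more than an intra-AZ one, a strictly serial run of 1,000,000 queries picks up an extra ~1,000 seconds, i.e. roughly 17 minutes of pure network wait.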
The lowest possible latency would of course come from running the client code on the same physical box as the SQL server, but that's hard to do.
In my experience it has been relatively high variance – it does get as low as 0.5 ms, but can be 3-4 ms. That's close to an order of magnitude of difference, and can be the difference between a great and a terrible UX when you amplify it across many RPCs.
In general the goal should be to deploy as much of the stack in one zone as possible, and have multiple zones for redundancy.
> In general the goal should be to deploy as much of the stack in one zone as possible
Agree. There can be a few downsides one has to consider if you have to fail over to another zone. Worst case, there isn't sufficient capacity available when you fail over, because everyone else is asking for capacity at the same time. If one uses e.g. Karpenter, you should be able to be very diverse in the instance selection process, so that you get at least some capacity, though maybe not your preferred instance types.
I was surprised too. Of course it makes sense when you look at it hard enough: two separate DCs won't have the same latency as intra-DC communication. They might have the same physical wire speed, but physical distance matters.
As someone not currently using Cloudflare Workers, I'm not sure I want to build a worker and figure out how to interface with it through my existing application just to send email. What happened to SMTP?
Was just about to do a demo, but Google Meet was down. Tried to use Jitsi as a fallback, but couldn't log in because Firebase was down too. Ended up using a Slack Huddle, lol.
It's more than just a few - even more basic things like rate limiting or concurrency controls are gated behind Pro. It works extremely well, but I've been reluctant to use it in open source projects because there's quite a bit in there I'd need to rebuild.
I'm curious about your use case -- it seems weird (to me) to use it in an open source project unless it's some kind of turnkey full app -- is there a way to just release it and encourage people to bring their own Oban keys? That way it looks good for the Elixir ecosystem that it has found a way to support hybrid open source libraries, and it expands the Oban ecosystem.
I've been using Ash for a few side projects and recently started using it at the day job too. We're mostly using it for new functionality, with the Ash APIs alongside our existing ones, but planning on slowly moving older things over too. It's been working well so far, and it's been easy enough to use the escape hatches for anything weird we're doing.
Getting started was a bit tricky though - definitely recommend the Ash book there. It works a lot better than the documentation as an introduction.
I feel like my ideal would be something more hybrid. It's pretty rare that I have a table that I decide upfront should be columnar. It's a lot more common that I want occasional analytics-like queries on my regular tables to not take forever.
We initially set the rowstore as the default, but people wouldn't create columnstore tables and were confused about why performance wasn't improving. So we figured this was cleaner, but you always have the option to switch the default table type back.