Hacker News | gregfurman's comments

Have you heard of LocalStack's AWS emulator? [1] It runs in a Docker container and has a high-fidelity S3 service.

Disclosure: I'm a SWE at LocalStack.

[1] https://github.com/localstack/localstack
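For anyone who wants to try it from Go, here's a minimal sketch (assuming the default LocalStack gateway on localhost:4566 and the usual dummy "test" credentials) that points the AWS SDK for Go v2 at the emulator and creates a bucket:

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/credentials"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    func main() {
        ctx := context.Background()

        // LocalStack accepts any credentials; "test"/"test" is conventional.
        cfg, err := config.LoadDefaultConfig(ctx,
            config.WithRegion("us-east-1"),
            config.WithCredentialsProvider(
                credentials.NewStaticCredentialsProvider("test", "test", "")),
        )
        if err != nil {
            log.Fatal(err)
        }

        // Point the S3 client at the LocalStack gateway (default port 4566).
        client := s3.NewFromConfig(cfg, func(o *s3.Options) {
            o.BaseEndpoint = aws.String("http://localhost:4566")
            o.UsePathStyle = true // avoid bucket-subdomain DNS resolution locally
        })

        if _, err := client.CreateBucket(ctx, &s3.CreateBucketInput{
            Bucket: aws.String("demo-bucket"),
        }); err != nil {
            log.Fatal(err)
        }
        log.Println("bucket created against the local emulator")
    }

The same client code works against real S3 once the endpoint override is dropped.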


I like(d) LocalStack fine, but you've got the same kind of problem that Minio has.

> As a result of this shift, we cannot commit to releasing regular updates to the Community edition of LocalStack for AWS.

https://blog.localstack.cloud/the-road-ahead-for-localstack/


That’s fair. There’s still a free tier that you can use.

https://blog.localstack.cloud/the-road-ahead-for-localstack/...

But IMO, the LocalStack Community edition's S3 service is pretty stable, so I doubt there'll be much parity drift in the short to medium term.


Fluvio looks awesome!

Any chance you’re going to be reviving support for the Kafka wire protocol?

https://github.com/infinyon/fluvio/issues/4259


Folks have asked us about Kafka wire compatibility over the years. We had a project three years ago, which we archived. I think we have a case for reviving it in the near future.


I discovered this sometime last year in my previous role as a platform engineer managing our on-prem Kubernetes cluster as well as the CI/CD pipeline infrastructure.

Although I saw this mismatch between actual and assigned CPU causing issues, particularly CPU throttling, I struggled to find a scalable solution that would cover all Go deployments on the cluster.

Getting all devs to include the automaxprocs dependency was not exactly an option for hundreds of projects. Alternatively, setting every CPU request/limit to a whole number and then assigning that to a GOMAXPROCS environment variable in each k8s manifest was also clunky and infeasible at scale.

I ended up just setting this GOMAXPROCS variable for some of our more heavily multithreaded applications, which yielded some improvements, but I've yet to find a solution that applies to all deployments in a microservices architecture where CPU requirements vary widely from project to project.
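For reference, the dependency in question is Uber's automaxprocs; a minimal sketch of what each service would need (just a blank import, which sets GOMAXPROCS from the container's cgroup CPU quota at init):

    package main

    import (
        "fmt"
        "runtime"

        // Blank import: on init, automaxprocs reads the container's cgroup CPU
        // quota and sets GOMAXPROCS to match it (instead of the node's core count).
        _ "go.uber.org/automaxprocs"
    )

    func main() {
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0)) // 0 queries without changing
    }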


There isn't one answer for this. Capping GOMAXPROCS may cause severe latency problems if your process gets a burst of traffic and has naive queueing. It's really best to set GOMAXPROCS to whatever the hardware offers, regardless of your expectations about how much CPU time the process will use on average.
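A tiny sketch of that approach, explicitly sizing GOMAXPROCS to the node's logical core count rather than to the container's CPU quota:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Use every logical CPU the node exposes, regardless of any cgroup quota,
        // trading possible CPU throttling for burst headroom.
        runtime.GOMAXPROCS(runtime.NumCPU())
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }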


You could define a mutating webhook to inject GOMAXPROCS into all pod containers.
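A rough sketch of what the mutation could produce, assuming a Go webhook built on the Kubernetes API types: a JSON Patch that injects a GOMAXPROCS env var backed by the Downward API's resourceFieldRef on limits.cpu, so no application changes are needed. The patchOp type and gomaxprocsPatches helper below are illustrative names, not from any existing project:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // patchOp is a single RFC 6902 JSON Patch operation.
    type patchOp struct {
        Op    string      `json:"op"`
        Path  string      `json:"path"`
        Value interface{} `json:"value,omitempty"`
    }

    // gomaxprocsPatches builds a JSON Patch that adds a GOMAXPROCS env var to
    // each container, with its value derived from the container's CPU limit
    // (scaled to whole cores via the "1" divisor).
    func gomaxprocsPatches(pod *corev1.Pod) []patchOp {
        env := corev1.EnvVar{
            Name: "GOMAXPROCS",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    Resource: "limits.cpu",
                    Divisor:  resource.MustParse("1"),
                },
            },
        }

        var patches []patchOp
        for i, c := range pod.Spec.Containers {
            if len(c.Env) == 0 {
                // No env list yet: create it with our single entry.
                patches = append(patches, patchOp{
                    Op:    "add",
                    Path:  fmt.Sprintf("/spec/containers/%d/env", i),
                    Value: []corev1.EnvVar{env},
                })
            } else {
                // Append to the existing env list.
                patches = append(patches, patchOp{
                    Op:    "add",
                    Path:  fmt.Sprintf("/spec/containers/%d/env/-", i),
                    Value: env,
                })
            }
        }
        return patches
    }

    func main() {
        pod := &corev1.Pod{
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app"}},
            },
        }
        b, _ := json.MarshalIndent(gomaxprocsPatches(pod), "", "  ")
        fmt.Println(string(b))
    }

The webhook server itself would decode the incoming AdmissionReview, call something like this on the pod, and return the marshalled patch in AdmissionResponse.Patch with PatchType set to JSONPatch.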

