Swapping in a multi-vendor world also implies standardization, and it's too early in this technology's life to standardize; that would stifle progress.
Swap stations would also be far more expensive to build and manage inventory for, and would carry higher liability for whatever automated mechanism moved the thousand-pound packs around when it, say, went out of alignment and crushed somebody's car frame.
Yeah, totally - I found this to be less of a problem in cases like Tesla's, where it's all proprietary anyway. You're right that in a world of many standards, it becomes an issue.
If CSV were being used just to exchange data with Excel, we probably wouldn't be using CSV. Many systems neither need nor know that ="01" should be treated as the string "01".
If Excel were the only intended consumer, .xlsx would be a preferable file format. At least it's mostly unambiguous.
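To make the parent's point concrete, here is a sketch of my own (the file name and contents are made up, not from any particular system): the only consumer that attaches meaning to the ="01" escape is Excel; everything else just sees five literal characters.

    // Sketch only: write the Excel "formula escape" into one CSV field,
    // then read the file back the way a generic CSV consumer would.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        {
            std::ofstream out("codes.csv");  // hypothetical file name
            out << "name,zip\n";
            // Excel shows this cell as the text "01"; any other parser
            // just sees the literal five characters ="01".
            out << "alice,=\"01\"\n";
        }

        std::ifstream in("codes.csv");
        std::string line;
        while (std::getline(in, line)) {
            std::cout << line << '\n';  // prints the raw ="01", escape and all
        }
    }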
That's a good point. But then you're not benchmarking C++ as a distinct language. So what would sufficiently distinguish a C++ program from a C program? Let's assume it's not just minor incompatibilities introduced to prevent compilation by a C compiler.
They must have used some definition that isn't made explicit in the paper, but you can see in this code sample that the author used various C++ standard data types (std::string, std::array), iterators, classes, and concurrency (std::thread). I'm no judge of C++ style, but perhaps it's "C++ as a C++ developer circa 1997 would have written it".
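To be clear, this isn't the paper's code - just my own rough sketch of the kinds of constructs I mean (std::string, std::array, iterators, a class, std::thread), none of which a C compiler would accept; the Greeter class is invented for illustration.

    // Sketch only, not the paper's code: the sorts of C++ constructs
    // mentioned above, gathered into one small program.
    #include <array>
    #include <iostream>
    #include <string>
    #include <thread>

    class Greeter {
    public:
        explicit Greeter(std::string name) : name_(std::move(name)) {}
        void greet() const { std::cout << "hello, " << name_ << '\n'; }
    private:
        std::string name_;
    };

    int main() {
        std::array<Greeter, 2> greeters{Greeter("one"), Greeter("two")};

        // Walk the array with iterators and hand each greeting to its own thread.
        std::array<std::thread, 2> threads;
        auto t = threads.begin();
        for (auto it = greeters.begin(); it != greeters.end(); ++it, ++t) {
            *t = std::thread([it] { it->greet(); });
        }
        for (auto& th : threads) {
            th.join();
        }
    }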
Ha! I picked up the original project (dosemu1) from an abandoned skeleton created by Matthias Lautner. I was a larval programmer and Linux was very much in its infancy. I really wanted to run Civilization on my one computer without keeping the DOS partition around -- running Linux 24/7 was a point of pride back then. While I got quite a few programs running well under dosemu, I never had much time to play Civilization after picking up the project.
Thankfully, when I got out of my depth and left to join a fledgling ISP (MindSpring) as its first engineer, the worthy James MacLean took over and turned it into a solid system. Not sure why I'm not listed in the THANKS file, but perhaps it's revenge for the poor code quality I left behind. :-)
It was one of the most enjoyable projects of my career, despite or perhaps because of how little I knew and how much I had to learn.
Can you expand on this: "AWS healthchecks each kubernetes node, but not your pods themselves"?
Are you talking about a keepalive connection to an unhealthy pod that gets reused for multiple requests? If I understand you correctly, the failure modes are: a) the ALB keeps sending requests down an established keep-alive HTTP connection that terminates in an unhealthy pod, because what it health-checks is the node, and the node looks healthy as long as it can route traffic to some other, healthy pod; and b) the perceived health of an established keepalive connection is that of the node rather than of the destination pod, so a node becoming unhealthy can cause the ALB to unnecessarily terminate a keepalive connection to a pod that is actually fine.
We had to switch to target-type=instance because of issues with pods not being deregistered. I'd prefer to use target-type=ip, but preventing 500s on rollouts seemed to require a fair bit of testing and tuning with a very specific approach, e.g. introducing a longish delay on pod termination with a lifecycle hook and using the pod readiness gate support recently added to alb-ingress-controller.
You've got it exactly right. Your problem of pods not being deregistered is real, but it also has a quick fix: the default deregistration delay for ALBs is 300 seconds, while for Kubernetes pods terminationGracePeriodSeconds defaults to 30 seconds. This means your load balancer can keep trying that pod for four and a half minutes after it's been hard shut down.
Fewer features and fewer lines of code, and those LOC are written in Go, which is the language CockroachDB is written in and, presumably, the one their team and tooling are best optimized for. It's a reasonable thesis.
Because EKS supports custom launch templates? Good luck trying to finagle that into supporting the exact Kubelet flags that you want to enable, while staying abreast of upstream updates so that your cluster doesn't break when AWS tries to keep it up-to-date. Not anywhere close to a simple "extra_kubelet_flags: array[text]" kind of field.
That may be more likely with limits, but it doesn’t require a limit. I’ve had lots of fun with that in Elasticsearch pods with no limit. And then you get to enjoy a nice cascading failure.