This is most likely because it is really bad at resetting the blinker when the steering wheel is straight’ish again.
Extremely annoying, as any other car is much more sensitive (and sensible).
In a Tesla, an on-ramp onto a straight highway is rarely enough to cancel the blinker, something I’ve never experienced in any other car.
Couple this with, IMO, the best baseline speaker system of any manufacturer… I’ve been driving with the blinker on for several kilometers at times!
Even if we still make a mess, I think centralizing the mess is better than distributing it - what I mean is that polluting cities where millions sleep, eat, drink and breathe will probably be worse, in net effect, than containing energy pollution to select places.
Running EVs in densely populated regions is probably a lot better for the population on the whole, even if net pollution stayed the same, IMO.
Still, no EV at all is even better, but we’ve created a world where transport is often required, so one step at a time, I guess.
Using AI doesn’t really change the fact that keeping ones and zeroes in check is like trying to keep quicksand in your hands and shape it.
Shaping a codebase is the name of the game - this has always been, and still is, difficult. Build something, add to it, refactor; the abstraction doesn’t sit right, refactor; semantics change, refactor; and so on.
I’m surprised that so few seem to get this. Working in enterprise code, many codebases 10-20 years old could just as well have been produced by LLMs.
We’ve never been good at paying down debt, and you kind of need a bit of OCD to keep a codebase in check. LLMs exacerbate a lack of continuous moulding, as iterations can be massive and quick.
I was part of a big software development team once, and the necessity I felt there - being able to let go of the small details and focus on the big picture - is even more important when using LLMs.
The problem is most likely not writing the actual code, but rather understanding an old, fairly large codebase and how it’s stitched together.
SO is (was?) great when you were thinking about how nice a recursive reduce function could replace the mess you’ve just cobbled together, but language x just didn’t yet flow naturally for you.
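As a toy illustration of that kind of question (the names and the summing example are made up here, not from the comment), the hand-rolled loop next to the recursive reduce you might have been fishing for on SO:

```typescript
// Hand-rolled version you might cobble together first.
function sumLoop(values: number[]): number {
  let total = 0;
  for (const v of values) total += v;
  return total;
}

// Recursive reduce: base case on empty input, recurse on the tail.
function sumReduce(values: number[]): number {
  const reduce = (xs: number[], acc: number): number =>
    xs.length === 0 ? acc : reduce(xs.slice(1), acc + xs[0]);
  return reduce(values, 0);
}

console.log(sumLoop([1, 2, 3, 4]));   // 10
console.log(sumReduce([1, 2, 3, 4])); // 10
```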
The argument is perhaps ”enshittification”, and that becoming reliant on a specific provider or even set of providers for ”important thing” will become problematic over time.
As Go feels like a straitjacket compared to many other popular languages, it’s probably very suitable for an LLM in general.
Thinking about it - was this not the idea of Go from the start? Nothing fancy, to keep non-rocket scientists away from foot-guns, and to have everyone produce code that everyone else can understand.
Diving into a Go project you almost always know what to expect, which is a great thing for a business.
Same here, but Azure. About 90% saved, with a very similar stack.
It is a big-cloud play to make enterprises reliant on competency in their weird service abstractions, which slowly erodes the quite simple ops story an enterprise usually needs.
Aside from using Temporal to schedule the workflows, we have a full-code TypeScript CI/CD setup.
We’ve been through them all, starting with Jenkins and ending with Drone, until we realized that full-code makes it so much easier to maintain and share the work across the whole dev org.
No more YAML, code generating YAML, product quirks, Groovy or DSLs!
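To give a flavour of what ”full-code” looks like in practice, here is a minimal sketch of a pipeline expressed as a Temporal TypeScript workflow - the activity names, timeouts and retry settings are invented for illustration, not our actual setup:

```typescript
import { proxyActivities } from '@temporalio/workflow';

// Activity signatures assumed for the example; the real implementations
// (image builds, test runners, deploy scripts) live in a Temporal worker.
type PipelineActivities = {
  buildImage(commitSha: string): Promise<string>;
  runTests(imageTag: string): Promise<void>;
  deploy(imageTag: string, env: string): Promise<void>;
};

const { buildImage, runTests, deploy } = proxyActivities<PipelineActivities>({
  startToCloseTimeout: '15 minutes',
  retry: { maximumAttempts: 3 },
});

// The pipeline itself is plain TypeScript: easy to review, test and share
// across the dev org, instead of YAML, Groovy or a vendor DSL.
export async function releasePipeline(commitSha: string): Promise<void> {
  const imageTag = await buildImage(commitSha);
  await runTests(imageTag);
  await deploy(imageTag, 'production');
}
```

Temporal then handles retries, history and scheduling, while the pipeline logic stays ordinary code.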
Of all the PaaS providers, Azure has the worst abstractions and services.
In general I think it’s sad that most buy in to consuming these ”weird” services, and that there are jobs to be had as cloud architects and specialists.
It feeds bad design and loose threads as partners have to be kept relevant.
This is my take on the whole enterprise IT field though!
At my little shop of 30 or so developers, we inherited an Azure mess, built abstractions for the services we need in a more ”industry standard” way in our dev tooling, and moved to Hetzner after a couple of years.
A developer here knows no different, basically - our tooling deals with our workflows and service abstractions, and these shouldn’t change just because of a new provider.
1/10th of the monthly bill, with part of the money spent on building the best DX one can imagine.
Great trade-off, IMO!
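A rough sketch of the kind of provider-agnostic abstraction I mean (the interface and names are invented for the example, not our actual tooling) - application code depends only on the interface, so moving from Azure Blob to an S3-compatible bucket on Hetzner means swapping the implementation, not the callers:

```typescript
// The only thing application code is allowed to import.
export interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | undefined>;
}

// Trivial in-memory implementation, handy for tests and local dev.
// Production would have e.g. an AzureBlobObjectStore and an S3ObjectStore
// class implementing the same interface.
export class InMemoryObjectStore implements ObjectStore {
  private objects = new Map<string, Uint8Array>();

  async put(key: string, data: Uint8Array): Promise<void> {
    this.objects.set(key, data);
  }

  async get(key: string): Promise<Uint8Array | undefined> {
    return this.objects.get(key);
  }
}
```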
Only two cases come to mind for using big cloud:
- really small scale: MVP style
- massive global distribution with elasticity requirements.
Two outliers, when looking at the vast majority of companies out there.