If you are replacing a performant property with a slow network call, you are being negligent if you aren't reviewing all the callers to make sure that is okay.
In a typical ORM, when you do say $Model->DateTime = 1732046457; the __set is actually checking the value, seeing that it's an integer, and treating it as a unix timestamp. But when you run save(), the query converts that to a Y-m-d H:i:s string (for MySQL/MariaDB). None of this happens until you run save(), when it builds the query and makes the network call; until then it's just storing everything in memory.
But you might want to support string date-times and the PHP DateTime object as well. So a typical well-designed ORM converts multiple PHP types into a SQL string for saving. That's what the historical __set and __get are all about. Most ORMs call this "mapping", and a very well-designed modern one like Doctrine will actually have mappings available for different database platforms.
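A minimal sketch of that idea (hypothetical class, not any particular ORM's actual code):

    class Record {
        private array $attributes = [];   // staged values, only flushed on save()

        public function __set(string $name, $value): void {
            // Map several PHP types onto one canonical SQL datetime string.
            // (A real ORM would first check the column's declared type; simplified here.)
            if ($value instanceof DateTimeInterface) {
                $value = $value->format('Y-m-d H:i:s');
            } elseif (is_int($value)) {
                $value = date('Y-m-d H:i:s', $value);   // treat ints as unix timestamps
            }
            $this->attributes[$name] = $value;          // no query happens here
        }

        public function save(): void {
            // Only at this point is the INSERT/UPDATE built and the network call made.
        }
    }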
Some lightweight ORMs don't require you to define all the properties ahead of time. They pull the field names from the table and generate the model record on the fly for you, which generally lets you prototype really fast; Laravel's Eloquent is known for this. It's also useful for derived SQL fields when you use custom queries or join from other tables. It's also kinda fun to do it for properties backed by a method, as in $record->isDirty running isDirty() under the hood for you. All of this can be documented with PHPDoc, and static analyzers handle it just fine.
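Continuing the sketch above, a __get that lets $record->isDirty call isDirty() under the hood might look something like this (again hypothetical, not Eloquent's actual implementation):

    public function __get(string $name) {
        // Hypothetical: expose isDirty() as $record->isDirty, otherwise fall
        // back to whatever column data was hydrated from the table.
        if (method_exists($this, $name)) {
            return $this->$name();
        }
        return $this->attributes[$name] ?? null;
    }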
A method is supposed to be an action and a property is supposed to be data. So I don't see the desire to disguise setting data as a "setting method" rather than using the syntax of assignment.
> A method is supposed to be an action and a property is supposed to be data.
I agree! That's why it's wild to allow a setter to do literally anything.
You aren't just setting a property, there is a conversion happening under the hood.
And the reason I hate it is that code that appears to be infallible, isn't. It can have arbitrary side effects, raise exceptions, have unbounded runtime, etc.
There are many ways to make code misleading. You can write a method called plus and have it do multiplication. You can write a method that looks safe but does something dangerous. Every language relies on programmers exercising judgement when writing the program.
In a lot of contexts you have something that requires a bit of code but really does behave like a property access, where it's more misleading to make it look like a method call than to make it look like a data access. E.g. using an ORM hooked up to SQLite embedded in the program. Or accessing properties of objects that you're using an EAV system or array-of-structs layout to store efficiently in memory. Or a wrapped datastructure from a C library that you're binding.
Of course if you make something look like a property that doesn't behave like a property then that's confusing. Programmers have to exercise judgement and only make things look like properties when they behave like properties. But that's not really any different from needing to name methods/objects/operators in ways that reflect what they do.
It's abstraction. You're not supposed to care that the setter is doing anything. The class is providing you an interface -- what it does with that interface is not your concern. I hate to quote Alan Kay, but all objects should just respond to messages. Whether that message corresponds to a method or a property is pure semantics.
I sometimes use getters and setters to provide backwards compatibility. What was just maybe a simple data field a decade ago doesn't even exist anymore because some major system has changed and we aren't rewriting every single application when we can provide the values that they need easily enough.
If you know that setters exist then you already know that the code can do more. It's not a huge mental leap from property setting to method calls. You should never assume anything is infallible. I don't think classes should even expose raw fields.
It's a matter of OOP modeling. Object methods are better reserved for performing actions with side effects, or complex logic or calculations, and not for getting state or simply setting public properties; and as a caller, I don't really care about the implementation details of (in my above example) getting the last forums post, I just care that it's a property I can access. (Maybe it came from a cache and not the database? Maybe it was set earlier in the script? I don't care.)
Putting it behind a getter doesn't "hide" control flow. It just makes for a cleaner interface from the caller's perspective.
FWIW, I almost never use setters. Getters are much more useful, especially for lazy-loading basic properties.
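For example, a lazy-loading getter might look roughly like this (hypothetical names, sketch only):

    private ?ForumPost $lastForumPost = null;

    public function getLastForumPost(): ForumPost {
        // Load on first access, then serve the cached value; the caller never
        // knows (or cares) whether a query just happened.
        if ($this->lastForumPost === null) {
            $this->lastForumPost = $this->db->findLastPostByUser($this->id);
        }
        return $this->lastForumPost;
    }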
No, it's a long deprecated JVM that is effectively abandoned. But it's still a good minimum target as all of its successors tend to be backwards compatible. But you'd typically use one of the more recent LTS versions.
Of course there are quite a few crypto libraries out there, including several JWT related ones. A challenge in that space is that there have of course been some platform changes over the years. Not all of the newer algorithms are supported in older JDKs. And some of the alternatives are a bit complicated in terms of dependencies. I've had some issues with some of Google's libraries dragging in everything and the kitchen sink in terms of dependencies and then causing conflicts with my other dependencies when one of those dependencies changes their API.
This one looks like it builds on Bouncy Castle and not much else, which explains how it's not really that dependent on what comes with the JDK: Bouncy Castle provides its own implementations for a lot of the popular crypto stuff.
The new code was written by a different department, so I'm a little hazy on the details, but I doubt it. It's a 25-year-old line-of-business app, and it's in violation of every guideline, rule and law it touches, and only alive because it got grandfathered through various certifications and audits.
99% of the work was redesigning the business process it supports to be in compliance, and then writing a new LOB app that takes inspiration from the old one, but works with the new process.
You might be able to use AI for the remaining 1%: the mindless, pointless boilerplate code Java requires for no good reason.
Trust me, you don't want that job :D I've been fighting to get our services to Java 21 from Java 8/11 for the last year or so, I've been partially successful but our main API (and 1/2 monolith) is still a dropwizard app on Java 11
Often it is significantly cheaper to pay for a longer term supported release of an older Java version than it is to upgrade your application to use a later release. So yeah, kinda?
It brings a new layer of complexity, which means more surface area for bugs and vulnerabilities.
It’s often easier for developers to write a script using standardized tooling than dig into deep configuration docs for a complex application you’re not familiar with, but that’s where the benefits stop. Configuring built-in functionality makes sense for the same reason using an existing framework/os/etc authentication system makes sense. It often seems like way more effort to learn how to use it than rolling your own simple system, but most of that complexity deals with edge cases you haven’t been bitten by yet. Your implementation doesn’t need to get very big or last very long before the tail of your pipeline logic gets unnecessarily long. Those features don’t exist merely because some people are scared of code.
Additionally, if you’re just telling the application to do something it already does, writing code using general purpose tools will almost certainly be more verbose. Even if not, 10 to 1 is fantastically hyperbolic. And unless you write code for a living— and many dev ops people do not— defining a bunch of boolean and string values to control heavily tested, built-in application functionality (that you should understand anyway before making a production deployment) requires way less mental overhead than writing secure, reliable, production-safe pipeline code that will then need to be debugged and maintained as a separate application when anything it touches gets changed. Updating configuration options is much simpler than figuring out how to deal with application updates in code, and the process is probably documented in release notes so you’re much more likely to realize you need to change something before stuff breaks.
This is a hilarious take given the overwhelming number of outages that are caused by "bad config".
If you can't code, then yeah, I bet config is easier. But as a person who codes every day, I much prefer something that I can interact with, test, debug, type check, and lint _before_ I push to prod (or push anywhere, for that matter).
Ok… so because config outages do happen, that invalidates the points I made? No. So, to use your rhetorical technique, that argument is hilarious given the overwhelming number of outages caused by coding errors.
I’ve been writing code for about 30 years, worked in systems administration for a decade, and worked as a back-end web developer full time for over a decade. I’ve dealt with enough code as business logic, code as config, and parameter configuring to understand that errors stem from carelessness and/or complexity, which is often a result of bad architecture or interface/API design. The more complex something is, the less careless someone has to be to screw something up. Adding logic to a config unambiguously adds complexity. If you haven’t lost hours of your life to complex debugging only to find it was the wrong operator in an if statement or the like, you’re not a very experienced developer. That’s that. You can have all the machismo you want about your coding skills but those are unambiguous facts.
Developers doing their own ops and systems work have perennially been a liability for system-level architecture, stability, and security from multiuser computing’s inception. That’s why mature organizations have subject matter experts that take care of those things who know that a flat file of parameters and good docs is a whole lot more palatable when the pager goes off at 2am than a 100 line script the mighty brain genius developer made because they wanted to do it “the easy way that verysmart people know how to do” rather than learning how the config system worked.
What I don't understand about 401k is why there's such a strict limit on personal contributions, but the employer can contribute significantly more? It's all compensation one way or another, so why shouldn't I be able to allocate my compensation how I please?
The IRS doesn't want 401k to be a tax-avoidance scheme for the rich so individual plans are heavily capped. But you should have the same cap as your employer in a plan they administer.
I mean I absolutely detest Peter Thiel and all, but I don't get why you think having and using a Roth IRA (which is, BTW, not a 401k) is a loophole? I max mine out every year and you should too. You should also recognize that it is to your tax advantage to make your riskiest investments with your Roth account, because sometimes high risk brings high reward, which is then untaxed reward, which is the entire point of why the government both incentivized the creation of Roth accounts and limited the yearly contribution to a small amount to begin with. It's not gaming the system, it's the system functioning as intended.
Not trying to be a smartass but the rich have quite enough other very profitable ways to avoid being taxed, where bribing lawmakers is only one of them.
The idea is to replace pensions. An ideal structure as envisioned when these were thought up was probably a fixed size contribution from the company plus substantial matching. That didn’t happen and the things are kinda failures absent actual mandates for employer contributions.
There are two reasons you might not want all retirement savings money to be wholly in control of the saver:
1) Some folks are simply really bad at saving, which ends up being rough for others around them and for society, not only affecting them. This reason tends to rub some folks the wrong way on principle, so they may prefer to disregard it, but it is true as far as it goes (principle aside) [edit: I mean doing anything about this for this reason rubs some folks the wrong way, not that they disagree it’s a real phenomenon]
2) Money directly available to people is freed up for rivalrous zero-sum spending. Think: bidding up scarce resources for your kids, like good schools (which can mean housing). In a world where 100% of comp is employee-directed, this punishes responsible savers.
Regarding point #1, Singapore uses mandatory retirement savings. If you're pedantic, it's not much different from a tax, but functionally it's more like workers are mandated to pay a certain percentage into their 401k.
There's some pressure towards matching, in that there are penalties if they find only "key" or "highly-compensated" employees have a lot matched or choose to contribute a lot to the plan.
Income contributed to a 401k is not taxed at the time of earning. Not putting a limit on contributions would mean you could pay very little income tax. A wealthy person making 200k a year could contribute 100k to their 401k and more than halve their total income tax paid.
But OP's point is that the 200k a year person could instead negotiate a 100k salary with an additional 100k into the 401k (by the employer) and that would be allowed.
Edit: Although it does appear there is a cap to the employer's contribution ($69,000 for 2024 [1]). But I think the general point still stands, why bother to have employer and employee limits.
Well… they could negotiate $70k/yr from the employer, and that would be allowed (in 2025) without further employee contributions (as that hits the total max contribution limit).
A $200k/yr employee with no employer contribution would be limited to $23,500 contribution (in 2025 limits).
[edit] actually that’s not quite true, though, because IIRC contribution rules have to be uniform, to avoid horse-shit like maxing out upper management at $70k and contributing nothing for lower-level employees, limiting them to $23.5k tax-advantaged no matter how hard they try to save, i.e. to prevent the whole damn scheme from benefiting mostly the already well-off more than it’s probably going to regardless.
>A wealthy person making 200k a year could contribute 100k to their 401k and more than halve their total income tax paid.
They'll pay it when they take it back out. At best they're saving the difference between the bracket rates in exchange for letting their money slosh around the markets for their working life.
If you are self-employed, you can make the normal employee contributions and your employer (also you) can make additional contributions of roughly 25% of salary. For 2025 this would be a total of $70k instead of the normal $23.5k.
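To put rough numbers on it (illustrative only; the $186k salary is a made-up figure chosen so both sides max out, using the 2025 limits quoted in this thread):

    deferral = 23_500            # 2025 employee deferral limit
    salary = 186_000             # hypothetical self-employment income
    employer = 0.25 * salary     # "roughly 25% of salary", per the comment above
    total = deferral + employer  # 70_000.0, i.e. the 2025 combined cap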
In all those cases, it sounds like the company would actually suffer the consequences of their prior mismanagement (compared to today where mostly just employees suffer from bad management decisions).
Yes, that means some companies might go under when they could have saved themselves by mass layoffs. I'd be okay with that trade.
Yes, that means growth might slow down to more reasonable levels. I'd be okay with that trade. Europe isn't booming economically like the US, but if you've ever traveled there, their quality of life seems perfectly fine, and costs are much lower.
> but if you've ever traveled there, their quality of life seems perfectly fine
I'm not sure if traveling there is much of an indicator of anything. Doing business there over the course of many years might be a very basic table stakes start to get any idea of what is happening. Even then it will have large blind spots. Most folks traveling to Europe are also traveling to the richest parts of the richest countries and ignoring the rest.
Inertia is a hell of a drug. For how much longer can western Europe stagnate and continue to fall behind the rest of the world little by little? There are bright spots, but those seem to be becoming fewer and farther between. Talk with the younger generations and you may start to get different answers than you expect.
The US system certainly isn't how I'd design things today, but I very much would avoid what the EU is seemingly running headlong into. How much of that has to do with worker protection laws is certainly highly debatable though.
I've grown rather fond of bash in my current role. I work mainly on developer tools and CI pipelines, both of which mean gluing together lots of different CLI tools. When it comes to this kind of work I think it is quite hard to beat the expressiveness of shell scripting. I say this as a former hater of bash and its syntax.
Much credit to copilot and shellcheck, which have made complex bash even more of a write-only language than it already was.
> I've grown rather fond of bash in my current role. I work mainly on developer tools and CI pipelines, both of which mean gluing together lots of different CLI tools. When it comes to this kind of work I think it is quite hard to beat the expressiveness of shell scripting.
Every time I have to express logic in YAML, I miss shell. Shell’s really not great, and it could be improved upon (my vote? Tcl), but it’s so much better than where the industry is these days.
Sure, you can generate the YAML file, but it’s not generally possible to trigger that from the system which wants the YAML file, and you don’t get to integrate with the system’s configuration. Often these systems honestly think that their approach of logic-templated YAML is preferable to a script — for example Helm’s templated Kubernetes YAML.
Few things grind my gears worse than _templated_ YAML. It's the equivalent of building up a JSON object using only string concatenation. Most people would raise their eyebrows at that, yet templated YAML is seen as very normal.
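To make the analogy concrete (illustrative Python, not any particular tool):

    import json

    name, replicas = "web", 3

    # The templated-YAML approach, translated to JSON: paste strings together
    # and hope the quoting, escaping and nesting come out valid.
    blob = '{"name": "' + name + '", "replicas": ' + str(replicas) + '}'

    # Building the structure and serializing it: the library guarantees validity.
    blob = json.dumps({"name": name, "replicas": replicas})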
I think in part it's a skills mismatch - a lot of devops/sysadmin type folks I encounter, while very talented, are not prolific coders. So code-forward solutions like jsonnet, starlark, dhall, nix, etc. are rather unfamiliar.
It doesn't help that all but one of those languages mentioned are odd little functional languages, increasing the familiarity gap even further.
Bash actually has more warts than competing shells because of its historic stance.
My bugbear is that "alias p=printf" works well in any POSIX shell script, including bash when it is invoked as #!/bin/sh - but when called as #!/bin/bash, the alias (used in a script) fails with an error.
While the Korn shell managed to evolve and comply with the POSIX standard, bash decided to split the personality, so one solution to the above alias failure is to toggle POSIX mode.
Bash was forced to do this, to keep a decade of shell scripts that were written for it working. Pity.
The standard for the POSIX shell looked very hard at Korn, and credits it. Bash is not mentioned.
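A minimal repro of that bugbear, plus the workaround mentioned above (sketch only):

    #!/bin/bash
    # In a non-interactive bash script the alias below is defined but never
    # expanded, so the later "p" dies with "command not found"...
    shopt -s expand_aliases   # ...unless you enable expansion (or: set -o posix)
    alias p=printf
    p 'hello\n'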
Thanks, I had no idea. I guess I've never used aliases in scripts, but I would've assumed that they'd just work the same as in interactive mode. Good to know.
Functions don't work everywhere. Bash functions only work in the current shell context unless exported via an `export -f myfun' statement in between the function declaration and downstream sub-shell usage.
Working example:
    pzoppin() {
        printf 'echo is for torrent scene n00bs from %s\n' "$*"
        # Re-print the args when the function returns or the shell exits/is signalled.
        trap "printf '%s\n' \"$*\"" RETURN EXIT SIGINT SIGTERM
    }
    export -f pzoppin

    # NUL-delimited items, one bash sub-process per item:
    echo -e 'irc\0mamas donuts\0starseeds' \
        | xargs -0 -n 1 -I {} /usr/bin/env bash -c '
            echo hi
            pzoppin "$*"
            echo byee
        ' _ {}
The above will fail miserably without the magic incantation:
`export -f pzoppin'
Why'd they design an otherwise perfectly usable, mapless language without default c-style global functions? :)
I typically just give up on weird corners of shell when I find a working version of the same thing.
For instance -- why would you use "alias" when you can make a function? The syntax is a little weird with functions, but it's a lot more clear what's going on.
The same goes for "test" vs the seeming magic of [, where it looks like [ is language syntax (it's a single character!) when in fact it's just another command (a builtin these days, but /usr/bin/[ still exists) that communicates its result via exit status like anything else (like grep or false).
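A tiny illustration of that point:

    # "[" is a command whose last argument must be "]"; like test, grep or
    # false, it reports its result purely through its exit status.
    if [ "$answer" = yes ]; then echo ok; fi
    if test "$answer" = yes; then echo ok; fi   # exactly the same thing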
    alias MY_VAR_MODIFIER='local foo=bar'
    MY_VAR_MODIFIER () { local foo=bar; }
Calling the alias by name will set foo in the caller's scope, while calling the function does nothing (local foo is local to the function and never leaves its scope).
It also works similarly for the positional parameters ($@): you can do `set --` stuff in an alias and it operates on the caller's scope.
Aliases on most shells also don't need to be fully valid _before_ expansion. You can alias a compound command:
    alias foreach='for EACH'
    foreach in $MYVAR   # perfectly valid for most shells
Only ksh93 will complain about it; it requires aliases to be complete, valid shell programs.
Finally, alias calls don't appear in xtrace (set -x). Only the final expansion will appear.
Gluing together tools with shell scripts is a significant cause of CI failures in my experience. There's no reason to do it. Use a real language - at least Python, but my preference is Deno because it's not dog slow and you don't have to deal with venv.
The subprocess.run args argument is a list of strings, not a single string with whitespace-delimited parts, so you're all good quoting-wise. This is now effortlessly a lot better than what you get with bash in terms of how brittle things are!
Supply check=True and the script will barf on subprocess failure. Another useful upgrade.
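A minimal sketch (the git command is just an example):

    import subprocess

    # args is a list, so there's no word-splitting or quoting to get wrong;
    # check=True raises CalledProcessError on a non-zero exit status.
    subprocess.run(["git", "status", "--porcelain"], check=True)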
Take a look at https://sh.readthedocs.io/en/latest/ for a very usable solution to easily run other processes in Python. It has made my life a lot easier whenever I've had to migrate a shell script to Python.
Shellcheck will prevent you from making any egregious whitespace errors. You can dynamically build up an arguments list using bash arrays. Errexit and pipefail options prevent you from ignoring errors.
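A sketch of that style (rsync and the DRY_RUN variable are just placeholders):

    #!/usr/bin/env bash
    set -euo pipefail               # errexit, nounset, pipefail

    args=(--archive --delete)       # build the argument list as an array
    if [[ "${DRY_RUN:-}" == "1" ]]; then
        args+=(--dry-run)
    fi
    rsync "${args[@]}" src/ dest/   # expands safely, one word per element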
I agree that some work is better suited to a real programming language, however the driving force isn't the problems you speak of, which are trivial, it's when the logic/control flow of the overall problem becomes more complex.
I get where you are coming from, and I certainly do expect actual facts, data, and reasoning to be a part of any serious postmortem analysis. But those will almost always be in relation to a very specific circumstance. I think there is still room for generalized parables such as this article - otherwise, we would be reading a postmortem blog post, which are also common here and usually do contain what you are asking for.
I think you can generalise without resorting to silly games like the article does. I gave some examples in a sibling comment that are high level enough to give an idea of the types of things I’d think about, without locking in to a specific incident I was part of.
> Occasionally you’ll have a unicorn situation where there is actually a relatively simple fix, but those are few and far between.
Perhaps we have different backgrounds, but even in late stage startups I find there is an abundance of low hanging fruit and simple fixes. I'm sure it's different at Google, though.