Positive-sum doesn't necessarily mean everyone wins. Just that in aggregate things improve.
In fact, for any significant policy change, there will be losers.
And there's a lot of very long-term systemic dysfunction, with powerful interests reinforcing that dysfunction, tilting positive returns toward themselves and negative returns onto others.
I can't comment on DEI, I'm not qualified there. I can comment on software eng culture the past twenty years, however.
My take is that we, collectively, pride ourselves on staying up to date with the latest and best practices. However, that staying up to date tends to produce a rather shallow understanding at best. It's as if we read a short summary of the best practice, then cargo-cult it everywhere, fully convinced that we're right because it is the current best practice.
The psychological intent is to outsource accountability and responsibility to these best practices. I'd argue that goal isn't always consciously undertaken. I'm not asserting malevolence, but more a reluctance to dig into the firehose of industrial knowledge that gets spewed at us 24/7.
I suspect this is not just confined to software dev. It's a sort of anti-intellectualism, ultimately. And it's hard to cast it as that, because I don't think we should tell people they're wrong for triaging emotional energy. But it also isn't right that we're okay with people generally checking out as much as possible.
yea, i agree — it’s definitely not just a software thing. good intentions don’t always translate into good execution.
i wonder: if/when AGI becomes real, could it help with writing better policies/laws? it would have a broader understanding of the issues and (hopefully) no bias, so it might be able to predict outcomes we can't
> wow.. our society really has a tendency to overcorrect regarding social issues
I don't agree. You're reacting to a one-sided, very partial critique of a policy change that no longer benefitted a specific group, where the only tradeoff was a hypothetical and subjective drop in the hiring bar. This complaint can just as easily be dismissed as members of the privileged group complaining over the loss of privilege.
The article is very blunt in the way they framed the problem: the in-group felt entitled to a job they believed was assured to them, but once the rules changed to have them compete on equal footing for the same position... that's suddenly a problem.
To make matters worse, this blend of easily arguable nitpicking is being used to kill any action or initiative that jeopardizes the best interests of privileged groups.
Also, it should be stressed that this pitchfork drive against non-discriminatory hiring practices is heard because these privileged groups believe their loss of privilege is a major injustice. In the meantime, society as a whole seems to have muted any concern voiced by persecuted and underprivileged groups for never even getting a shot at these opportunities. Where's the outrage there?
> I'm... totally at a loss as to how you can get this takeaway from this piece. The undisputed facts at hand are:
This is exactly the kind of one-sided nitpicking I pointed out. You purposely decided to omit the fact that the "biological questionnaire" was in fact a change in the way applicants were evaluated, one which eliminated the privilege of an in-group to avoid competing with "walk-ons", i.e., anyone outside of the privileged group. At best you're trying to dismiss the sheer existence of such an evaluation process by putting up strawmen over its implementation.
Is "eliminated the privilege of" some kind of dogwhistle for being racist against white people? You're intentionally using circuitous language but that appears to be the message. People are individual human beings, discrimination on the basis of skin color is evil. Not sure why this is so hard to understand for some people.
I simply responded to the above comment, which said eliminating the privilege of white people is a dogwhistle for being racist against white people. It's not. I said nothing about the post, and I don't know why you're bringing it up. Please try to keep context in mind so you don't make half-baked statements.
How non-racist of you (and non-presumptuous) to “eliminate someone’s privilege” based solely on the color of their skin. You do know there are poor and disadvantaged white people too, right? You might even be surprised that they outnumber black people.
And shame on you for even thinking you have the right to make such a call, or even entertain such a notion.
I feel like people in your position often don't have a basic understanding of why racism is wrong. You don't have a concept of, or any empathy for, how racism affects individual people; all you see is the broad identity group itself. You don't understand the individual core experience of what racism does to people, dehumanizing them, prejudicially dismissing their life and individuality on the basis of skin color. Or at least, you don't appear to, given that you are guilty of doing this.
Not every white person has "privilege"; the advantages typically referred to by this word are about heavily overlapping normal distributions between racial groups. We see statistical differences between these overlapping curves, but people can be on opposite ends of a curve, and that width is greater than the width between races. Ultimately, when you boil things down, the issue is individuals within systems discriminating against other individuals. In addition, skin color is one axis; there are literally thousands of axes on which one may be privileged. To name a few examples: how many medical issues you have, the quality of your parents' friends, the quality of your early school friends and teachers, whether you're attractive or ugly. Many of these things are out of the control of a child, and in many cases they have a much bigger impact on the quality of your life than skin color, or even the big obvious ones like sex and sexuality.
It's becoming really common for advantaged people to feel justified in being racist toward disadvantaged people, because the disadvantaged people are white. When this happens, I'm not sure how you can see it as a good thing. By assuming every white person has "privilege" to be taken away, you are committing racism against individual human beings with complex lives and life experiences. Basically, stop! You can fight racism without devolving into racism yourself. I still remember the MLK-era speeches about how fighting racism with more racism was unacceptable; we are all human beings with individual humanity, not our skin colors. I'm not sure what happened that so many people lost the plot.
> This is exactly the kind of one-sided nitpicking I pointed out. You purposely decided to omit the fact that the "biological questionnaire" was in fact a change in the way applicants were evaluated, one which eliminated the privilege of an in-group to avoid competing with "walk-ons", i.e., anyone outside of the privileged group. At best you're trying to dismiss the sheer existence of such an evaluation process by putting up strawmen over its implementation.
> You purposely decided to omit the fact that the "biological questionnaire" was in fact a change in the way applicants were evaluated
Man, you are now losing audiences that are sympathetic to your position. Are you accusing Manuel_D of edit-sniping you? Or are you claiming that the comment as it is currently written omits the above fact?
For transparency: yes, I did remove that first sentence a few minutes after posting (but before the reply was posted). I felt it was too harsh in tone. I don't remember changing "biological" to "biographical", though.
> once the rules changed to have them compete on equal footing for the same position... That's suddenly a problem.
It wasn’t on equal footing, so either your entire post is based on a misunderstanding, or you’re just blatantly trolling, in which case well done, I totally bit.
I'm not saying that other package managers handle it better - if authors wilfully misrepresent the state of their software, it is indeed not the remit of the package manager to correct them. If you started down that road, you'd probably end up with a library of tests (executed in the package manager's registry) to guarantee a non-breaking change, and at that point you have to trust the package author that the tests are indeed accurate, which is basically equivalent to trusting them to write the correct `version` string (unless you auto-generate the tests, which is an interesting idea but probably impractical).
I'm saying that the fact that it is (apparently) the norm in JavaScript-world that authors will regularly publish breaking changes that are not advertised as such, and that this is treated as an acceptable, everyday, unremarkable inconvenience, is surprising to me. How do y'all get anything done if you can't even trust SemVer enough to automatically pull in minor/patch updates to dependencies!?
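For concreteness, here's what that trust looks like in a package.json (the package names are just placeholders):

```json
{
  "dependencies": {
    "pinned-lib": "1.3.0",
    "caret-lib": "^4.17.0",
    "tilde-lib": "~4.18.1"
  }
}
```

A bare version pins exactly; `^4.17.0` matches `>=4.17.0 <5.0.0` (minor and patch updates); `~4.18.1` matches `>=4.18.1 <4.19.0` (patches only). So a mislabeled breaking release inside those windows gets installed automatically, which is exactly where the trust comes in.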
It's not common at all. It can happen, but it's very rare. And it's basically never intentional.
In my experience the most common cause of breaking changes is accidentally breaking on older versions of the runtime, because the project is only running tests on the last version or two. Aside from that, the only notable example I can think of in the last year was a pretty subtle bug in what was supposed to be a pure performance optimization in a query language [1]. I think these are pretty representative, and not meaningfully worse than the experience in other languages.
Huh. I must have gotten the wrong impression, then, from various blogs/articles which suggest never relying on SemVer because it's regarded as as-good-as-useless. Thanks for setting me straight!
And on my team we pin exact versions and use semver to inform the level of scrutiny when we manually update packages. Probably hasn't prevented any issues, but it helps folks sleep at night knowing our code doesn't change unless we tell it to.
I believe there is no process or tool that could reliably do so (see sibling comment[0]). Indeed, at some point you need to trust an author that what they are publishing is what they say they are publishing, and authors being fallible means that mistakes _might_ slip by.
What I'm surprised by is the apparent cultural norm that this is just a regular everyday occurrence which entirely erodes any faith in the meaning of SemVer. Sure, we cannot 100% trust SemVer (because humans are fallible) - but there is a world of difference between trusting it ~99.9% and 0%. The JavaScript community (from the outside! I could be wrong!) seems to have simply accepted the 0% situation, and all the extra toil that goes along with it, rather than trying to raise the bar of its contributors to be better.
I don’t think this is quite true. I can expect semver to work correctly in about 70% of all instances (working with JS/TS every day).
The biggest issues are authors that keep their libraries at 0.x forever (every minor change can be a breaking one) and the ones that release a new major version every other week.
The times I do a minor update and something breaks, the breakage is generally regarded as a bug by the authors too.
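That said, caret ranges already treat 0.x specially (this is npm's standard range semantics; package names here are placeholders), so you can at least confine those libraries to patch-level updates:

```json
{
  "dependencies": {
    "stable-lib": "^1.2.3",
    "zero-ver-lib": "^0.2.3"
  }
}
```

`^1.2.3` matches `>=1.2.3 <2.0.0`, but `^0.2.3` matches only `>=0.2.3 <0.3.0`, precisely because the semver spec says anything may change before 1.0.0.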
Pinning to a specific version doesn't protect against the author unpublishing that version.
The problem with the `*` bug is that it means you can stop anyone from unpublishing future versions of their package by simply creating a package that depends on it with a `*` identifier and publishing that to the registry.
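If I understand the bug right, the blocking package could be as small as this (names hypothetical):

```json
{
  "name": "anti-unpublish",
  "version": "1.0.0",
  "dependencies": {
    "target-package": "*"
  }
}
```

Since `*` matches any version, the registry sees every current and future release of `target-package` as depended upon, which is what blocks the author from unpublishing them.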
The way unpublishing works is broken. It would be better if unpublish would just hide the version. Then it would not matter if someone unpublished something with dependencies.
That's plenty, especially for stuff like VPN clients.
Many of the protocols you'll be speaking were created when computers had maybe 32MB of RAM or less, and you wanted to do more on them than merely connect to a VPN.
Lots of network hardware still in use nowadays doesn't have much more RAM than that either.
Even now, after twenty years of continued development, an openvpn client with all of its shiny new features enabled only uses about 6MB-7MB on my computer.
I know you mean it in jest, but language choice is far more than 'just' knowledge.
I see the young engineers bringing their fancy new stuff, and of course hitting walls everywhere, walls that we (old farts) sweated buckets to handle with our old stack. Dependency management (not only the libs and packages, but also the underlying OS, or libc, or even the ISA...), upgrade mills, training costs all over, coding standards to draw up, peer review tools and practices to upgrade, interfacing options with other projects to normalize, perf practices to upgrade... I'm forgetting many, and I have this email ready for every new language or tech, with all the hurdles to clear (and the post-mortem costs of clearing them in the past). Do the work, show your org and peers you thought about it seriously and how much we'll win.
If the juice then turns out to be worth the squeeze, by all means, let's go. It happens, a lot, but it can't be an individual choice made for the whole org, because it's never 'just a language'.
Indeed, however if Apple platforms are a deployment target, the least I would expect from any technical architect would be some kind of Objective-C or Swift knowledge.
This post is one good example of the implications of targeting platforms with languages outside of the ones provided in the SDK.
However, as you say, that is a lesson most young engineers have to go through; I also went through it.
In the end, it turned out to be more of a challenge than a problem, and resulted in improving the Go toolchain for everyone - bringing Go closer to being universally viable for this type of work.
Interesting journey, and a solution that benefits the broader community, not just the authors. It's a win-win in my book.
Just like Facebook betting on PHP ended up with HipHop and Hack.
Yet Facebook learned it wasn't the best way of spending resources, and nowadays most of their backend infrastructure uses C++, Java, and whatever else.
If you've been on the internet for a while, you've probably heard that iOS devices have far less memory than their Android counterparts. The main reason this works without problems is that the OS team is very, very conservative about letting people use memory. That it works at all is a testament to iOS's strategy of scaling up from the bottom rather than just throwing resources at a technical problem.
Fun anecdote. I'm in the middle of writing an ML app for Android that uses ncnn on top of Vulkan. In that app I need to decide how big a chunk of work the model will handle at a time. When ncnn on Vulkan is asked for the memory available, on a desktop platform it dutifully lets you know how much VRAM your GPU has. On Android the answer always seems to be 3.5GB. 3.5GB of VRAM on a smartphone? No way. Perhaps it's a quirk of all the hardware I've tested on so far, or a bug. In fact the GPU shares system memory, but I highly doubt I could actually use 3.5GB.
Yes, I did. Unfortunately it appears to just return a hardcoded number on Android. So I can't do much with this without knowing some Vulkan API I could use. Perhaps if this becomes a real issue I'll spend more time on it and find one.
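For reference, Vulkan does expose this directly: vkGetPhysicalDeviceMemoryProperties lists each memory heap's size, and you can call it yourself even if ncnn wraps Vulkan. A minimal C sketch (assuming the Vulkan loader and headers are available; on a unified-memory phone the device-local heap is shared system RAM, so treat the number as an upper bound, not a budget):

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    // Bare-bones instance; no extensions or layers needed for this query.
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[8];
    if (count > 8) count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t d = 0; d < count; d++) {
        VkPhysicalDeviceMemoryProperties mp;
        vkGetPhysicalDeviceMemoryProperties(devices[d], &mp);
        for (uint32_t h = 0; h < mp.memoryHeapCount; h++) {
            // DEVICE_LOCAL heaps are what "VRAM" usually means; on phones
            // this is typically carved out of shared system memory.
            printf("device %u, heap %u: %llu MiB%s\n", d, h,
                   (unsigned long long)(mp.memoryHeaps[h].size >> 20),
                   (mp.memoryHeaps[h].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                       ? " (device-local)" : "");
        }
    }
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

If the driver supports the VK_EXT_memory_budget extension, vkGetPhysicalDeviceMemoryProperties2 with a chained VkPhysicalDeviceMemoryBudgetPropertiesEXT reports a current per-heap budget, which is closer to what you actually want than the raw heap size.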
The ML framework I use (ncnn) uses Vulkan underneath.
True. However, and without wanting to steal this thread: there is one good thing about Google's strategy. It shows a glimpse of what Longhorn could have been like as a managed OS, if the teams had actually worked together instead of WinDev defending their fiefdom and redoing the whole thing in COM, which we are now stuck with.
So in some sense, it is the kind of strategy that is required when you need to prove a point instead of trying to convince others with words alone, which is always a big pain point when talking about better approaches to secure software development.
Because Apple went to use that API and hit that restriction themselves for the first time when they shipped Apple Private Relay and found it was too tight.
I was disappointed with that ending myself. Someone has to apply pressure to keep bloat under control. That pressure would be most effective from an entity as powerful as Apple. The modest specs of the iPhone itself used to provide that pressure, but now of course they have to keep selling next year's more capable model.
Because those were simpler operating systems and there were tight memory limitations. The industry can still do it if needed.
For an iPhone 12 (which shipped with iOS 14), just the two framebuffers for a double-buffered display already exceed 15 MiB: 2532 × 1170 pixels × 4 bytes/pixel ≈ 11.3 MiB per buffer, so ~22.6 MiB for the pair. Operating systems that are "significantly less than" 15 MiB would not be suitable to run a modern personal computing device.
Why is/was this comment greyed out? It's a good example of complex OSes which in their time were criticised for being heavy, yet are able to run in the L3 cache of modern CPUs.
Layer upon layer upon yet another layer of abstraction has made it so nobody sees the turtles the whole thing rests on...
US companies follow US law when operating in the US, so why not do the same for the EU?
EU laws only apply when a company is serving EU residents.
Also, background on the founder of the posted source: https://www.nbcnews.com/tech/internet/michael-benz-rising-vo...