Hacker News | navtoj's comments

Hah, "censorship"; it's simple regulation.

US companies follow the law when operating in the US, so why shouldn't the same apply in the EU?

EU laws only apply when a company is serving EU residents.

Also, background on founder of posted source: https://www.nbcnews.com/tech/internet/michael-benz-rising-vo...


xkcd/927


There's a "How To Use?" section on the GitHub repo page.

https://github.com/theabbie/DoublePendulum?tab=readme-ov-fil...


Yup, human progress is positive-sum in the long run. It only looks zero-sum because of short-term politics.


Positive-sum yes, but win-win not so reliably.

Positive-sum doesn't necessarily mean everyone wins. Just that in aggregate things improve.

In fact, for any significant policy change, there will be losers.

And there's a lot of very long-term systemic dysfunction, with powerful interests reinforcing that dysfunction, tilting positive returns toward themselves and negative returns onto others.


wow.. our society really has a tendency to overcorrect regarding social issues


I can't comment on DEI, I'm not qualified there. I can comment on software eng culture the past twenty years, however.

My take is we, collectively, pride ourselves on staying up-to-date with the latest and best practices. However, that staying up to date tends to be a rather shallow understanding at best. It's as if we read a short summary of the best practice, then cargo cult it everywhere, fully convinced that we're right because it is the current best practice.

The psychological intent is to outsource accountability and responsibility to these best practices. I'd argue that goal isn't always consciously undertaken. I'm not asserting malevolence, but more a reluctance to dig into the firehose of industrial knowledge that gets spewed at us 24/7.

I suspect this is not just confined to software dev. It's a sort of anti-intellectualism, ultimately. And it's hard to cast it as that, because I don't think we should tell people they're wrong for triaging emotional energy. But it also isn't right that we're okay with people generally checking out as much as possible.


yea, i agree — it’s definitely not just a software thing. good intentions don’t always translate into good execution.

i wonder, if/when AGI becomes real, whether it could help with writing better policies/laws, since it would have a broader understanding of issues and (hopefully) less bias, so it could predict outcomes we can't


> wow.. our society really has a tendency to overcorrect regarding social issues

I don't agree. You're reacting to a one-sided, very partial critique of a policy change that no longer benefitted a specific group, where the only tradeoff was a hypothetical and subjective drop in the hiring bar. This complaint can equally be dismissed as members of the privileged group complaining over the loss of privilege.

The article is very blunt in the way it framed the problem: the in-group felt entitled to a job they felt was assured to them, but once the rules changed to have them compete on equal footing for the same position... That's suddenly a problem.

To make matters worse, this blend of easily arguable nitpicking is being used to kill any action or initiative that jeopardizes the best interests of privileged groups.

Also, it should be stressed that this pitchfork drive against discriminatory hiring practices is heard because these privileged groups believe their loss of privilege is a major injustice. In the meantime, society as a whole seems to have muted any concern voiced by persecuted and underprivileged groups for not even having a shot at these opportunities. Where's the outrage there?


The undisputed facts at hand are:

* The FAA introduced a bigraphical questionnaire which screened out 90% of applicants.

* The answers to this questionnaire were distributed to members of the National Black Coalition of Federal Aviation Employees.

* Members were explicitly told not to distribute the answers to other people, to reduce competition for admission.

This is as bad a scandal as though the answers to the SAT were leaked.


> I'm... totally at a loss as to how you can get this takeaway from this piece. The undisputed facts at hand are:

This is exactly the kind of one-sided nitpicking I pointed out. You purposely decided to omit the fact that the "biological questionaire" was in fact a change in the way applicants were evaluated, which eliminated the privilege of an in-group to avoid to compete with "walk-ons", i.e., anyone outside of the privileged group. At best you're trying to dismiss the sheer existence of such an evaluation process by putting up strawmen over the implementation of this evaluation.


Is "eliminated the privilege of" some kind of dogwhistle for being racist against white people? You're intentionally using circuitous language but that appears to be the message. People are individual human beings, discrimination on the basis of skin color is evil. Not sure why this is so hard to understand for some people.


[flagged]


But "having their privilege taken away" is a vastly different thing than "answers to a multiple choice test are leaked to an ethnic affinity group".

Furthermore, this also negatively impacted Latino and Asian people. And also Black people who weren't part of the aforementioned affinity group.


I simply responded to the above comment saying eliminating the privilege of white people is a dogwhistle for being racist against white people. It's not. I said nothing about the post, and I don't know why you're bringing it up. Please try to keep context in mind so you don't make half-baked statements.


It’s not?

How non-racist of you (and non-presumptuous) to “eliminate someone’s privilege” based solely on the color of their skin. You do know there are poor and disadvantaged white people too, right? You might even be surprised that they outnumber black people.

And shame on you for even thinking you have the right to make such a call, or even entertain such a notion.

Talk about privilege.


This is wild. Apparently now being against racism is in fact racist. Glad HN finally figured it out.


I feel like people in your position often don't have a basic understanding of why racism is wrong. You don't have a concept of, or any empathy for, how racism affects individual people; all you see is the broad identity group itself. You don't understand the individual core experience of what racism does to people, dehumanizing them, prejudicially dismissing their life and individuality on the basis of skin color. Or at least, you don't appear to, given that you are guilty of doing this.

Not every white person has "privilege"; the advantages typically referred to by this word are about heavily overlapping normal distributions between racial groups. We see statistical differences between these overlapping curves, but people can be on opposite ends of a curve, and that spread is greater than the gap between races. Ultimately, when you boil things down, the issue is individuals within systems discriminating against other individuals. In addition, skin color is one axis; there are literally thousands of axes on which one may be privileged: how many medical issues you have, the quality of your parents' friends, the quality of your early school friends and teachers, whether you're attractive or ugly. Many of these things are out of the control of a child and in many cases have a much bigger impact on the quality of your life than skin color, or even the big obvious ones like sex and sexuality.

It's becoming really common for advantaged people to feel justified in being racist towards disadvantaged people because the disadvantaged people are white. When this happens, I'm not sure how you can see it as a good thing. By assuming every white person has "privilege" to be taken away, you are committing racism against individual human beings with complex lives and life experience. Basically, stop! You can fight racism without devolving into racism yourself. I still remember the MLK-era speeches about how fighting racism with more racism was unacceptable; we are all human beings with individual humanity, not our skin colors. Not sure what happened that so many people lost the plot.


> This is exactly the kind of one-sided nitpicking I pointed out. You purposely decided to omit the fact that the "biological questionaire" was in fact a change in the way applicants were evaluated, which eliminated the privilege of an in-group to avoid to compete with "walk-ons", i.e., anyone outside of the privileged group. At best you're trying to dismiss the sheer existence of such an evaluation process by putting up strawmen over the implementation of this evaluation.

What you just wrote made no sense whatsoever?


[flagged]


> You purposely decided to omit the fact that the "biological questionaire" was in fact a change in the way applicants were evaluated

> * The FAA introduced a bigraphical questionnaire which screened out 90% of applicants.

???

> which eliminated the privilege of an in-group to avoid to compete with "walk-ons", i.e., anyone outside of the privileged group

> The answers to this questionnaire were distributed to members of the National Black Coalition of Federal Aviation Employees.

??????


> You purposely decided to omit the fact that the "biological questionaire" was in fact a change in the way applicants were evaluated

Man, you are now losing audiences that are sympathetic to your position. Are you accusing Manuel_D of edit-sniping you? Or are you claiming that the comment as it is currently written omits the above fact?


For transparency: yes, I did remove that first sentence a few minutes after posting (but before the reply was posted). I felt it was too harsh in tone. I don't remember changing "biological" to "bigraphical".


> equal footing

So, the candidates who were not members of some racially based association also got access to the answers to the first test?


> once the rules changed to have them compete on equal footing for the same position... That's suddenly a problem.

It wasn’t on equal footing, so your entire post is based on either a misunderstanding or you’re just blatantly trolling in which case well done, I totally bit.


Working on a macOS app for the empty space around the MacBook notch.

https://github.com/navtoj/NotchBar

Need some ideas for what kind of widgets would be useful...


NPM still hasn't fixed the "*" package version bug on their end.

https://www.youtube.com/watch?v=IzqtWTMFv9Y&t=465


It's intentional, not a bug. "Fixing" it would allow versions to be unpublished, which could break someone's build.


A softer interpretation is possible: "*" should mean that the depended-on package cannot have all of its versions unpublished, i.e. it must keep at least one version available.


If someone has an older version in the lock file this will break the build.


Which they can fix by updating the lock file.


If you use wildcard to specify any version of a dependency you shouldn't be surprised if something breaks.


The same can be said about not pinning to a specific version as even some patch releases can break things or change performance characteristics.
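To illustrate the range syntaxes being discussed (package names here are made up for the example), a package.json can express anything from a fully pinned version to a bare wildcard:

```json
{
  "dependencies": {
    "pinned-lib": "1.4.2",
    "caret-lib": "^1.4.2",
    "wildcard-lib": "*"
  }
}
```

Here "1.4.2" resolves only that exact release, "^1.4.2" accepts any 1.x release at or above 1.4.2 (so a breaking patch can still sneak in), and "*" accepts any published version at all.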


This, to me, is still one of the most astonishing things about the JavaScript ecosystem that everyone just accepts.


Not a big deal, just specify exact dependency versions. Curious how you think other package managers handle this better than npm.


I'm not saying that other package managers handle it better - if authors wilfully misrepresent the state of their software, it is indeed not the remit of the package manager to correct them. If you started down that road, you'd probably end up with a library of tests (executed in the package manager's registry) to guarantee a non-breaking change, and at that point you have to trust the package author that the tests are indeed accurate, which is basically equivalent to trusting them to write the correct `version` string (unless you auto-generate the tests, which is an interesting idea but probably impractical).

I'm saying that the fact that it is (apparently) the norm in JavaScript-world that authors will regularly publish breaking changes that are not advertised as such, and that that is just an acceptable everyday uncommentworthy inconvenience, is surprising to me. How do y'all get anything done if you can't even trust SemVer enough to automatically pull in minor/patch updates to dependencies!?


It's not common at all. It can happen, but it's very rare. And it's basically never intentional.

In my experience the most common cause of breaking changes is accidentally breaking on older versions of the runtime, because the project is only running tests on the last version or two. Aside from that, the only notable example I can think of in the last year was a pretty subtle bug in what was supposed to be a pure performance optimization in a query language [1]. I think these are pretty representative, and not meaningfully worse than the experience in other languages.

[1] https://github.com/estools/esquery/pull/138


Huh. I have got the wrong impression, then, from various blogs/articles which suggest never relying on SemVer because it's regarded as as-good-as-useless. Thanks for setting me straight!


And on my team we pin exact versions and use semver to inform the level of scrutiny when we manually update packages. Probably hasn't prevented any issues, but it helps folks sleep at night knowing our code doesn't change unless we tell it to.


What is your suggestion for improving it? I can accidentally publish a breaking bug in my patch release, and I might not notice.


I believe there is no process or tool that could reliably do so (see sibling comment[0]). Indeed, at some point you need to trust an author that what they are publishing is what they say they are publishing, and authors being fallible means that mistakes _might_ slip by.

What I'm surprised by is the apparent cultural norm that this is just a regular everyday occurrence which entirely erodes any faith in the meaning of SemVer. Sure, we cannot 100% trust SemVer (because humans are fallible) - but there is a world of difference between trusting it ~99.9% and 0%. The JavaScript community (from the outside! I could be wrong!) seems to have simply accepted the 0% situation, and all the extra toil that goes along with it, rather than trying to raise the bar of its contributors to be better.

[0] https://news.ycombinator.com/item?id=38906936


I don’t think this is quite true. I can expect semver to work correctly in about 70% of all instances (working with JS/TS every day).

Biggest issues are authors that keep their libraries at 0.x forever (every minor change can be a breaking one) and the ones that release a new major version every other week.

The times I do a minor update and something breaks are generally regarded as a bug by authors too.


Pinning to a specific version doesn't protect against the author unpublishing that version.

The problem with the `*` bug is that it means you can stop anyone from unpublishing future versions of their package by simply creating a package that depends on it with a `*` identifier and publishing that to the registry.
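A minimal sketch of that blocking trick (both package names hypothetical): publish a package whose only purpose is to declare a `*` dependency on the target:

```json
{
  "name": "anti-unpublish-shim",
  "version": "1.0.0",
  "dependencies": {
    "target-package": "*"
  }
}
```

Once this shim is in the registry, the `*` range is treated as depending on every version of `target-package`, so none of them can be unpublished.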


> Pinning to a specific version doesn't protect against the author unpublishing that version.

It does if your project is also in the npm public registry and the package you're dependent on is more than 72 hours old.

https://docs.npmjs.com/policies/unpublish


As this package clearly demonstrates, it's a broken design.


The way unpublishing works is broken. It would be better if unpublishing just hid the version. Then it wouldn't matter if someone unpublished something that other packages depend on.


That's how crates.io does it with yanked releases.

It's removed from the index, and cargo will only download it via a pre-existing lock file.


Golang’s cryptographically verified module proxy cache solves this problem nicely.
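For reference, the Go behaviour described here is driven by two environment variables; the values shown below are, to the best of my knowledge, the documented defaults:

```shell
# Fetch modules through the public proxy, falling back to direct VCS access,
# and verify every download against the public checksum database.
export GOPROXY="https://proxy.golang.org,direct"
export GOSUMDB="sum.golang.org"
```

Because the checksum database is append-only and cryptographically signed, a recorded module version can't silently disappear or change contents for new downloads.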


The Rome Statute limits the "Crimes within the jurisdiction of the Court".

Copyright infringement is not included.

https://www.ohchr.org/en/instruments-mechanisms/instruments/...


> While we were busy fixing the linker to save 1MB, iOS 15 launched and quietly gave us 35MB more.

why didn't the ios team do this before? 15 MB seems very low for network extensions


That's plenty, especially for stuff like VPN clients.

Many of the protocols you'll be talking were created when computers had maybe 32MB of RAM or less, and you wanted to do more on them than merely connect to a VPN.

Lots of network hardware still in use nowadays doesn't have much more RAM than that either.

Even now, after twenty years of continued development, an openvpn client with all of its shiny new features enabled only uses about 6MB-7MB on my computer.


Their problem was using Go in the first place, but since they didn't want to add a language to their knowledge pool, they went on with this "solution".

And we all know how fat Go binaries tend to be.


I know you mean it in jest, but language choice is far more than 'just' knowledge.

I see young engineers bringing in their fancy new stuff, and of course hitting walls everywhere, walls that we (old farts) sweated buckets to handle in our day. Dependency management (not only the libs and packages, but also the underlying OS, or libc, or even ISA...) and upgrade mills, training costs all over, coding standards to draw up, peer review tools and practices to upgrade, interfacing options with other projects to normalize, perf practices to upgrade... I'm forgetting many, and I have this email ready for every new language or tech, with all the hurdles to clear (and the post-mortem costs of clearing them in the past). Do the work, show your org and peers you thought about it seriously and how much we'll win.

If then the juice is worth the squeeze, by all means, let's go. It happens, a lot, but it can't be an individual choice for the whole org, or 'it's just a language'.

Sorry for the Sunday rant...


Indeed, however if Apple platforms are a deployment target, the least I would expect from any technical architect would be some kind of Objective-C or Swift knowledge.

This post is one good example of the implications of targeting platforms with languages outside of the ones provided in the SDK.

However as you say that is a lesson most young engineers have to go through, I also went through it.


> I have this email ready for every new language or tech, with all the hurdles to clear (and the post-mortem costs of clearing them in the past)

I would love for you to share this. (even if it's not in English)


In the end, it turned out to be more of a challenge than a problem, and resulted in improving the Go toolchain for everyone - bringing Go closer to being universally viable for this type of work.

Interesting journey, and a solution that benefits the broader community, not just the authors. It's a win-win in my book.


Just like Facebook betting on PHP ended up with HipHop and Hack.

Yet Facebook has learned it wasn't the best way of spending resources and nowadays most of their backend infrastructure is using C++, Java and whatever.


If you've been on the internet for a while, you've probably heard that iOS devices have far less memory than their Android counterparts. The main reason this works without problems is that the OS team is very, very conservative with letting people use memory. That this works is a testament to iOS's strategy of scaling up from the bottom rather than just throwing resources at a technical problem.


Fun anecdote: I'm in the middle of writing an ML app for Android that uses ncnn on top of Vulkan. In that app I need to decide how big a chunk of work the model will handle at a time. When ncnn on Vulkan is asked how much memory is available, on a desktop platform it dutifully lets you know how much VRAM your GPU has. On Android the answer always seems to be 3.5GB. 3.5GB of VRAM on a smartphone? No way. Perhaps it's a quirk of all the hardware I've tested on so far, or a bug. In fact the GPU shares system memory, but I highly doubt I could actually use 3.5GB.


Considering this is Android, perhaps you can find the code responsible for returning this value?


Yes, I did. Unfortunately it appears to just return a hardcoded number on Android, so I can't do much with this without knowing which Vulkan API I could query instead. Perhaps if this becomes a real issue I'll spend more time on it and find one.

The ML framework I use (ncnn) uses Vulkan underneath.


True, and without wanting to steal this thread: there is one good thing about Google's strategy. It shows a glimpse of what Longhorn could have been like as a managed OS, if the teams had actually worked together instead of WinDev defending their fiefdom and redoing the whole thing in COM, and now we are stuck with it.

So in some sense, it is the kind of strategy that is required when needed to prove a point instead of trying to convince others just with words, which is always a big pain point when trying to talk about better approaches to secure software development.


Because Apple themselves went to use that API for the first time when they shipped Apple Private Relay, hit that restriction, and found it was too tight.


Can you substantiate this?


I was disappointed with that ending myself. Someone has to apply pressure to keep bloat under control. That pressure would be most effective from an entity as powerful as Apple. The modest specs of the iPhone itself used to provide that pressure, but now of course they have to keep selling next year's more capable model.


Our industry was once capable of building entire operating systems that ran in significantly less than that.


Because those were simpler operating systems and there were tight memory limitations. The industry can still do it if needed.

For an iPhone 12 (which shipped with iOS 14), just the two framebuffers for a double-buffered display take up the 15 MiB. Operating systems that are "significantly less than" 15 MiB would not be suitable to run a modern personal computing device.


SGI Irix and NeXTSTEP, simpler operating systems...


Why is/was this comment greyed out? It's a good example of complex OSes which in their time were criticised for being heavy, yet which could fit in the L3 cache of a modern CPU.

Layer upon layer upon yet another layer of abstraction has made it so nobody sees the turtles the whole thing rests on...


A normal icon / avatar today is often recommended to be 512x512 pixels = 262 144 pixels

Back in those days a screen was often 640x480 = 307 200 pixels

An iPhone 12 Mini has a 1080×2340 screen = 2 527 200 pixels = over 8 times the size of those old screens

Today's OSes work with much, much larger datasets overall, not just in graphics but elsewhere as well, and are optimized for that


But if they use that additional quota, doesn't that break compatibility with older devices?

