One of the things I remember being discussed on m.d.s.policy not so long ago was making a registry (maybe handled by IANA) for example private keys. Turns out Peter Gutmann did the heavy lifting (thank you Peter) to turn that from idle discussion into a draft document:
From the point of view of m.d.s.policy the main idea is to require CAs to reject these keys in certificates for the Web PKI, and to encourage software to use these keys (rather than their own examples) in sample code. Basically, when you build my-cool-website.example and copy-paste the private key from an example into your certificate-fetching code, the CA should say "Er, no, you need to actually choose your own private key; that's what the word private means", and this should happen whether you copy-pasted a high-rated Stack Overflow example, the sample code provided with the library you used, etc. The idea is that ideally these would all be the same keys.
It would also make sense for tools to care about these example keys. For example, GitHub could flag checked-in code containing keys that are not on the list, since you may have committed real keys to your repo by mistake, while your build-for-production CI tools could reject keys that are on the list, because that means you forgot to generate actual keys for the real build.
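The two-sided check described above could be sketched roughly like this. The fingerprints here are made up for illustration; a real tool would carry the actual registry of example keys:

```python
# Hypothetical sketch of the two-sided CI check described above.
# EXAMPLE_KEYS stands in for the proposed registry of well-known example
# keys; the fingerprint shown is invented for illustration.
EXAMPLE_KEYS = {
    "SHA256:examplefingerprint0000000000000000000000001",
}

def check_key(fingerprint: str, production_build: bool) -> str:
    """Classify a key found in a repository.

    - An example key in a production build is an error: someone shipped
      the docs key.
    - A key that is NOT a known example but is checked into the repo is
      suspicious: it might be a real secret committed by mistake.
    """
    if fingerprint in EXAMPLE_KEYS:
        if production_build:
            return "error: known example key used in a production build"
        return "ok: known example key (fine for docs and tests)"
    return "warn: unrecognized key checked in; is this a real secret?"
```

Same list, two opposite policies depending on whether you are scanning a repo or cutting a release.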
We actually do this in our internal framework. All secrets (passwords, key material, etc) that we have in documentation and example projects are blacklisted for production in the framework itself and applications refuse to start.
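A minimal sketch of that startup check; the blocklist contents and setting names here are hypothetical, not the actual framework's:

```python
# Sketch of a framework-level guard: refuse to start in production if
# any configured secret is one that appears verbatim in the docs or
# example projects. The blocklist values are illustrative placeholders.
DOCUMENTED_SECRETS = {"exampledatabasepassword", "examplesigningkey"}

def assert_no_example_secrets(settings: dict, environment: str) -> None:
    """Raise at startup if a documentation secret leaked into prod config."""
    if environment != "production":
        return
    leaked = [name for name, value in settings.items()
              if value in DOCUMENTED_SECRETS]
    if leaked:
        raise RuntimeError(
            f"documentation secrets configured in production: {leaked}")
```

Failing hard at boot, rather than logging a warning, is what makes the blacklist effective.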
Out of curiosity, do you have any tools/processes to avoid a situation like putting real secrets in new docs? Such as this popular story from 2017 [0]. And is there a way to automate publishing test secrets to a doc and also sending them to the blacklist?
None other than peer reviews and habit. All our passphrases are something like “exampledatabasepassword” whereas in reality they are long tokens.
Then, secrets from production are hard to extract and typically rotated (semi-continuously). So it would be very strange if one somehow ended up in a developer's paste buffer to add to the docs.
For what it’s worth, if a private key related to a certificate is exposed and then revoked, that private key is effectively retroactively banned from that CA in its entirety.
I read the document hoping for an example Authorization: Bearer token, but was disappointed. Even more surprising was the lack of any Ed25519 key; I would have used that in my documentation too.
I wonder if sample code using keys that don't look secure ("1111...") would lead to more or fewer incidents. Easier to guess, but more clear you're using guessable keys to the developer importing the code.
The MAC address part stood out to me; has something changed over the years that made changing MAC addresses more complex?
Back in college our school had some limitations on the internal networks in our dorms. After doing my normal stuff (you know, mega downloading, FTP, LimeWire, etc.) my net stopped working. Come to find out they had a bandwidth monitor and would just block you for some time if you used too much, and you had to contact tech support, even for simple things like Windows updates.
Anyways, somehow I found out I could tweak some net configs in the registry. So there I randomly changed some DWORD value to change my MAC address. I finally got back online and just forgot about it.
A few days later I came back to my room after classes, and my roommate told me the school officers had raided the room trying to get onto my PC. I went to the student dean's office and, come to find out, I had taken net access from (I think) the president, and they tracked it to my PC... some luck I say lol. Tech support said there was no possible way I could have changed my MAC address; after a bit of back and forth I decided to just agree with him.
Oh yeah, and they had like a 200-300 page printout of IRC chat logs. There was more stuff but I'll end it there.
Needless to say, my net access was blocked for a year even though I had a work-study job with the webdev department.
It didn't stop me though, I ended up running a long cat5 cable from the dorm next door. Fun times.
Network cards have a factory-configured MAC address, which is the default one the adapter will use. As you figured out, the operating system is usually able to override this MAC address, but it has to redo this every time it initializes the adapter. The card will still have its original MAC on a different system.
However, in the article they did not have control over the operating system. So how do you change the MAC address then? Well, you change the factory-configured MAC built into the device - which requires using the factory configuration tool!
> has something changed over the year that made changing Mac addresses more complex?
It just depends on what the driver/hardware lets you do. Some drivers don't support changing it and some hardware/firmware may just be built in a way that doesn't make changing it (easily) possible. Not being able to change it may be more of a WiFi thing though. I've had the displeasure of dealing with one of those cards. Not sure if it was just the driver.
That's already at the driver level though. There's a couple dozen other manufacturers with drivers in the Linux kernel, and their cards may work entirely differently.
If however your hardware allows you to have sufficiently low level access, your MAC address can be whatever you want it to be. After all, if your device says "this is my MAC address", then that is its MAC address as far as anyone talking to it is concerned.
If your card is a black box and is doing all the work internally (I think that'd be most cards?), then you're at the mercy of what it lets you do.
At the other end of the spectrum you'd have programmable NICs/"FPGA with an Ethernet port".
Ha! We did the same thing on our campus to get around the bandwidth blocks. We had an internal file-sharing network (DC++) which wasn't restricted in any way since it was all internal traffic. In fact the server was run by one of the network admins...go figure. But, content has to come from somewhere outside the network, so I just kept a rotating list of about 20 randomly-generated MAC addresses to keep the pipe open at full speed. I mentioned I was doing this to a friend in the IT department at some point, and he said the university knew about the trick and that I should be careful, but nothing ever came of it.
In reality, the real rate-limit was my college budget and the price of hard drives, but it was all good fun.
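The "rotating list of randomly generated MAC addresses" trick above relies on one detail of the address format: a bit in the first octet marks an address as locally administered rather than vendor-assigned. A small sketch of how such addresses can be generated (this is the same bit-twiddling MAC-randomization tools do before handing the address to the OS):

```python
import random

def random_mac() -> str:
    """Generate a random locally administered, unicast MAC address.

    In the first octet, bit 1 set = locally administered (not a
    manufacturer OUI), bit 0 clear = unicast. Everything else is
    free to randomize.
    """
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] & 0b11111100) | 0b00000010  # local, unicast
    return ":".join(f"{o:02x}" for o in octets)
```

Addresses generated this way can never collide with a real factory-assigned OUI, which is why well-behaved randomizers set that bit.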
Wow, this series of posts is awesome! I have a Kia EV6 and just finding the OBD-II PIDs is a massive pain in the arse. Kia is actually required by EU law to disclose them (Art. 61 EU regulation 2018/858 [0]) but refused to do so even when I wrote to them. Need to follow that up with a complaint to the Spanish type approval authority (which approved the EV6)...
Writing custom software looks like a really promising alternative though, especially if the vehicle's cellular connection can be used. Hell, if the IVI uses the CAN bus, perhaps there's a reverse-engineerable list of PIDs.
PIDs according to J1979? The vehicle itself discloses them via the PIDs divisible by 0x20, where each bit of the response indicates the availability of one of the next 32 PIDs. Each PID is described in great detail by SAE, too; no manufacturer-specific information is needed for interpretation.
Anything that is not J1979 but purely manufacturer-defined, though, is a different story, and making that available is the main reason for the existence of the regulation.
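The bitmask scheme described above can be decoded in a few lines. This is a generic sketch of the J1979 convention (request PID 0x00/0x20/0x40/..., get back a 32-bit mask whose most significant bit flags the next PID):

```python
def supported_pids(base_pid: int, response: bytes) -> list[int]:
    """Decode a J1979 'supported PIDs' bitmask response.

    A request for PID 0x00, 0x20, 0x40, ... returns a 32-bit mask in
    which the most significant bit indicates support for base_pid + 1,
    the next bit base_pid + 2, and so on down to base_pid + 32.
    """
    mask = int.from_bytes(response[:4], "big")
    return [base_pid + 1 + i for i in range(32) if mask & (1 << (31 - i))]
```

Walking the chain (query 0x00, then 0x20 if its bit is set, etc.) enumerates every standard PID the vehicle supports; only the manufacturer-defined identifiers outside J1979 need the regulated disclosure.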
To be fair, my use of the terminology might be completely wrong. By "PIDs", I mean in general the identifiers I send to the car to retrieve information.
(It is basically the same head unit, Kias/Hyundais share a lot of components.)
IMO Kia makes great cars but I don't trust them with Internet security. Android device permanently connected to the Internet with unfettered access to the CAN bus of my car....no thank you. And even if it was relatively secure today, I doubt it will get security updates for the 10+ years I expect to have the car, if at all.
I'm glad folks can now sign their own firmware to run on their own cars, but what else is vulnerable in these things?
The car is amazing otherwise, and I'm glad Kia does not seem to use the modem for any anti-features, so the car is totally fine with removing it.
Wow, nice! I don't own a Kia, but being able to disable built-in radio is a great feature in my eyes.
As for people commenting about "nothing to hide" and paranoia, I guess that they never worked on the other side. Seeing how even very innocent-looking data can be (ab)used, I would definitely prefer less data to be gathered about everyone.
That's why I try to never put any testing or development keys in repositories. From those keys sitting there it just takes one act of negligence for the keys to make it into a production environment. It's really frustrating that most people don't care about this at all. Even people forking my own projects would not listen when I told them to please just generate the keys dynamically (for which I included all necessary functionality in the software itself, easily accessible in CI and from the CLI via a simple make command), and instead just put dev keys smack in the repository [1]. And mind you those were some really "security minded" people from the CCC.
This. I always generate my keys or certs or whatever as part of my dev workflow scripts, configure the local environment to read these, and then gitignore the resulting files. This way it just works for the rest of team and the mechanisms are plainly visible in the repo.
There are many ways to do this, but one that often fits what I am doing is to use terraform’s TLS provider [0]. Terraform has become pretty ubiquitous for me. I am probably already using it to power other parts of my dev environment setup and teardown. Adding in a call to a little module that spits out a dynamic CA and certs signed by it is really easy.
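For what that looks like in practice, here is a hedged sketch using the hashicorp/tls provider. The resource types are the provider's own; the common name, curve, and validity values are illustrative, and the resulting PEM outputs would be written to gitignored files:

```hcl
# Sketch: dynamic dev CA via the hashicorp/tls provider.
# Values below (CN, curve, validity) are illustrative only.
resource "tls_private_key" "ca" {
  algorithm   = "ECDSA"
  ecdsa_curve = "P256"
}

resource "tls_self_signed_cert" "ca" {
  private_key_pem       = tls_private_key.ca.private_key_pem
  is_ca_certificate     = true
  validity_period_hours = 24 * 30 # short-lived on purpose

  subject {
    common_name = "dev-ca.local" # placeholder
  }

  allowed_uses = ["cert_signing", "digital_signature", "key_encipherment"]
}
```

Leaf certs then hang off this CA via `tls_cert_request` and `tls_locally_signed_cert`, so every developer's environment gets its own throwaway PKI and nothing key-shaped ever lands in the repo.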
(To be fair I suspect the IVI isn't running unless power is turned on, which limits the number of real world threats, but it does potentially offer a new avenue for carjacking)
I've heard of much worse security no-no's. The worst example I'm familiar with is a Fortune-500 corporate database with data for millions of private citizens exposed for years on the public web over plain old http (not even https), without any password or public/private key protection, such that anyone anywhere with the right URL or IP address could access it.
The big question, of course, is: Why haven't we had a security apocalypse? I imagine it must be because most software engineers are honest, and because the bad actors who have the skills and resources to pull it off (e.g., foreign government agencies) have a vested interest in maintaining the status quo.
So, non-technical managers everywhere go on with their lives thinking their software infrastructure has been "secured," without really knowing whether it's true. Consider that no executive or manager at Hyundai was aware until now that their vehicles have been using a previously published private key to update their software.
Kudos to the author! I did some research on my car a few years ago. Unfortunately, the update packages were properly signed, so I reversed the CAN traffic and replaced the infotainment ECU with my own reimplementation. I published my work and findings on Medium thinking that nobody would notice them... I got a job in the automotive industry instead.
That's pretty cool! I wonder how properly they were really signed - there are _so many_ mistakes even in systems that at least don't use an example key off the Internet.
* Failure to understand the system boundaries, like in the second part of https://github.com/bri3d/simos18_sboot where "secret" data can be recovered by halting the system during a checksum process.
Fundamentally this is of course, a very hard problem, since in the "protect against firmware modification" case, the attacker has physical access. But, compared to the state of the art in mobile devices and game consoles, automotive stuff is still way behind.
The infotainment of the car maker had some oddities:
* the binaries were signed with a certificate issued by a big certification company, so repacking an update package could be challenging
* the system ran a fork of Windows CE and was co-developed with Microsoft
* the SoC documentation was only available under NDA (I was able to find only a one page datasheet)
For those reasons, instead of trying to repack the software for the original ECU, I started sniffing the CAN traffic and analyzing the binaries contained in the software update packages found online. That allowed me to reimplement the communication with the other ECUs on a Linux SBC.
Yes, a full replacement absolutely makes sense in this situation! I don't think there are many Windows infotainment units left these days. Analyzing the binaries to figure out the meaning of the CAN traffic is an awesome (and underutilized IMO) technique - I see people sit and stare at CAN dumps in a vacuum a lot when really, whatever checksum or data they're looking for is often right there in the code.
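To illustrate why the binaries beat the dumps: the checksum byte that makes a replayed or crafted frame get accepted is often a trivial routine you can lift straight out of the firmware. The algorithm below (sum of payload bytes plus a constant, truncated to 8 bits) is just one common pattern seen in reverse-engineered CAN traffic, not any specific manufacturer's:

```python
def frame_checksum(payload: bytes, init: int = 0) -> int:
    """One checksum pattern that turns up a lot in reverse-engineered
    CAN traffic: sum the covered bytes, add a per-message constant,
    truncate to 8 bits. Which bytes are covered and what `init` is
    vary per manufacturer; the firmware tells you, the dump alone
    usually doesn't.
    """
    return (sum(payload) + init) & 0xFF
```

Once you have the routine from the code, verifying it against a handful of captured frames takes seconds, versus hours of staring at hex dumps guessing.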
Actually there still are a lot of Windows ECUs in the wild. I analyzed the update packages of cars newer than mine (equipped with Uconnect) and even found sndrec.exe and the default IIS page!
Fortunately my company produces Linux-based ECUs, and we use Windows only for Autosar stuff.
I just do not like self-promotion. My employer does not care about my posts; they were published before I signed my current contract. I have stopped writing posts since I now have access to tons of documents and specifications.
I’m not so sure. Hyundai/KIA are aggressively entering the EV market in the West. I have an Ioniq 5 and its software, although not bug-free, is incredibly nice to use. The touch screen is very responsive too. I would be shocked if they were not taking software development seriously. The car is basically software on wheels.
12 months is if I get my act together AND they're available. Neither of which are likely!
I'd heard they're almost impossible to get hold of. I'm actually in no rush so I was tempted to wait for the next iteration. I'm all about V2H/X and I'm hoping / wondering if the next version will have that fully enabled or not.
Anyway, take my comment as just a strong compliment for Hyundai. I'm so happy they're around as a strong alternative to the incumbents.
Good luck getting yours. I'd get one just based on its looks, let alone any of the other goodness.
I would love some insight on the internal process. If the place is anything like mine, this would have been the result of years of planning, months of meetings, endless spreadsheet checklists, and committee reviews, with an internal group, but staffed with mostly outsourced programmers.
...and the mechanic subs I follow do roast some mfrs for the info systems.
But it's clear that software is important to cars nowadays. I don't think any mfr fails to grasp that at some level. So presumably some will succeed to some degree; why not Hyundai?
The OP said that Hyundai partnered with nVidia - although I grant that the keyboard making could still have been done by a crappy code factory.
People say I'm against progress when I say I'm opposed to all these new things being added to cars, because they are dangerous. This article proves my point: if whoever made the software used publicly available encryption keys (anyone who knows what they're doing knows that generating your own with openssl takes a second), how can that software be trusted?
To me, self-driving cars and all that stuff are simply dangerous. What if someone hacks a million self-driving cars? What damage could they do? A disaster, worse than hacking a nuclear power plant... and as I said, the coding standard for this kind of vehicle is not even minimally comparable to that of nuclear power plants. We are talking about people who until yesterday were programming offline car-radio software and are now programming software that can take control of the vehicle.
I've been involved with the software side at a few companies producing autonomous vehicles and took a look at SSG-39 to see how much better nuke standards are. They're pretty dang similar to automotive standards (UL4600, SOTIF, etc). The difference is in the regulatory environment and execution. Automotive software is not well-regulated and most OEMs don't think either the software or the people that produce it are worth investing in. I regularly have disagreements with systems folks about whether we should focus on actual sources of quality issues like memory safety or the checklist items that regulators are going to look at like whether the compiler is verified.
Automobile software has good standards for critical parts. The problem is that none of this software ever considered a remote attack, since it was not connected to the internet. The CAN bus was local and the only way to interact with it was to physically connect to the bus, so it has no security built in (every device can send any command, since every device is trusted). Now, to enable all the smart stuff, there is a connection between the car's bus and the internet, indirectly through the entertainment system of course, but if, for example, the entertainment system (for which coding standards are worse) is exploited, you have full access to the bus and can send commands to various parts of the car, which is very bad!
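The "no security built in" point is visible in the frame format itself. As a sketch, here is the classic SocketCAN `can_frame` layout (Linux's in-memory representation of a CAN frame) packed by hand: a 32-bit ID, a length byte, padding, and up to 8 data bytes. There is no sender identity, signature, or authentication field anywhere in it:

```python
import struct

def socketcan_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic 16-byte SocketCAN can_frame: 32-bit CAN ID,
    one data-length byte, 3 padding bytes, 8 data bytes.

    Note what is absent: no sender identity, no MAC/signature, no
    auth field of any kind. Any node on the bus can emit any ID.
    """
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))
```

Receivers filter purely on the CAN ID, so whoever can write to the bus can impersonate any module; that assumption was fine when the bus was physically isolated.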
That's not how any car I know of works. Infotainment stuff can't go on the CAN bus due to bandwidth limitations. It's usually on its own network. Historically that was MOST, now it's usually Ethernet (or A2B for some specific components). That network may be bridged to other networks like CAN with a locked down gateway module, but frankly even that design is out of style due to security issues.
If you know a manufacturer putting radio devices on the CAN bus, by all means name and shame. I'd be morbidly curious because of how obviously terrible an idea that is.
I worry that embarrassing cases like this are going to cause global corporations to call for the ban of general purpose computers for the public. I mean, look at all they’ve done. We’re at a tipping point with the right to repair movement, but John Deere still hasn’t lost yet.
>If I could figure out all of the security measures of the firmware updates, I could modify an existing system image with my own backdoors, giving me full access to the IVI.
I suppose if you brick your own computer the dealership probably won't realize that you were the one responsible. Just don't say, "yeah I cracked your ivi but after I modified the system image the car wouldn't start anymore...." Still, you might want the full warranty before attempting this step...
Once again, their insecurity is our freedom. One does wonder whether those who were asked to implement this "security" were not happy about doing so, and just decided to make the equivalent of a speed bump instead of a wall.
I remember "carputers" were a somewhat common aftermarket mod in the early 2000s, mostly running a stripped-down 98 or XP on an SSD or memory card. The obvious difference being that you had full control of it from the start. Here's an example of someone doing so:
The "Kia Boys" in Milwaukee stole almost 10,000 cars last year, mostly Kia and Hyundai models lacking immobilizers. That's 1 car theft per 58 city residents, in a single year. This gonzo documentary on the phenomenon's worth watching:
Insane. And the cars in question have no immobilizer, so the process is just finding a 2021 or older Kia/Hyundai of a certain model, one with a physical key instead of a pushbutton. Then you just rip apart the steering column and turn the ignition cylinder. I was confused because a USB cable was involved, but it's apparently only because it slides nicely over the mechanism to help you turn it, as opposed to having to use needlenose pliers or similar.
So there's no "hack" really, just destroying the lock cylinder.
Insane? This is just what stealing a car was like in 1970. If you get the ignition to turn, you've stolen the car. If you're dependent on technology to stop vehicle theft, you've basically already lost. In the end, criminals will just use a flatbed tow truck to steal the vehicle if it is worth enough.
The "immobilizers" you're referring to are the bane of my existence from my perspective. Instead of getting cheap $5 copies of my keys made, I have to pay something like $120 to Ford to get copies made.
>At the end of the criminals will just use a flatbed tow truck to steal the vehicle if it is worth enough.
Working-class teenagers in Milwaukee do not use flatbed tow trucks to steal cars.
>The "immobilizers" you're referring to are the bane of my existence from my perspective. Instead of getting cheap $5 copies of my keys made, I have to pay something like $120 to Ford to get copies made.
Ask anyone in Milwaukee whose car hasn't been stolen whether they regret having an immobilizer.
Sure, I concur that teens don't use flatbed tow trucks.
But if, as a society, the default has become that your car will be stolen, then society has collapsed. It's more like a bunch of competing warlords, with the most prominent one being the local government.
I live in Washington DC. There's so much car-related crime that that's exactly what we expect. Cars are getting harder to hotwire, so now people just carjack to get a car.
We had them for that long too, just like card readers; we just didn't have laws requiring them. No, Europe wasn't more advanced than America (and still isn't), but they did have better laws for the situation. So you can have that win, I suppose.
Well, is that a bad thing? Seriously, all they do is make it extremely expensive to get a copy of the keys if you lose one. If someone wants to steal your car, they'll do it anyway, especially with modern cars that are less secure than older ones.
There are some things you don't want to run apps on, even if they're sandboxed, which isn't the case here - I would count "computer in my car that can talk to the stuff that makes it go" as one of these things.
And yet most cars run apps. Are you arguing that those not coming from/authorized by the manufacturer are less legitimate, or should not deserve the same treatment?
That seems... incredibly obvious? If there's a software bug that is a safety risk, the regulatory agencies force the manufacturer to implement a recall at their own expense. That wouldn't be the case for third party software.
The garage can charge ridiculous amounts of money for trivial features and upgrades. Although the manufacturer may benefit from an App Store, the garage gets nothing.
Toyota, for example, charges something like $150 for a single satnav maps update, although CarPlay/Android Auto is now a standard feature and has tanked that market.
GM charges for access to key cloning and module programming/pairing. It's not overly expensive (~$40 per module), but super annoying and bug-infested (software crashing/failing/constantly updating).
> GM charges for access to key cloning and module programming/pairing
A lot of this is done entirely in security by obscurity so there might not even be any actual cryptography/authentication involved. It's purely a rent-seeking operation.
> downside of Hyundai just making all the info in this blog post public info?
All the CAN protocol data being public means there's no need for expensive documentation, proprietary scan tools or trips to the dealership for key/module programming if it's all been reverse-engineered (a lot of programming and security-related things in cars are more down to obscurity than actual public-key-cryptographic authentication). Hell, it may even remove the need for expensive modules (over 1k bucks for what is essentially a slow microcontroller on a conformal-coated board running shitty firmware) if the docs allow third-parties to reimplement module functionality on the cheap. It would also blow up any future attempts at subscription-based heated seats or A/C if the published docs make bypassing it trivial.
These articles make me think that we need EU regulations (I live in the EU) which force automakers to implement standards that allow the owner to replace the entertainment unit in the car. Cars and entertainment units should be decoupled and standardised: there should be a standardised electrical interface between the car and the entertainment unit, together with the protocol used for communication. The physical specs should also be standard, together with automated tests to verify that the car and multimedia unit work as expected.
I worked on Infotainment for a number of years with an Automotive OEM. I’ve only ever heard insiders use the term IVI, so I’m curious if the author has some exposure to the industry.
As for gaining access to the engineering, I’m interested that it was left in. We logged everything of course, but all of our engineering tools were removed during the build process. Essentially, our engineering mode was baked out.
Hacking on a car is making the CEL turn off by tricking the computer into thinking the sensors are correct, with a resistor you have wired inline, in order to keep your $1k beater from '02 on the road. Reverting your car to a state where you can maintain it by keeping up its mechanical worthiness, independent of the useless sensor after sensor that dies, whose sole purpose is to allow idiots to drive cars and not burn them down.
Modern cars are literally just hyper-optimised consumer goods. Hyper-optimised for warranty periods and for unskilled supply chains with no quality control or pride. Modern cars are literally like going to a LAN party where different people bring boxes that need to interact without failure: not happening. Hackable to the hilt, because pentesting can't be warrantied out.
Realistically speaking, if you brick the thing you put it back together and take it to the dealership and they'll swap the bricked module. Rinse and repeat if needed.
The car being new and under warranty is a major advantage in this case.
The thing that puts me off ever buying a new car - particularly an EV - is the insistence on loading it down with things like "In Vehicle Infotainment". I don't want that. I want an entirely gadget-free car.
Most development at very large companies happens by hordes of people who call themselves developers but do not care about development and barely understand anything they are doing. Most coding happens by copying and pasting from Stack* and then restarting the application hundreds of times and tinkering with the code mindlessly until it happens to work. Anything besides getting the "happy path" (sic!) to work is a secondary concern.
Add to it that most developers from Asia do not really expect or care about privacy and are super quick to drop any quality standards to meet deadlines. And the management that likewise does not value quality besides things that can be easily seen and typically does not tolerate any delays for anything that is not absolutely necessary.
I worked for a well known, huge Korean company. When I was there I learned they shipped a mass produced device with a telnet server with a simple default password. This wasn't done for any evil purpose -- the development team decided this would improve their ability to debug any production problems they might face. They were not trusting their own code and were looking for a quick and easy solution to deal with inevitable deluge of support tickets.
> most developers from Asia do not really expect or care about privacy and are super quick to drop any quality standards to meet deadlines
You started a hellish, tedious flamewar with this. That's exactly what we're trying to avoid here.
Please edit general putdowns, casual swipes, and flamebait out of your HN comments, and stick to what you can legitimately say from your own experience.
But this is my 22 years of experience in software development working for various companies from all over the world, including a number of companies in Asia.
I have experience working directly for about 30 companies and indirectly for about 50 companies.
I worked for a core security team for one of the largest phone makers from Korea. I don't want to name the company but when I say the actual privacy was not a huge concern this is actually my own experience working with them.
These are facts just like saying most Americans are overweight. You might not like it, but that does not change the fact.
Unless your 22 years of experience included observations of "most developers from Asia", then your comment went far beyond your experience in a way that was guaranteed to be provocative. It's hardly a borderline call to say that that is flamebait.
Flamebait with racial/ethnic/national generalization is just about the biggest provocation there is. It's bog-standard forum moderation to ask users not to do this. Please don't do it again.
> Add to it that most developers from Asia do not really expect or care about privacy and are super quick to drop any quality standards to meet deadlines.
Having lived and worked in South Korea for 4 years, I can't confirm this to be true. My Korean colleagues were no less diligent than the ones I now have in Germany, on average. It's true that pressure is high and projects are often run in an air of permanent crisis mode to keep the pressure up, though. If corners get cut the decision usually comes from above.
I do systems engineering on operating systems and HMI for consumer electronics. In Korea it was for smart TVs and industrial equipment control, in Germany for cars.
How is this isolated to Asia? As a developer in the USA, I’ve seen prioritization of deadlines over quality in nearly every org I’ve worked at.
We would probably all love to hand write encryption algorithms for data transfer, or beautiful animations on user interactions, or perfectly graceful fault handling, or 100% test coverage.
The reality is that kind of work takes far longer than most employers are willing to wait. I doubt heavily that it’s the engineers love of coding that is lacking, rather the employers patience.
>We would probably all love to hand write encryption algorithms for data transfer
Yikes. Funny that you mention this, because it’s precisely the kind of thing that one would expect from this sort of “talent”, and not from real engineers.
No. One does not hand-write encryption algorithms, unless they are Daniel Bernstein or similar.
The point is that the statement “Asians value privacy less” doesn’t talk about “Asians” as a racial group; rather, it merely observes that for whatever historical/cultural/sociological reasons, a particular group of people, who happen to live in a certain area of the globe, might be less predisposed to raise concerns about the privacy implications of their software engineering practices.
Thus the claim that the original comment was racist is false in my opinion.
Another argument against this claim would be the fact that Asian software engineers (here “Asian” is used in the racial sense) raised in Western countries with a strong culture of privacy, seem to be just as privacy conscious as their Caucasian counterparts.
I get your point, and I should have said "you did a prejudice there" instead of "you did a casual racism", because the OP's opinion did not express systemic oppression or a belief that (in this case) 'Asians' are inferior as a whole to other ethnicities.
Thank you for giving me the opportunity to correct myself.
That said, both the original post and several comments, including this comment's parent, are equally prejudicial: attributing qualities to large groups of people based on location or ethnicity and solely relying on anecdotal evidence as opposed to controlled and peer-reviewed study.
Case in point: are there any studies to back up the claim that 'Asian' folks raised in Western countries are more privacy conscious than their 'natively raised' counterparts?
I am not actively thinking about "asians" in a racist way. For me it is a term that describes people coming from a certain geographical background, as distinct from, say, "europeans" or "north americans" or "africans" or "indians" or "latinos".
It is a useful term because each of these groups comes from a different environment with a different history, and each is distinct enough from the others that the name is useful for describing certain things.
It does not mean that all asians or europeans are the same. Any thinking person understands that talking in generalities is useful for describing certain things, but also understands that there is much more subtlety, and there are exceptions, under every statement like that.
So we know that people from different backgrounds have different IQs, levels of education, or predispositions to different health issues. That is not racist; these are just statements of fact.
Racist is when you try to take these facts and paint them to discriminate or entice discrimination against people just for the fact they share same background.
So when I say "asians typically care less about privacy", that is a statement of fact. You have to contend with the fact that not all people around the world care about privacy in the same way, and the moment you do, you see that there are identifiable groups of people who care less.
If I somehow said this to communicate that asians are worth less as people because they care less about privacy, that would be racist. That would be a person from a western country declaring that people who do not value the same things as me are somehow worth less, which is a stupid and narrow way of thinking about the world.
When I say "asians value privacy less" and you say "this is prejudice/racism against asians", what that really reveals is your own racist thinking. You are saying that your western values are better than the values of people coming from some other region.
Part of being racist is being closed to the idea that other peoples have different values that aren't necessarily better or worse.
I foresee an attack on the anecdata element, and on the definition of "developer". I will use the most common "linkedin" definition, under which most people I have encountered support this statement.
I think it's less about the nationality of the developers themselves and more about the environment they learned in. Countries that are a common outsourcing destination foster shitty development practices and corner-cutting (it wouldn't be outsourced otherwise), so if most of the IT work in the country is working on such outsourced projects then the majority of the IT talent is likely to pick up & perpetuate bad practices.
I lived in Hong Kong for a short while before the handover to the PRC. Daily, constant references to race were common; stories about people based on race were common; social and economic analysis based on race was ordinary in casual conversation.
I lived in West Africa for 5 years and experienced the same, coming from locals: prejudice against each other's tribal origins and prejudice against foreigners.
But I fail to see how my and your experience are relevant to the way OP expressed himself re: 'Asian' behaviour?
Amazing stuff here folks - this big brained developer just generalized a literal continent of billions of individual people and is the top comment on smart people site hacker news dot com
Please don't react to a bad comment by breaking the site guidelines yourself. It just makes everything worse.
Also, while I have you: can you please not use HN primarily for political/ideological arguments? It looks like that's what your account has been doing, and it's not the intended use of the site.
https://datatracker.ietf.org/doc/draft-gutmann-testkeys/
From the point of view of m.d.s.policy, the main idea is to require CAs to reject these keys in certificates for the Web PKI, and to encourage software to use these keys (rather than their own examples) in sample code. Basically, when you build my-cool-website.example and copy-paste the private key from an example into your certificate-fetching code, the CA should say "Er, no, you need to actually choose your own private key; that's what the word private means". This should happen whether you copy-paste a highly rated Stack Overflow example, the sample code provided with the library you used, etc. The idea is that ideally these would all be the same keys.
It would also make sense for tools to care about these example keys. E.g. GitHub could flag checked-in code containing keys which are not these examples, since maybe you put real keys in your GitHub repo by mistake, while your build-for-production CI tools could reject keys which are on the list, because that means you forgot to pick actual keys for the real build.
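A minimal sketch of what such a CI-side check could look like, using only the standard library. The digest set below is a placeholder, not real data: an actual tool would populate it with hashes derived from the keys listed in draft-gutmann-testkeys.

```python
import base64
import hashlib
import re

# Placeholder blocklist: SHA-256 digests of the DER bytes of known example
# keys. The entry below is NOT a real digest from the draft; a real tool
# would generate this set from the draft's appendix.
EXAMPLE_KEY_DIGESTS = {
    "0" * 64,  # hypothetical digest of the draft's RSA test key
}

# Matches the base64 body of a PEM private-key block (e.g. "PRIVATE KEY"
# or "RSA PRIVATE KEY" headers).
PEM_RE = re.compile(
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----\n(.+?)\n-----END [A-Z ]*PRIVATE KEY-----",
    re.DOTALL,
)

def key_digest(pem_text: str) -> str:
    """SHA-256 hex digest of the DER bytes inside a PEM private-key block."""
    match = PEM_RE.search(pem_text)
    if match is None:
        raise ValueError("no PEM private key found")
    der = base64.b64decode(match.group(1))  # b64decode skips the newlines
    return hashlib.sha256(der).hexdigest()

def check_key(pem_text: str, production: bool) -> None:
    """Two-sided policy from the comment above: example keys are banned in
    production builds, while a non-example key checked into a repo is
    suspicious (it might be a real secret)."""
    is_example = key_digest(pem_text) in EXAMPLE_KEY_DIGESTS
    if production and is_example:
        raise SystemExit("refusing to build: known example key in config")
    if not production and not is_example:
        print("warning: key is not a registered example -- real secret?")
```

The same digest set serves both directions of the check, which is the point of having a single shared registry: GitHub-style scanners warn on keys *absent* from it, production CI rejects keys *present* in it.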