20GB leak of Intel data: whole Git repositories, dev tools, backdoor mentions (twitter.com)
1062 points by phoe-krk 51 days ago | hide | past | favorite | 444 comments



> If you find password protected zips in the release the password is probably either "Intel123" or "intel123". This was not set by me or my source, this is how it was aquired from Intel.

Can't say I'm surprised, people are lazy.

Another large tech company I used to work for commonly used an only-slightly more complex password. But it was never changed, so people who had left the team could still have had access to things if they knew the password. It was more of an entry point into the system than the company's Red team.


Password protection may have been used to bypass antivirus and other filters. While you should treat dumps like this with a lot of suspicion, treat password protected zips with a heaping dose of care as they may have been used to evade automated defenses.


Yes - but not for hostile purposes, but because your own company's antivirus won't let you mail an executable to a colleague.


Usually this. Or in my workplace, an image.

Antivirus software is some crazy shit that may trigger on any random action, and it teaches people to follow the most unsafe procedures without questioning them, just so they can get anything done.


I've heard it put this way: If you force users to trade convenience for security, they will find a way to obtain convenience at the expense of security.


> If you force users to trade convenience for security

I _wish_ it were better security they were making the trade for. It often isn't, though. These programs are large, expensive, and don't do much most of the time. I feel there's a perverse incentive for developers to make their AV products as noisy as possible to justify their own existence.

And yet... even with full AV rollouts locked down at the highest level, bad actors still get into networks and exploit them. So, to me it feels like our users are trading away their convenience for our misguided CYA policies.


There was that one AV with a JS interpreter running as root

https://news.ycombinator.com/item?id=22544554


The truth is, you don't need much in the way of AV software if you are willing to outright block certain types of files.

In most large corporations you are basically not allowed to send anything that could even potentially hide a virus, except maybe Office files (nobody has yet built a compelling alternative to PowerPoint and Excel).

Typical rules already block all executable binaries, scripts and password protected archives (because they could hold binaries or scripts), etc. As a Java developer I have recently discovered my company started blocking *.java files.


My guess/fear is that most AV software gets deployed because some insurance policy requires you to tick that box.


A lot of this stuff (AV software) is getting deployed at all different layers of the environment. Firewalls are getting better at dynamic file analysis and file blocking, and the endpoints are loaded with user behavior analytics, AV, and DLP tools. AV is so omnipresent because it's built into a decent number of the netsec appliances these companies stand up.


If you make it harder for people to do the right thing than the wrong thing, they will choose the wrong thing.

This has been brought up a million times in the context of DRM, but it is true in the general case as well.


I could be mistaken on this, but wasn't that basically the sales pitch for Spotify? "You'll never get rid of piracy, but you can compete with it."


This was the sales pitch for iTunes and the iTunes store:

"We approached it as 'Hey, we all love music.' Talk to the senior guys in the record companies and they all love music, too. … We love music, and there's a problem. And it's not just their problem. Stealing things is everybody's problem. We own a lot of intellectual property, and we don't like when people steal it. So people are stealing stuff and we're optimists. We believe that 80 percent of the people stealing stuff don't want to be; there’s just no legal alternative. So we said, Let's create a legal alternative to this. Everybody wins. Music companies win. The artists win. Apple wins. And the user wins because he gets a better service and doesn't have to be a thief."

https://www.esquire.com/news-politics/a11177/steve-jobs-esqu...

Another point of reference: because they had no legal ground to stand on, HBO targeted Canadian torrenters of Game of Thrones with an e-mail saying, among other things, "It's never been easier to [watch Game of Thrones legally]!"

This was true, it had never been easier. It had also never been harder. For the entire time that Game of Thrones was being aired, the only legal way for Canadians to watch it was to pay about a hundred dollars per month for cable and the cable packages that would give them HBO. You could buy it on iTunes, but only as a season, after the season was over.

So yeah, I kept torrenting it, everyone I know kept torrenting it, and everyone hated (or laughed at, or both) HBO the whole time.


Interesting that it depends so much on region.

Here in the UK, Sky offer a cheap 'over-the-top' streaming alternative to their satellite offerings, [0] so you could watch Game of Thrones for £8/month, provided you didn't mind the inferior video quality.

[0] https://en.wikipedia.org/wiki/Now_TV_(Sky)


They have a "topup" now which allows you to get real, full-fat 1080p.

Woohoo!

I did actually add that to my subscription, and during lockdown have used it to re-watch Game of Thrones :)


I gave that a go but wasn't impressed by the 1080p quality. I suspect they're using a low bitrate.


Most likely. You can get the bitrate to display (when the video controls are up, maybe?) if you want to take a look.

Between that and whatever magic my OLED tv was doing, it looked pretty good to me.

Just a shame they haven't released it all in 4K/UHD yet...


I doubt they'll offer 4K. They want to push people toward their expensive satellite packages for that.


I meant HBO! I think GoT season 1 is the only season that's had a release at that res so far.

I was really hoping to get an HDR version of "The Long Night", to address some of the banding and other visibility problems present in the episode, and maybe see a bit more of what went on. But there isn't one yet. So I watched it with the lights out so that my eyes adjusted :)

But yeah, you're probably right. Now TV has massive potential to undercut their main offering.


This was also a sales pitch for Steam – especially in developing countries where the whole concept of paying for non-physical things was a hard sell.

(Though in this case it wasn't just competition – access to official servers in online games was something that was often not pirateable.)


Not sure about Spotify, but I know Gabe Newell famously made basically this argument with regard to Steam's success.


It's true, and often it's not laziness - corporate security measures are often focused only on denying access, and they're so overbearing that, were they followed to the letter, they could easily shut the company down. It's through workarounds that actual work gets done.


Sounds like a large organizational incentive-integration failure, where sub-pieces are at odds: each cares more about dodging blame, and anything outside its domain isn't its problem. "Not my fault / not my problem" as a toxic approach makes balancing decisions worse.


I remember having issues with a corporate email system where base64/uuencoded data would fail to get through with a very rough dependency on size - large files had a smaller chance of getting through but it was clear that there wasn't a hard size limit. Eventually someone twigged that the problem was a "rude word" scanner, and that beyond a certain size you would hit the "scunthorpe" problem, and forbidden words would appear in the ASCII text randomly.
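The arithmetic checks out: the odds of a short word appearing somewhere in random base64 output grow roughly linearly with attachment size. A throwaway Python simulation (the word and the sizes here are invented for illustration):

```python
import base64
import os

def contains_word(size_bytes: int, word: str = "ass") -> bool:
    """Does a random blob of this size, base64-encoded, contain the word?"""
    blob = base64.b64encode(os.urandom(size_bytes)).decode("ascii")
    return word in blob.lower()

# A 3-letter word matches any given position with odds around (2/64)^3
# (each letter has both cases in the base64 alphabet), so a few hundred
# kilobytes of base64 makes a "rude word" hit near-certain.
for size in (1_000, 50_000, 500_000):
    hits = sum(contains_word(size) for _ in range(10))
    print(f"{size:>7} bytes: {hits}/10 random attachments contained the word")
```

Small attachments almost never trip the scanner; large ones almost always do, which matches the "rough dependency on size" described above.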


The thing is, usability is security. People will do anything to be able to do their job (because people like being able to, you know, eat and stuff). Things that stop you doing your job are bad for security.

I wish more of the security industry would get their frigging heads around this. PGP did less for messaging security over decades of availability than iMessage and Signal did in a few weeks of availability.


Antiviruses will quarantine compiler output...


This 100%. I recall many a fun night at $BIGCORP burning the midnight oil, receiving the warning emails that my "unauthorised software" had been reported to my manager, and that it had been quarantined away for my own safety and convenience. Given that $BIGCORP was a tech firm my manager would be intensely delighted that they would receive regular midnight notifications that I was doing my job. Whatever that damn thing cost it would have been cheaper to let the malware do its thing.


Windows development seems to be fun as of recently. I didn't touch it for a couple of decades.

Sometimes I think that modern Windows is a nice platform already, even comfortable. (Like, you know, C++17 is very unlike C++98.) But then I'm reminded of the necessity to run an antivirus in front of it in a corporate environment.


I intensely dislike corporate "security product" culture. For whatever reason, every IT department thinks that you have to ruin Windows with tons of invasive antivirus and monitoring software. I've seen zero evidence that these performance-killing tools are necessary. It's all theater. Microsoft itself doesn't do this shit to Windows, and neither should anyone else.


We have to have antivirus on our Linux computers for compliance.

Yes such a thing exists... https://www.mcafee.com/enterprise/en-us/products/virusscan-e...


There was a discussion in our IT Security department about how to install McAfee on CoreOS servers. (For the uninitiated, CoreOS is a Linux distribution that comes without a package manager. It's intended as a base to run containers on, so you would deploy all software via container images.)

I remember someone suggesting to put McAfee into a fully isolated container that only exposes the port where it reports compliance, allowing it to scan itself to death all day long.


There are legitimate use cases for antivirus on Linux, for instance when running mail or file servers.


Aren't those scanning for Windows viruses?


Some can be cross-platform JS exploits.


At one company, Symantec would also quarantine the compiler and build system. It certainly made builds exciting to have the antivirus playing Russian roulette with the entire toolchain.


Every time I went to configure a toolchain in JetBrains' CLion, CMake would create some test files and compile them. Windows Defender deleted every file, and even the embedded toolchain. Fun :)


Of course many places have replaced dopey AV with creepier advanced tools like ATP or CrowdStrike.


Ugh, welcome to my life.

"You must exclude our program subdirectory, because temporary files are created containing interpreted code, and your antivirus will either block them outright or lock the files so long that you get application timeouts"


Let’s call a spade a spade.

Antivirus software is malware.


In February, I e-mailed a python script to one of our developers to help debug an issue with their SSL configuration.

Two days ago, I needed the script again but couldn't find it. Went to our e-mail thread and it said "the following potentially malicious attachments were blocked", showing mine, but... even from my outgoing mailbox? That seems ridiculous and problematic, considering that it sent fine at the time.

I know that e-mail shouldn't be used as a replacement for Sharepoint or Dropbox or whatever, and I should have a local copy of what I need, but it just seems annoying and arbitrary.

Anyway, I just logged into Outlook Web and downloaded it from the message there. Problem solved.


This has happened to me with gmail. Zipfiles I had sent in the past are no longer allowed to be downloaded from my sent items folder through the standard interface.


If I had to deploy AV for mail, I would absolutely scan outgoing mail as well. Imagine if some compromised mail account in my org sends malware to accounts in other companies. These companies could then sue my company for negligence if they can show that we did not scan our mail for viruses on outbound (which could potentially be done by examining mail headers).

(I am not a lawyer.)


Your company's antivirus, or Gmail. A binary? A zip with a binary? Nuh-uh.


To be fair, emailing binaries (apart from known types such as images, PDFs, etc.) is a rare enough use case for legitimate purposes and an easy enough way of spamming malware to clueless random people that it's probably a reasonable default for gmail.

Having an option to allow them might be okay though. (I barely use gmail so I don't know if it has one or not.)


Ah you must be young...


For not using Gmail? They hooked me in school.


For not sending binaries by email - there is no shame to being young in this case as it means never developing the bad habits.

Before Dropbox and similar tools it was far more the norm, and various file-sharing systems like SharePoint may wind up not actually being used. Non-technical people in companies do this all the time, and practically use email as an ersatz version control system, to the cringe of IT.


This WebRTC p2p file transfer has been a revelation for me. https://news.ycombinator.com/item?id=23023675


We "thankfully" have shared folders we can use to drop stuff to specific users.

But most of our software lives on a RDP server anyways.


He means there used to be a time when people would mail binaries to each other more often, before they got too big and DRM'ed for that.


There was also a time when alt.binaries was a thing (technically not email, but usenet is pretty similar)


I use vmdks.

Seriously, I don't know how long it'll last, but a zip file inside a FAT32 disk image in a vmdk got through just fine.

The bonus is that 7-Zip can extract from vmdks.


We just rename our files with .novirus on the end. I assume the main point is to stop executables from outside running with a click, or internal forwards of the same by compromised users which is why it's so easy to bypass.


Shouldn't you put it in, e.g., Artifactory or a code repo?


Yes. Whenever I email or transfer a zip, via any method really, I always put a basic password on it.

I've been bitten way too many times by dumb filters that pick some file out of the zip and declare that it is malicious. I also don't trust messenger apps to not pull my files out and do who knows what with them. A basic password prevents this junk 99% of the time for almost no effort.

It won't stop a determined system from cracking the password. But that isn't what I'm trying to defend against.


Gmail doesn't seem to like archives it can't open :/


Ah, the halcyon days of merely changing the file extension from .exe to .txt...


This brings back happy memories of a college (senior high for the Americans in the audience) computing teacher finding a friend and I had been writing irritating malware instead of doing actual work, and his only comment being “if you’re going to email that to yourself change the extension so it doesn’t get flagged for IT support”.


Gmail won't even let you send a JAR file, or a zip you made out of a project where it happens to be a .jar file somewhere deep in some random subdirectory.


IIRC, you can do it by embedding the content in an Office file, which is itself a zip file.


I left Intel a couple of years ago; that's exactly what the passwords were used for. It was pretty annoying to try to send files, and putting them in an encrypted archive was the most convenient method.

It was not just for binaries but for scripts, HTML, etc.


That's an excellent point I wouldn't have considered. I have no intention of looking at the dump anyway, but thanks for the warning.


I think the proper term is honeypotting.


Commonly password protected zips are used to bypass security systems that block all zips with exes in them.

I doubt the encryption was believed to be a security barrier.
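As a toy illustration of what such filters can and can't see: zip member names live in the archive's unencrypted central directory, so a name-based check works even on password-protected archives, while the file contents stay opaque to signature scanners. A minimal sketch of the name-based check (file names invented):

```python
import io
import zipfile

def looks_risky(zip_bytes: bytes) -> bool:
    """Flag archives containing executable-looking member names.

    Member names sit in the zip's (unencrypted) central directory, so
    this check works even on password-protected archives; only the file
    contents are hidden from scanners.
    """
    banned = (".exe", ".dll", ".js", ".vbs", ".bat")
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return any(name.lower().endswith(banned) for name in zf.namelist())

# Build a demo archive in memory with a fake executable inside.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docs/readme.txt", b"hello")
    zf.writestr("tools/update.exe", b"MZ fake payload")

print(looks_risky(buf.getvalue()))  # True: the .exe name is visible
```

Which is why a password defeats content scanning but not a blanket "block all password-protected archives" or "block zips containing .exe names" rule.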


I was an admin for a medium-sized company and handled their websites. Almost all of them (about a dozen or so) were hosted on GoDaddy. Plus they had about two dozen reserved domains they were sitting on, like www.yourcompanysucks.com and others.

I left the company 5 years ago. Just checked the login to see if it still worked.

Yeap.

Any disgruntled employee could change the password, lock them out of all of their sites (including several e-commerce sites that account for a large chunk of revenue) and then, if they really wanted to, delete all of them.

I remember talking to the main network guy about backups when a lot of the ransomware stuff was making the rounds. The big, really big stuff on their network (mostly ERP stuff) was backed up in two or three places. Their web stuff? Yeah. . . NOPE.

Pretty scary how lazy people are about stuff like that.


I wonder if malware should just grep for "pw:" or "password:" and then try the string it finds against anything encrypted. Or forward it to the control center.

Also the contents of files like password[s].txt
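Some malware families reportedly do something along these lines already. A hypothetical sketch of how trivially such hints could be harvested (the pattern and the sample message are invented):

```python
import re

# Matches things like "pw: Intel123", "Password = hunter2", "passwd:s3cret"
PATTERN = re.compile(r"\b(?:pw|passwd|password)\s*[:=]\s*(\S+)", re.IGNORECASE)

def candidate_passwords(text: str) -> list[str]:
    """Harvest strings that look like passwords disclosed alongside a file."""
    return PATTERN.findall(text)

mail = "Hi,\nthe archive is attached.\npw: Intel123\nRegards"
print(candidate_passwords(mail))  # ['Intel123']
```

Each harvested candidate could then be tried against any encrypted archive found nearby, which is why "password in the same email as the zip" offers essentially no protection against a targeted tool.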


I worked for a company that made servers. In the on board management system's source code I remember seeing "base64 encryption". I think they removed it by the time I left, but still.


A company I know insists on rotating passwords fairly often. Everybody just increments the number at the end of their favourite password, e.g. intel1255.
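Given one leaked password in that scheme, the successors are a handful of guesses. A hypothetical sketch (the seed value is just the example above):

```python
import re

def rotation_guesses(leaked: str, count: int = 5) -> list[str]:
    """Guess likely successors of a password that ends in digits."""
    m = re.fullmatch(r"(.*?)(\d+)", leaked)
    if m is None:
        return []
    stem, digits = m.groups()
    width = len(digits)  # preserve zero-padding, e.g. "pass007" -> "pass008"
    return [f"{stem}{int(digits) + i:0{width}d}" for i in range(1, count + 1)]

print(rotation_guesses("intel1255"))
# ['intel1256', 'intel1257', 'intel1258', 'intel1259', 'intel1260']
```

An attacker with one old password effectively gets all the future ones for free, which is exactly the failure mode password-spraying presentations warn about.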


I once worked at a place that required passwords to be changed every month and contain at least one upper and lower case letter, digit, and punctuation, and not match any previous password.

So the password for August, 2020 would be “August, 2020”.


This is super common, to the point where Microsoft used a similar password scheme as an example when talking about password-spraying attacks at an RSA conference presentation:

https://www.zdnet.com/article/microsoft-99-9-of-compromised-...

It's why I'm advocating within my organisation to get rid of password expiration and enforce 2FA for clients, but there's a lot of inertia to push against with some of them. At least uptake of 2FA is consistently increasing.


If you need backup, NIST standards agree with you.

Scheduled password expiration weakens security by encouraging users to make predictable passwords, and by entrenching password resets as a routine and unscrutinized process.


Many DoD websites are the same. It's so annoying. I use a password manager at home but at work I don't have that luxury (installable software is tightly controlled and very limited).


Where I work they use a password filter to stop you from doing that...

But it doesn't stop you from spelling out the numbers instead, plus that makes your PW longer


In my experience this is pretty standard across the industry.


I use the month and year instead


Also, the passwords are listed in docs that appear to be alongside the encrypted files. That's a bit like leaving the keys to your house _on top_ of your front doormat.


It's kinda like hiring a security guard for insurance purposes, even though they have strict instructions to never do anything, under any circumstances, other than call emergency services.


To be fair, having someone aware and around to watch and phone emergency services has its uses.


> It's kinda like hiring a security guard for insurance purposes, even though they have strict instructions to never do anything, under any circumstances, other than call emergency services.

I see you've worked in retail.


The shared stupid passwords like this that I've seen/had to use in my career would utterly shock you. Like hunter2 levels of shock.


  > Like ******* levels of shock.
What do you mean by 7-star levels?


This joke never gets old


The people that get bash.org jokes in contrast... :)


No one who knows what they're doing uses zip passwords as security. The passwords are probably there for other reasons.


Another password is "I accept" (based on the leaker's Twitter messages).


At my first job they used a similar password as their go-to "temporary" password for users, etc. I found later, when I got to work with the users, that they rarely changed this password even when "forced" to, and in many cases had it up on post-its next to their monitor.


> and in many cases had it up on post-its next to their monitor

These days a post it is probably the best way to secure your password.

99.9999999% of password hacks come over the wire now, from people in other cities, states, or nations. If someone is in your building, in front of the computer, even without the post-it, you're probably toast.


A post-it is not a good way to secure your office's generic temporary password.


> Another large tech company I used to work for commonly used an only-slightly more complex password

I know a brand-name healthcare company that uses Passw0rd for its internal WiFi, which is easily reachable from an interstate rest area.


I knew one company that used the same password for the BIOS as for the wifi.


Some people/companies think that if you are behind VPN you can use simple and obvious passwords.


At a previous workplace we had a few places in the code which used the word backdoor. It was not an actual backdoor, though, but merely a debugging server that could be enabled and allowed you to inspect internal state during runtime. At some point I removed the word backdoor, fearing it would get to a customer, or that during an audit someone would misunderstand. :|


Once I got a complaint from a security auditor that some code was using MD5. It wasn't being used for any security purpose, just to check whether an autogenerated file had been manually edited. We decided it was easier to do what they wanted than argue with them, so we replaced it with CRC32C. That would have been faster than MD5, but nobody cares about saving a few milliseconds off reading a configuration file at startup. It would have made the manual edit check somewhat less reliable, but probably not by much in practice. But the security auditor was happy we'd stopped using MD5.
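For context, the edit check in question is just a digest comparison, so any stable checksum works. A sketch of both variants (file contents invented; note that the stdlib's `zlib.crc32` is plain CRC-32, not CRC32C, which would need a third-party package such as `google-crc32c`):

```python
import hashlib
import zlib

def md5_digest(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def crc32_digest(data: bytes) -> str:
    # Plain CRC-32 from the stdlib; only 32 bits, so far more collision-prone
    # than MD5's 128 bits, but fine for catching accidental edits.
    return f"{zlib.crc32(data) & 0xFFFFFFFF:08x}"

generated = b"# auto-generated, do not edit\ntimeout = 30\n"
stored = md5_digest(generated)            # recorded when the file was written

loaded = generated.replace(b"30", b"60")  # someone edited it by hand
print("manually edited:", md5_digest(loaded) != stored)  # True
```

Either digest catches a hand edit; the only real difference is collision resistance, which this use case barely needs.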


You don't actually need to listen to auditors. People like you (who can't be bothered to argue because it's apparently too hard) are the reason that smartass is still selling their services.


You either have way more grit at arguing than most people or you haven't worked at a large and cumbersome organization.

I know most people at those kinds of organizations just don't have the grit to fight every one of those battles all over again, and choose to do the things they can affect with reasonable effort instead.

I'm not saying that grit would be a bad thing to have. I appreciate the people who do it. But you really can't know what kinds of situations the parent commenter was in, and sometimes you can't really expect everyone to want to fight it.


I agree with your sentiment in general, but this is telling a dumbass where to go.

It's not a hard argument to win. MD5 here is fine; it's not a security check.


Sometimes the point isn't technical, but social. So MD5 isn't used for security purposes right now. At some point someone will want a hashing function, and they'll probably look at what the code already uses. The last thing you want is someone a bit clueless going "it was good enough there, it's good enough here" and using MD5 where they shouldn't. Removing it from a codebase helps with that problem.

The problem here is that people assume they know every possible reason why the auditor might ask for something, when they don't. If the auditor is asking for it, and it costs almost nothing to do, maybe just do it instead of wasting everyone's time by acting like you know the totality of the subject, and everyone will probably go home happier at the end of the day.


Isn't that what code review is for? To me that sounds like arguing against string formatting because someone could think it's ok for SQL queries.

An auditor's job doesn't end at saying what things should be changed, it should include why as well (granted, we don't know the full content of the auditor's report here, maybe they did say why).


Code reviews are good checks. Making it more difficult for dumb ideas to show up in a code review and possibly be missed is also good.

If using md5 had any real benefit I'd say leave it, but what are you gaining?


because CRC is actually worse for checking file content collisions (not that MD5 is perfect either).
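The gap is dramatic: with only 32 bits of output, CRC-32 produces accidental collisions after roughly 2^16 random inputs (the birthday bound), something you can demonstrate in seconds, whereas MD5's 128 bits essentially never collide by accident (its weakness is engineered collisions). A quick sketch:

```python
import os
import zlib

def find_crc32_collision(max_tries: int = 400_000):
    """Birthday-search random 8-byte strings until two share a CRC-32.

    With a 32-bit checksum, ~77k random inputs give even odds of a
    collision, so 400k tries succeeds with near certainty.
    """
    seen: dict[int, bytes] = {}
    for _ in range(max_tries):
        s = os.urandom(8)
        c = zlib.crc32(s)
        if c in seen and seen[c] != s:
            return seen[c], s
        seen[c] = s
    return None

a, b = find_crc32_collision()
print(a.hex(), b.hex(), hex(zlib.crc32(a)))  # two distinct inputs, one checksum
```

Running the same search against MD5 would never terminate in practice, which is the sense in which CRC is "worse" for collision checking.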


> because CRC is actually worse for checking file content collisions

So use SHA-1 or SHA-2 or SHA-3 or if you really hate NIST standards for some reason then CubeHash or Skein or Blake2 or ...


The reason CRC32C was chosen as a replacement instead of SHA-2 or the like: what happens if, in a few more years, SHA-2 isn't considered secure any more and some future security audit demands it be changed again? A CRC algorithm isn't usually used for security purposes, so a security audit is far less likely to pay any attention to it. The whole issue started because a security-related technology was used for a non-security purpose.


> what happens if in a few more years, SHA-2 isn’t considered secure any more and some future security audit demands it be changed again

Then change it again? If you use the most recent available NIST standard it should hopefully be a very long time before meaningful (let alone practical) attacks materialize (if ever). If you end up needing to worry about that in a security audit, consider it a badge of success that your software is still in active use after so many years.

Using an insecure hashing algorithm without a clear and direct need is a bad idea. It introduces the potential for future security problems if the function or resulting hash value is ever used in some unforeseen way by someone who doesn't know better or doesn't think to check. Unless the efficiency gains are truly warranted (e.g. a hash map implementation, or high-throughput integrity checking), it's just not worth it.

> a security-related technology was used for a non-security purpose

I would suggest treating all integrity checks as security-related by default since they have a tendency to end up being used that way. (Plus crypto libraries are readily available, free, well tested, generally prioritize stability, and are often highly optimized for the intended domain. Why would you want to avoid such code?)


Are you seriously suggesting sha-1 as a good replacement to md5... for security reasons?


Ahh poop, looks like I was out of date. Apparently a practical attack with complexity ~2^60 was recently demonstrated against legacy GPG (the v1.4 defaults) for less than $50k USD. [1] That being said, it still required ~2 months and ~900 GPUs, versus MD5 at 2^18 (less than a second on a single commodity desktop processor).

So yeah, I agree, add SHA-1 to the list of algorithms to reflexively avoid for any and all purposes unless you have a _really_ good reason to use it.

[1] https://www.schneier.com/blog/archives/2020/01/new_sha-1_att...


If someone is dumb enough to add it, someone is dumb enough to let it through code review.


Code reviews miss things.

(1) Code is in part a communication medium. This says "We use MD5."

(2) Code changes. If someone sees something cryptohashed, they may use it differently in 5 years.


The reason they ask is that they have to fill in a checkbox that says "no MD5", and of course they don't know that CRC32 is worse.

And to be very fair, a lot of security issues would be caught with basic checkbox ticking. Are you using a salted password hashing function instead of storing passwords in plaintext? Are you using a firewall? Do you follow the principles of least privilege?


Except this is not for password hashing.

Why is this so difficult to grasp?


What is this weird incessant need to play devil's advocate?

Sometimes people are just right.


Because most times people aren't "just right", they're just unwilling to widen their point of view, and/or they turn the issue into a way to assert their own importance and intellect over someone else at the expense of those they work with.

I don't need some coworker getting into some drawn out battle about how MD5 is fine to use when we can just use SHA (or CRC32C, as that person did, which is more obviously non-useful in security contexts) and be done in 30 minutes. The auditor is there to do their job, and if what they request is not extremely invasive or problematic for the project, implementing those suggestions is your job; arguing over pointless things is not something I want in a coworker or someone I manage.


> they turn the issue into a way to assert their own importance and intellect over someone else at the expense of those they work with.

This is exactly what the auditor is doing.

How can you not see the irony here?

> I don't need some coworker getting into some drawn out battle

This isn't a drawn out battle. This is a really fast one: MD5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?

What's fucking hard about that?

Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?


> This is exactly what the auditor is doing.

The auditor was asked to do it and is being paid to do it. Presumably, the people arguing are paid to implement the will of those that pay them. At some point people need to stop arguing and do what they're paid to do or quit. Doing this over wanting to use MD5 seems a pretty poor choice of a hill to die on.

> This is a really fast one: MD5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?

There are items like this all throughout life. Sure, maybe you can be trusted to drive above the speed limit on this road, and maybe the speed limit is set a little low. But we have laws for a reason, and at some point your letting the officials know that the speed limit is too low and they really don't need to make it that low goes from helpful to annoying everyone around you.

> Whats fucking hard about that?

Indeed, what is so hard about just accepting that while you're technically correct that MD5 isn't a problem, you're making yourself a problem when you fight stupid battles nobody but you cares about, but everyone has to deal with?

> Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?

Hardly. Pompous blowhards exist in every culture. Also, that's hilarious. You're talking about a culture that rebels against authority just because they think that's what they're supposed to do, even if it's for stupid reasons and makes no sense. See the tens of millions of us that refuse to wear masks because it "infringes on our freedom".


> do what they're paid to do or quit.

I'm paid to tell idiots where to go. My boss doesn't pay me six figures to toe the line and fill in boxes. She pays me to use my judgement to move the company forward. I'm not wasting my time and her money on this sort of garbage, and if they can't see the difference between casual use and secure use then we need to rethink our relationship with this company, or they need to send us someone new.

> You're talking about a culture that rebels against authority

You just used the line "do what you're told or quit".

The cognitive dissonance here is unreal.


> I'm not wasting my time and her money

I've very specifically couched all my recommendations for cases where the change is trivial to make. Arguing about it with someone instead of just doing it, when doing it may have some benefits and costs only a few minutes, is definitely wasting her time and money.

> You just used the line "do what you're told or quit".

I noted what I wished people would do in very specific cases where they're wasting way too much time and effort to win a stupid argument rather than make a small change of dubious, though possibly nonzero, positive security impact.

I don't see anything weird about acknowledging some of the extreme traits of the culture I live in while also wishing they would change, at least in specific cases where I think they do more harm than good.

Honestly, I'm confused why you would even make some cognitive leap that since I live in an area with a specific culture I must act in the manner I described that culture, especially when I did it in a denigrating way. I guess you think all Americans must be the same? That doesn't seem a useful way to interact with people.


As a technical choice, that's true. So the argument shouldn't be hard to win, assuming you're dealing with reasonable people, who are also answering to reasonable people. Those people (e.g. the leadership) also need to care enough about that detail to just not dismiss your argument because making the change is not a problem for them. And they need to not be so security-oriented (in a naive way) as to consider a "safer" choice always a better one regardless of whether there's a reasonable argument for it or not.

That's more assumptions than it is sometimes reasonable to make.

"You don’t actually need to listen to auditors" is decidedly not true for a lot of people in a lot of situations, and arguing even for technically valid or reasonable things is an endurance sport in some organizations.

I mean, I even kind of want to agree with heavenlyblue's argument that you should fight that fight for the exact reason they're saying, and can see myself arguing the same thing years ago, but at least in case of some organizations, blaming people for taking skissane's stance would be disproportionate.


Oh sorry, I thought we were discussing working with rational people.

If you're working with irrational people you're going to have to do irrational things, but that's kind of a given isn't it? We don't really need to discuss that.


Not hard to win if everyone is being reasonable. Given an auditor that thinks all uses of MD5 are proscribed, what would you put the odds of them being reasonable at?

ETA: per 'kbenson it's not hard to conceive of a situation where proscribing MD5 is reasonable. Taking 'skissane's account at face value is probably reasonable, but my implicit assumption that the auditor would not explain if pressed isn't being charitable.


indeed

Especially with the audit/pen test theatre, where they have to put something in the report; otherwise, why are they getting paid £20K for two days' work?

So most people choose the path of least resistance when it doesn't matter much, so that you can fight where it does.


Picking your battles is something we all need to learn to do.

I for one like to pick the easy wins, like this.


In a large company you have to choose your battles.


I do it for the sake of educating our management.

For 10 years now, I have refused to acknowledge the finding from the consulting company that flags the password scheme I use (passphrases), because the norm they apply (a national one) talks about caps, symbols, etc.

I refuse to sign off, and note that our company is a scientific one and that, unlike the auditors, we understand math taught to 16-year-olds.

This goes to the board, who gets back to me; I still refuse on ethical grounds and we finally pass.

This is sad that some auditors are stupid when some other are fantastic and that you depend on which one you get assigned.

A good read: https://serverfault.com/q/293217/78319


Sometimes customers demand security audits as part of sales contracts. If it is a high enough value deal, the company may decide it is in their business best interest to say yes. In that scenario, not listening to the security auditor is not a viable option. You need to keep them onside to keep the customer onside.

Similarly, sometimes in order to sell products to government agencies you need to get security audits done. In that scenario, you have to listen to the security auditor and keep them onside, because if you don't keep them happy your ability to sell the product to the government is impeded.


I have a feeling that these auditor people just make up bullshit when they can't find something real. The last few we have got have come up with total non issues marked as severe because they are easy to "exploit".

Meanwhile I have been finding and fixing real security issues regularly. To be fair it would be extremely difficult for an external person to find issues in the limited time they have so the audit comes down to someone running through a list of premade checks to see if they find anything.


One thing I learned when I worked in internal IT security when dealing with auditors was that they will boil the ocean to find an issue, so never be perfect and leave a few relatively easy but not obvious to spot issues for them to write up that don't actually affect the security of your environment. If you don't leave them this bait, they will spend weeks to find a trivial issue (like using MD5 to check for config file changes vs password hashing) and turn it into a massive issue they won't budge on.

The other issue is that if you make it seem too easy to answer their questions or provide reports, they will only ask more questions or demand more reports so even if its just dumping a list of users into a CSV file for them to review, make it seem like way more effort than it actually is otherwise you might find you've been forced into a massive amount of busy work while they continue to boil the ocean.


Smart auditors ask for all items at the beginning of the audit. Smart IT people give them all items at the end of the audit. Auditors have only a limited time budget. The later they get answers, the less time for them is left for follow-up questions.


3D chess! I agree sometimes it feels as if the security review questions are just set-ups for follow-ups that they didn’t include in the initial form (for whatever reason)


I've had audits like that, many are just for CYA and I'm often the dev patching obscure (or not so obscure) security issues.

Honestly, I'm quite happy to have an auditor nitpick a few non-issues if the alternative is risking releasing an app that has a basic sql injection attack that wiggled past code review due to code complexity.

I've also had an external audit that found an unreported security issue in a new part of a widely used framework, so there are auditors out there that do a good job of finding legitimate things.


Some years ago I worked at $BIGBANK, and an auditor from $GOVERNMENT told us to change the street name field from a text field to a dropdown (for all countries) to help them with fraud detection, and to remove all diacritic characters from client names because their new software didn't like them.

I told my manager that they were idiots and that I wouldn't listen to them. He was like "OK, as I expected" and never did anything about it; the next auditors didn't mention it.


This makes me wonder about the reliability of address verification technology.

There are plenty of addresses where the official version in databases is slightly off from what people actually write on their mail. If I got a credit card transaction with the "official" version, that would be a significant fraud signal, that they were sourcing bogus data from somewhere.


So much this. My company just got done shelling out a ton of money for some asshat to tell me that we can't use http on a dev server. <head smashes through desk>


I mean, I mandate https in dev, but it sure isn't for security. It's so that auth works in dev and no changes are required to push to prod.


or someone doesn't accidentally just entirely overlook it and wind up with just http in prod.


I actually think that's valid. Sure, http on a dev machine isn't a security risk. But there is a tail risk that it ends up somewhere on a system that sends data between machines. Also, using http on dev and https on prod can lead to unexpected bugs. Banning http is not unreasonable.

Same with the md5 complaint. That use of md5 wasn't a problem but there's a perfectly fine alternative and if you can ensure by automated tests that md5 is used nowhere, you also can guarantee that it's never used in a security relevant context.


> and if you can ensure by automated tests that md5 is used nowhere

You can automatically check for the string "md5" in identifiers, but you can't reliably automatically check for implementations of the MD5 algorithm. All it takes is for someone to copy-paste an implementation of MD5 and rename it to "MyChecksumAlgorithm" and suddenly very few (if any) security scanning tools are going to be smart enough to find it.

(Foolproof detection of what algorithms a program contains is equivalent to the halting problem and hence undecidable, although as with every other undecidable problem, there can exist fallible algorithms capable of solving some instances but not others.)
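To make the limitation concrete, here is a minimal sketch of the kind of string-based scan such a tool might run (the pattern and names are illustrative, not any real scanner's implementation). It catches honest uses of the name, and is completely blind to a renamed copy of the algorithm:

```python
import re

# Naive audit-style scan: flags source text whose identifiers mention MD5.
# It cannot recognize the MD5 *algorithm*, only the string "md5".
MD5_PATTERN = re.compile(r"\bmd5\b", re.IGNORECASE)

def flag_md5_mentions(source: str) -> bool:
    """Return True if the source text textually mentions MD5."""
    return bool(MD5_PATTERN.search(source))

honest = "digest = hashlib.md5(data).hexdigest()"
renamed = "digest = my_checksum_algorithm(data)"  # MD5 inside, invisible to the scan

print(flag_md5_mentions(honest))   # True
print(flag_md5_mentions(renamed))  # False
```

This is exactly why such scans are useful as a cheap guardrail against accidental reintroduction, but worthless as proof that the algorithm is absent.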


It's worse when the asshat convinces your manager that every internal site, whether dev or not needs https. Certs everywhere. Our team spends a decent % of our time generating and managing certs...


That’s me. I’m that asshat. It’s called defense in depth. I recommend automating certificate issuance and renewal. It’s totally worth it.


Book or tutorial recommendations please.



How could that work? For security, an internal site lacks a connection to the internet.


Are you talking about a fully internal site, with not even indirect Internet access? For those kinds of airgapped applications, you should maintain your own CA infrastructure, and update all clients/browsers to trust its certificates.

For the more common scenario of internal sites/services which are not accessible from the public Internet, but not fully isolated from it either:

You don't need the internal site exposed to the Internet. If you use DNS-01 ACME challenge, you just need to be able to inject TXT records into your DNS. Some DNS providers have a REST API which can make this easier.

Another option – to use HTTP-01 ACME challenge, you do need the internal host name to be publicly accessible over HTTP, but that doesn't mean the real internal service has to be. You could simply have your load balancer/DNS set up so external traffic to STAR.internal.example.com:80 gets sent to certservice.example.com which serves up the HTTP-01 challenge for that name. Whereas, internal users going to STAR.internal.example.com talk to the real internal service. (There are various ways to implement this – split horizon DNS, some places have separate external and internal load balancers that can be configured differently, etc)

Yet another option is to use ACME with wildcard certs (which needs DNS-01 challenge). Get a cert via ACME for STAR.internal.example.com and then all internal services use that. That is potentially less secure, in that lots of internal services may all end up using the same private key. One approach is that the public wildcard cert is on a load balancer, and then that load balancer talks to internal services – end-to-end TLS can be provided by an internal CA, and you have to put the internal CA cert in the trust store of your various components, but at least you don't have the added hassle of having to put it in your internal user's browser/OS trust stores.

(In above, for STAR read an asterisk – HN wants to interpret asterisks as formatting and I don't know how to escape them.)
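For reference, the value you inject into the TXT record for a DNS-01 challenge is defined by the ACME spec (RFC 8555): the base64url-encoded SHA-256 of the challenge token joined to your account key thumbprint with a dot. A small sketch (the token and thumbprint below are made-up placeholders):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Compute the TXT record value for an ACME DNS-01 challenge (RFC 8555):
    base64url(SHA-256(token || '.' || account key thumbprint)), unpadded."""
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Publish the result (e.g. via your DNS provider's REST API) at:
#   _acme-challenge.internal.example.com  TXT  <value>
print(dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"))
```

In practice an ACME client (certbot, lego, acme.sh) does this for you; the point is only that nothing about the challenge requires the internal host itself to be reachable, just the DNS zone.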


Internal sites? Set up your own CA infrastructure.


I rather spend my limited time working on other security issues.


Or (gasp) shippable features.


Which means if someone gets access to the internal network, they can read all traffic. And even dev systems can send confidential data. With letsencrypt and easy to generate certificates, https everywhere is very reasonable.


Not quite so crazy now that everyone's working from home, right? Unless you also use a VPN?


Even with VPN. I don't want any person on the vpn to be potentially able to read traffic between internal services. I think that would fail many audits.


It does though. There's no excuse for unencrypted traffic. Google doesn't have some VPN with squishy unencrypted traffic inside. Everything is just HTTPS. If they can do it, so can you. It's just not that hard to manage a PKI.


Does your organization disable the "Non-secure" prompt in the browser as well? If not, I'd say that it does seem like a security risk to train your users to ignore browser warnings like that.


I am confused. Isn't that easily automated?


It's not easily automated. Somehow, you have to safely get a certificate across the air gap to the internal network.

So I guess an internet-connected system grabs the certificates, then they get burned to DVD-R, then... a robot moves the DVD-R to the internal network? It's not easy. It's all much worse if the networks aren't physically adjacent. One could be behind a bunch of armed guards and interlocking doors.


An airgapped network can include its own internal CA, and all the airgapped clients can have that internal CA's certificate injected into their trust stores, and all the services on the airgapped network can automatically request certificates from the internal CA – which can even be done using the same protocol which Let's Encrypt uses, ACME, just running it over a private airgapped network instead of over the public Internet.


We have a ton of internal stuff, most of it doesn’t even have external DNS. We use long lived certs signed with our own CA. we’d prefer using and automated solution, using a “real” CA, but non seems to be available.


I was on the receiving end of a security audit issue. I closed the bug as won't fix, my lead approved it, but when the team who paid the security auditor found out, they demanded I fix it. I had to argue with IT, infosec, and the auditor. Nobody really cared what I did, they just wanted to follow the rules. After a month of weekly hour-long meetings I relented and changed the code.

You're often not arguing with the auditor; you're arguing with the person who paid the security auditor in the first place, who is likely not even technical. That's a battle you will likely never win.


To add to this, often the primary goal of the person who paid the security auditor is not to actually increase security. It is to get to claim that they did their due diligence when something does happen. Any arguments with the auditor, no matter how well founded, will weaken that claim.


Depends I suppose. When your CFO tells you to fix it so you're in compliance, your opinion doesn't matter a whole lot. Never mind if it is a government auditor or their fun social counterpart the site visitor.

I once got cited for having too many off-site backups. They were all physically secure (fire proof safes or bank lock box), but the site visitor thought onsite was fine for a research program. The site visitor's home site lost all its data in a flood.


You don’t actually need to listen to auditors.

At my company, that's a one-way ticket to the unemployment line.


Sometimes an inexperienced auditor will show a minor finding that is a sign of a bigger issue. For example, if Windows is in FIPS mode, some MD5 functions will be disabled.

If you need to be operating in FIPS 140 mode, that may be a problem of some consequence.


In some companies, you do. Medical certifications require regular audits, and failing an audit is _not_ good.


And it's good! Code Reviews can't surface all issues. Independent audits should be welcomed by developers to find more bugs and potential security risks (even though I'm a bigger fan of penetration tests instead of audits).


When you're trying to keep a company of 100,000 employees secure, you can't have an approach that says "let's figure out where we need to remove MD5 and remove it." You have to set an easy to understand, consistent guideline -- "tear out the MD5" -- so that there won't be any doubt as to whether it's done, some teams won't complain that they shouldn't have to change it because some other team didn't have to change it, etc. And then every time they do a security audit the same thing will come up and cause more pointless discussion.

In isolation it looks like wasted work but in terms of organizational behavior it is actually the easiest way.


Happened to me as well. Was writing an authentication service. We thought we were paying for an actual security audit; turns out we paid for a simple word scan of our codebase. The review didn't find any of the canaries we left in the codebase, and we could never argue back with them. Big waste of money.


Huh. I'm thinking it'd be fun to write code with know issues (with varying degrees of obviousness) and hire a bunch of different "auditing companies" to see which ones pick up on that.

Publish the result for market comparison's sake.

Then again, that requires plenty of money and I can't see how to monetize that in any way.


Not only that, but MD5 still doesn't have an effective preimage attack, so it is still good enough for things like hashing passwords or checking that someone else didn't tamper with your files.

Still, when it comes to security:

- MD5 is actually too fast for hashing passwords, but there is still no better way than brute force if you want to crack md5-hashed, salted passwords.

- Even if there is no effective preimage attack now, it is still not a good idea to use an algorithm with known weaknesses, especially if something better is available.

What MD5 is useless for is digital signature. Anyone can produce two different documents with the same MD5.
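The "too fast" point above is easy to demonstrate: a general-purpose hash like MD5 finishes in microseconds, while a purpose-built password KDF is deliberately expensive per guess. A sketch using Python's standard library (PBKDF2 here as the slow example; scrypt or Argon2 would illustrate the same point):

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)

# Fast general-purpose hash: fine for checksums, far too fast for passwords --
# an attacker can try enormous numbers of candidates per second on a GPU.
t0 = time.perf_counter()
md5_digest = hashlib.md5(salt + password).hexdigest()
md5_time = time.perf_counter() - t0

# A deliberately slow, salted KDF: the iteration count makes every guess
# expensive for the attacker as well as for us.
t0 = time.perf_counter()
kdf_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000).hex()
kdf_time = time.perf_counter() - t0

print(f"md5:    {md5_time * 1000:.3f} ms")
print(f"pbkdf2: {kdf_time * 1000:.1f} ms")  # orders of magnitude slower per guess
```

The iteration count (600,000) is an assumption for illustration; the right value is whatever makes one hash take a tolerable fraction of a second on your hardware.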


Funny: we had the exact same thing from a pen tester. I think we replaced it with SHA256, though.


this is hilarious


Makes perfect sense.

Defense in depth, if you can grep the source code and not find any references to md5, then you have quickly verified that the code probably doesn't use md5.

This you can easily verify again later, you can even make a test for it :)

Even if in practice this had no impact, removing md5 usage, will make it harder to accidentally introduce it in the future.
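The "make a test for it" idea is cheap to implement. A minimal sketch (file layout and names are hypothetical) that a CI suite could run to keep MD5 from being reintroduced, with the usual caveat that it only catches honest, textual uses of the name:

```python
from pathlib import Path

def md5_mentions(root: Path) -> list:
    """Return files under `root` whose text mentions 'md5'.
    A cheap regression guard, not a proof the algorithm is absent."""
    hits = []
    for path in root.rglob("*.py"):
        if "md5" in path.read_text(errors="ignore").lower():
            hits.append(path)
    return hits

# In a test suite this becomes a one-line check, e.g.:
#     assert md5_mentions(Path("src")) == []
```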


The issue is not md5. The issue one wants to detect is weak hash functions used in cases where they're not appropriate. The fact that crc32 passed means that any obscure hash function would have passed too, even if it had been used in a context where it isn't appropriate.

All it means is that the audit is superficial and doesn't catch the error category, just famous examples within that category. That kind of superficial scanning may be worth something when unleashed on security-naive developers, or even as optional input for more experienced ones. But "hard compliance rules" and "superficial scans" combine to create a lot of busywork, which makes people less motivated to work with auditors instead of against them.


Both perspectives are somewhat correct, I feel; the requirement to remove any usage of md5 is beneficial, but the fact that crc32 passed means the audit shows the motivation was misplaced.

The resulting situation might of course not be a net benefit though :/


> But "hard compliance rules" and "superficial scans" combine to create a lot of busywork which makes people less motivated to work with auditors instead of against them.

Absolutely :)

The fact is that if you have experienced engineers a security audit is rarely able to find anything. You would basically have to do code reviews, and this is hard / expensive, and even then rarely fruitful.

So, superficial scans, hardening, checking for obvious mistakes is really all you can do. Making hard rules is unproductive, but then again, migrating from md5 to crc32 hopefully isn't very expensive.

IMO, crc32 is a better choice for testing for changes, and has the benefit of removing any impression that the hash has security properties.
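A change-detection checksum along those lines is a few lines with the standard library (a sketch; the config contents are made up):

```python
import zlib

def config_checksum(data: bytes) -> int:
    """Non-cryptographic checksum for change detection. crc32's name alone
    signals it carries no security guarantees, unlike a hash named 'md5'."""
    return zlib.crc32(data)

original = b"timeout = 30\nretries = 5\n"
edited = b"timeout = 60\nretries = 5\n"

assert config_checksum(original) == config_checksum(original)  # stable
assert config_checksum(original) != config_checksum(edited)    # detects the edit
```

Note this only protects against accidental changes; anyone who can edit the file can trivially also recompute the checksum, which is equally true of an unkeyed MD5.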


Next up: Replace MD5 with BASE64+ROT13. Significantly worse functionality AND performance, but sounds more secure (to a layman) and doesn't trigger the "MD5" alert...


You joke, but an ex-security guy at my company literally told me “this file can’t be in plain text on disk. Base64 encode it”


Base64 encoding does protect somewhat against "looking over your shoulder" attacks

(Unless the person looking over your shoulder has a really good memory and can remember the Base64, or decode it in their head. Or they have a camera.)


Helps against attackers grepping the whole disk (or any folder named "conf" or similar) for "username", "user", "password", "pass", "key" and friends.

It's game over anyway if someone has a shell on your server but at least it complicates their life a bit.


But way more people would use md5 for password hashing than crc32. Of course someone could circumvent these tests, but the risk of someone copying an old tutorial where md5 is used for password hashing can be mitigated.


    // We use MD5 to check if config files are changed. This is not used anywhere else.
    typedef DigestMD5 ConfigFileHasher;


Until someone repurposes that thing to do something that is security-sensitive and forgets to remove the comment, misleading the next auditors.

I always assume that people from the future who are going to touch my code are really dumb people, so I try to have as few traps as possible for them.


i know for a fact the person who's gonna touch my code in the future is really dumb, because it's me


yeah I can see that, what if someone ends up being smart and re-using the verification procedure for a file that does have security impact, DRY right?


I've seen similar rigidity from security audits. Stuff like "version 10.5.2 (released last week) of this software introduced a security bug that was fixed in 11.0 (released today), we need you to update from 10.5.1 (released last week + 1 day) to 11.0 now because our audit tool says so".


Ah yes and also the vendor helpfully changed the API and did a complete rewrite in v11.0. Think about all the neat new things you will get to learn!


It seems like a thin line between a debugging feature and a backdoor; "merely a debugging server that could be enabled and allowed you to inspect internal state during runtime" seems like a backdoor to me, doubly so if it's network-accessible. If Intel has, say, an undocumented way to trigger a debug mode that lets you read memory and bypass restrictions (ex. read kernel memory from user mode, or read SGX memory), is that not a backdoor? Or is the name based on intent?


I think the difference is whether it's something that's always enabled. You could presumably make it available or not at compile time, so the software shipped to a customer wouldn't have it, but maybe if they were having issues, you could ship them a version with the debug server with their permission.


I can agree with that, with the caveat that "enabled" has to be something only the user can do. If it requires that the customer intentionally run a debug build, that's fine; if it can be toggled on without their knowledge, then it's a problem.


It was disabled by default, and could only be enabled using environment variables. Even when enabled, the whole thing ran in Docker and the socket was bound to loopback, so you could only connect to it from within the container.

When the intention is a debugging server, making it exposed to the world is a mistake and a security vulnerability. At that point it is effectively a backdoor, but the difference between a high level vulnerability such as this and a backdoor is developer intent.
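The arrangement described, off by default, opt-in via environment variable, loopback-only, can be sketched in a few lines. This is an illustration in the spirit of the comment, not the actual code being discussed (names and the env var are hypothetical):

```python
import os
import socketserver

class DebugHandler(socketserver.StreamRequestHandler):
    """Toy debug endpoint: dumps some internal state to whoever connects."""
    def handle(self):
        self.wfile.write(b"state: ok\n")

def maybe_start_debug_server():
    """Opt-in debug server: disabled unless an env var is explicitly set,
    and bound to loopback only, so inside a container it is unreachable
    from outside that container's network namespace."""
    if os.environ.get("ENABLE_DEBUG_SERVER") != "1":
        return None
    # 127.0.0.1 binding: only processes sharing the network namespace
    # (e.g. inside the same container) can connect. Port 0 = pick a free one.
    return socketserver.TCPServer(("127.0.0.1", 0), DebugHandler)

server = maybe_start_debug_server()
print("debug server:", "disabled" if server is None else server.server_address)
```

The security argument rests on both gates: an attacker would need to control the container's environment and be able to run code inside it, at which point, as noted below, they could do anything anyway.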


That doesn't sound very safe.


What sounds unsafe about having a locally bound port inside a container that only binds with an env variable getting set?


For example, someone finds out about that backdoor and activates it to spy on users. Forwarding a port in Docker is not magic…


Sure, it's simple. But you would have to be able to modify the container settings anyway. For all practical uses, and certainly in my case, you could just make it run a different image at that point. Or copy another executable into the container and run it. You're already privileged. Requiring you to be privileged to access the debug server means it's secure.


Until things around change and what was previously "a secure backdoor" becomes a "less secure backdoor". ;-)

One can read every second week about cases where some backdoor that was meant to be used "only for debugging" landed in the end product and became a security problem.

Actually I usually suspect malice when something like that is found once again, as "who the hell could be so stupid to deliver a product with a glaring backdoor". But maybe there is something to Hanlon's razor… :-D


If you're already running another process in the container, you could do whatever you want anyway.


Was it a backdoor, or a hidden door, or .. a utility panel?

The difference, in my opinion, is in the documentation and frequency of use. Is it overt? Does the customer really know its there, and what its for?

Perfectly fine to have an access panel that gives you access to the bus .. if the pilot knows you're doing it.

But if it's some random entrance in the back of an alley, and only 2 or 3 users in the universe know what it is and how to use it ..


Imagine the reaction if the same thing was coming from China. Nobody would ask the question.


Nationality has nothing to do with it. I also don't trust Americans with such devices.


I'm talking about the general sentiment. You can see this on every* site, HN included. The litmus paper is that even pointing out something objectively true will get criticism (downvotes) rather than critical thinking. In the current atmosphere nobody asks the question when it comes to China/Russia/NK/Iran but will when it comes to the US despite the known history of hacking/spying on everyone else.

*Recently a reputable tech site wrote an article introducing DJI (ostensibly a company needing no introduction) as "Chinese-made drone app in Google Play spooks security researchers". One day later the same author wrote an article "Hackers actively exploit high-severity networking vulnerabilities" when referring to Cisco and F5. The difference in approach is quite staggering especially considering that Cisco is known to have been involved, even unwittingly, in the NSA exploits leaked in the past.

This highlights the sentiment mentioned above: people ask the question only when they feel comfortable that the answer reinforces their opinion.


A manufacturer wanted to upgrade one of their equipment lines to be more modern. The developers of the original product, both hardware and software, were no longer with the company.

Since they just wanted to add some new features on top and present a better rack-based interface to the user, they decided to build a bigger box, put one of the old devices inside the box, then put a modern PC in there, and just link the two devices together with ethernet through an internal hub also connected to the backpanel port and call it a day.

The problem is, if you do an update, you need both the "front end" and the "back end" to coordinate their reboot. The vendor decided to fix this by adding a simple URL to the "backend" named: /backdoor/<product>Reboot?UUID=<fixed uuid>

Their sales team was not happy when I showed them an automated tool in a few lines of ruby that scans the network for backend devices and then just constantly reboots them.

They still sell this product today. We did not buy one.


Reminds me a little of a place I worked.

They sold very expensive devices that were actually an off-the-shelf 1U PC with custom software (which provided the real value). The problem — and this dates it — was that the PCs had a game port¹, which gave away that this custom hardware was really just a regular consumer PC. So they had some fancy plastic panels made to clip on the front and hide the game port.

¹ https://en.wikipedia.org/wiki/Game_port


I remember early in my career I came across a Unisys “mainframe”, which was literally a Dell box with a custom bezel, clustered with a few other nodes with a Netgear switch.


Many non-IBM mainframe vendors switched to software emulation on more mainstream platforms-nowadays mainly Linux or Windows on x86, but in the past SPARC and Itanium were also common choices. What you saw may have been an instance of that. A software emulator can often run legacy mainframe applications much faster than the hardware they were originally written for did.

(With Unisys specifically, at one point they still made physical CPUs for high end models, but low end models were software emulation on x86; I’m not sure what they are doing right now.)


I don't know the details (~20 years ago), but pretty sure you hit the nail on the head. I think one of the boxes I saw were a hybrid -- Xeons with some sort of custom memory controller.

It was my first exposure to this sort of thing, and I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)


> I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)

Given the shrinking market share of mainframes, the only way for vendors to continue to make money is to increase prices on those customers who remain – which, of course, gives them greater encouragement to migrate away, but for some customers the migration costs are going to be so high that it is still cheaper to pay megabucks to the mainframe vendor than do that migration. With emulated systems like the ones you saw, the high costs are not really for the hardware, they are for the mainframe emulation software, mainframe operating system, etc, but it is all sold together as a package.

At least IBM mainframes have a big enough history of popularity, that there are a lot of tools out there (and entire consulting businesses) to assist with porting IBM mainframe applications to more mainstream platforms. For the remaining non-IBM mainframe platforms (Unisys, Bull, Fujitsu, etc), a lot less tools and skilled warm bodies are available, which I imagine could make these platforms more expensive to migrate away from than IBM's.


This can't be real... are you serious? It sounds like one of those silly business parables!


Even if this poster made it up, I'm certain it is also true at least once over, having remediated a near-identical problem from one of my employers' products at one point, and talked developers out of implementing it at least once at a different employer.


The older I get, the less I care if individual stories like this are true. The fact that they could be is concerning enough :) And they are educational nonetheless.


A good perspective! People find fiction novels to be enriching and filled with learning despite the fact they are just entertaining lies.


It sounds like Dell’s iDRAC somehow. (Not that it is, but iDRAC had me scared more often than not)


The time iDrac annoyed me the most was when I bricked a server trying to update it.

I made the terrible mistake of jumping too far between versions and the update broke iDrac and thus the server. There was no warning on Dell's website nor any when I applied the update. I only found out what happened after some googling where I found the upgrade path I should have taken.

This is just terrible quality control and software engineering.


At my previous employer our code was littered with references to a backdoor. It was a channel for tools running in guest operating systems to talk to the host hypervisor through a magic I/O port.

It's even openly called "backdoor" in open source code directly related to it: https://github.com/vmware/open-vm-tools/blob/master/open-vm-...


More reasonable VMMs use the word "hypercall" for these paravirtualized interfaces


We use the term “manhole” for those sorts of things


readers may tangentially enjoy yosefk's "cardinal programming jokes" -- the first one featuring a manhole: https://yosefk.com/blog/the-cardinal-programming-jokes.html

> I must warn you about those jokes. Firstly, they are translated from Russian and Hebrew by yours truly, which may cause them to lose some of their charm. Secondly, I'm not sure they came with that much charm to begin with, because my taste in jokes (or otherwise) can be politely characterized as "lowbrow". In particular, all 3 jokes are based on the sewer/plumber metaphor. I didn't consciously collect them based on this criterion, it just turns out that I can't think of a better metaphor for programming.


[flagged]


As in a manhole cover in a street for maintenance.


[flagged]


> Seems like an outdated term. Downvotes accepted.

Manhole is, indeed, an outdated term. Generally the preferred term is "Maintenance Hole". Still abbreviated MH, and people in the field use all three interchangeably (much like metric/imperial).

Source: I work with storm/sanitary/electrical maintenance holes.


This reads like the people who want to use "womxn" because the normal version is a superstring of "men".


Fine personhole it is.


"Maintenance Hole" actually, which is better because it's both more descriptive and not gendered.


Until someone starts using the hole for a purpose that's not maintenance and we start arguing again :).


Or for something that is gendered.


But what if manhole is just mankind hole? (It probably isn't, I didn't look it up). Man doesn't always mean male, or does it?


> Man doesn't always mean male, or does it?

Not necessarily, but see: https://en.wikipedia.org/wiki/Gender_neutrality_in_English#D...

The link is about the debate as it is, but I would also encourage the use of good faith in interpreting any speaker: that is, assuming a person referring to "mankind" likely means all humans without exclusion based on gender or sex, and requiring some other material evidence before presuming bias.

I also wonder what these discussions are like in languages where most nouns are gendered, e.g., in French.


No clue about French, but in German they started to use both versions at the same time, glued together in made-up "special" forms. It's like using "he/she" for every noun. This makes texts completely unreadable, and you even need browser extensions[1] to not go crazy with all that gendered BS language!

OK, I exaggerate, there are still people who don't try to be "politically correct" and still use proper language, and who know that there is such a thing as the "Generisches Maskulinum (English: generic masculine)"[2]. But in more "official" writings or in the media the brain-dead double-forms are used, up to the point where you can't read such texts any more: those double-forms (which are not correct German) cause constant knots in the head when you try to read a text that was fucked up this way.

(Sorry for the strong words, but one just can't put it differently. As the existence of those browser extensions clearly shows, I'm not alone in going mad about that rape of language. Also, often whole comment sections don't discuss the topic at hand; instead most people complain about the usage of broken "gendered" pseudo-politically-correct BS language. That noun-gendering is like a disease!)

[1] https://addons.mozilla.org/en-US/firefox/addon/binnen-i-be-g... [2] https://de.wikipedia.org/wiki/Generisches_Maskulinum


Thanks for sharing. As a German learner this is quite fascinating to know.


Believe it or not, we introduced a variant of bash brace expansion (except with implicit braces and dots instead of commas) in our grammar, named it “écriture inclusive”, and called it a day.

The way it kicks to the curb words that previously carried neutrality but happened to share a spelling with the gendered form, and entrenches a two-gender paradigm, boggles the mind with how it flies in the face of any form of inclusivity.

That and I still don’t know how to read “le.a fermi.er.ère” aloud. It’s just as ridiculous as “cédérom” because Astérix puts on a show of standing against the invader.


Viewpoints can be encoded in language https://en.wikipedia.org/wiki/Male_as_norm


> In practice, grammatical gender exhibits a systematic structural bias that has made masculine forms the default for generic, non-gender-specific contexts.

Many instances of this are simply an artifact of 'man' previously being an ungendered term. But that fact is much harder to build group cohesion around than grievance.


Perchildhole.


Pick your battles. This isn’t a hill worth dying on (or even a hill worth getting slightly bruised on).


I have learned that flat out telling people that a hill isn't worth dying on tends to cause a bunch of corpses to collect up - if you don't want a molehill covered in bodies you need to persuade them to go die somewhere else.


Yeah, agree. And I think we could agree that replying "Ew" and losing a little bit of HN karma does not constitute more than bruising.

EDIT: didn't see the "or even" there. Disagree. I think the analogy can be drawn out a bit, so I'll say that a bruise can heal pretty quick, and one would adapt better to climbing "hills" if they exercised regularly. Plus maybe smaller hills should be climbed too.


I'm really not at all interested in people explaining to me how finding mentions of back doors in technology used in millions of computers is probably OK because it may mean something else.

Given that the US security apparatus clearly values and desires these back doors, and has the necessary power to coerce companies into making them, generalizing the use of "back door" as a term for debugging or whatever seems almost expected.

Even if they are for debugging, "oops, it's on in production!" is a great cover, because none of these companies will EVER admit back doors were required by the government.


Frankly I don't think Intel's track record affords them the privilege of having good faith be assumed with something like this.


Intel employs 100,000 people, and most of them aren't even aware of most of Intel's transgressions, let alone approve of them.


It’s akin to defending Nazis because there were some Nazis who were forced to be Nazis because they couldn’t find a better job.


Intel doesn't have a track record of shipping back doors, or even "bad faith" software really.


Isn't their whole management engine essentially one big (poorly secured) backdoor?


It's essentially secured through obscurity, which I'm sure, with this leak, will lead to several CVEs over time...


I worked at a place where IT had an admin user on every machine named "Backdoor". I opened a ticket when I noticed it, which was promptly closed explaining that it was normal.

The same place had a boot script on every computer that wrote to a network-mounted file. Everyone had read permissions to it (and probably write, but I didn't test) and the file contained user names, machine names, and date-times of every login after boot for everyone on the domain going back 5 years. I opened a ticket for that, which was never addressed.


You could've probably reported this. Logging login times of everyone for all employees to see likely violates employees' privacy.


Indeed, it literally was the author's suggestion to search for the word 'backdoor':

>This code, to us, appears to involve the handling of memory error detection and correction rather than a "backdoor" in the security sense. The IOH SR 17 probably refers to scratchpad register 17 in the I/O hub, part of Intel's chipsets, that is used by firmware code.

https://news.ycombinator.com/item?id=24084977


That's nice of you, but Intel's hardware has actual known backdoors.


Yeah, I think that this is likely the case here from the screenshot.


Judging from the current, in all likelihood it is the opcode that APEI (a part of ACPI) tables write to port 0xB2 in order to invoke firmware services that run in system management mode.


Comment, not current. :)


>merely a debugging server that could be enabled and allowed you to inspect internal state during runtime

When we talk about a CPU, it's bad enough. Imagine your program has input and output streams that most of the app's data goes through, and I can attach a debugger and listen in on the data.

I would not be very happy about it and would still consider it a backdoor.


But do we know if that part ever ended up in any CPU sold by Intel, instead of e.g. engineering samples?


Someone have a mirror? Seems the actual files are here:

https://t.me/exconfidential/590

Edit: files are here

https://mega.nz/folder/CV91XLBZ#CPSDW-8EWetV7hGhgGd8GQ

or

magnet:?xt=urn:btih:38f947ceadf06e6d3ffc2b37b807d7ef80b57f21


The countries of origin of the peers downloading that torrent are pretty cool to see. A fairly broad cross-section of the world.


Not reliable. Most people torrenting this are hopefully using a VPN.


so.. I shouldn't have clicked that link on my office network?


All good, just make sure to restart your computer at the next available opportunity


How would that help?


Often a boot cycle gives the rootkit a chance to hook the boot code and bootstrap into a hypervisor.


Or a SeedBox.


Why?


Because until this thing gets diffused and dissected by everyone and their mothers, the law is likely to view it as publication of confidential trade secrets, and people who can be confirmed to be spreading such things can get federal time; see [1] for example. Using a VPN is the barest of mechanisms to try to obscure your identity and avoid this sort of punishment.

[1] https://www.wsj.com/articles/SB10001424052970204409004577158...


I think there's a big difference between selling chemical secrets to a hostile government and this torrent. Namely, that no one is selling this information, it's available to anyone who can grab a magnet file.


Here is the real thing: are you confident enough in your statement to argue that way when confronted by your government (or whatever the concerned body is here)? If yes, then feel free to do whatever you want with your free time and bandwidth; otherwise you're better off staying as far as possible from this data.


[flagged]


I take it that no one in your circle of acquaintances has illegally downloaded movies via torrents and gotten caught. I don't know how it is in other countries, but here in Germany friendly people ring your doorbell and take everything that is connected to electricity :). And if there is any data in there that is very damaging to Intel, then I think they will take the trouble to look for these people (at least in certain countries).


I'm sorry you live in a hellhole country and your friends don't understand how BitTorrent works. Maybe one day you can emigrate to a second-world country and grow some cojones, but until then you should continue living in fear and scaring your peers away from downloading leaks early, when there aren't fed trackers.


Right, but if you just download without seeding, no crime is being committed, yes?

So seems like the barest you can do is "disable seeding", not "use a VPN".


IANAL, and you should probably contact yours about such things, but a straightforward reading suggests that because you knew you were downloading something likely illegally obtained, you are in fact on the hook for downloading it.

    “Misappropriation” means: 

      (i) acquisition of a trade secret of another by a person who knows or has
        reason to know that the trade secret was acquired by improper means; or
      (ii) disclosure or use of a trade secret of another without express or
        implied consent by a person who 
        (A) used improper means to acquire knowledge of the trade secret; or 
        (B) at the time of disclosure or use knew or had reason to know that
          his knowledge of the trade secret was 
          (I) derived from or through a person who has utilized improper means 
            to acquire it; 
          (II) acquired under circumstances giving rise to a duty to maintain
            its secrecy or limit its use; or 
          (III) derived from or through a person who owed a duty to the person
            seeking relief to maintain its secrecy or limit its use; or 
        (C) before a material change of his position, knew or had reason to
          know that it was a trade secret and that knowledge of it had been
          acquired by accident or mistake.


Ahem. Not seeding is a crime.


Depends heavily on the jurisdiction, I am afraid. This exact case was used as a precedent where I'm from (Czech Republic) that no, merely downloading over BitTorrent still constitutes "sharing copyrighted material".


Presumably that was because BitTorrent sends data even before receiving 100% of it? But I assume that downloading these files would not be allowed in this case anyway as per Zákon č. 121/2000 Sb. §29 (2) since this is not a published work.


The only place I know where that would be the case is Switzerland, there downloading copyrighted material isn't illegal (and companies aren't allowed to track IPs of people downloading files via torrent), but sharing is. But in the context of a data leak of confidential trade secrets, that's likely to be a completely different situation.


Torrenting is not the most private way of downloading things, because when you share your already-downloaded data you are posting your IP to a tracker as a leecher or seeder. You can actually see live which torrents (at least the most popular ones) an IP is downloading[1]; the site only tracks the most popular hashes, but it is easy for some entity to track this Intel hash specifically.

[1] https://iknowwhatyoudownload.com/en/peer/


The t.me page is a Telegram comment containing a mega.co.nz download link

I think the tg:// link is just the site trying to open up in the Telegram app.


I'd assume spreading this is not legal?


I’d assume it isn’t and just not talk about it loudly.


Spreading this is copyright infringement. Intel has to sue you for copyright infringement in court.


I think we will see a new "Edward Snowden" soon! ... Cool


"Invalid magnet URI" from rtorrent


Works fine in Transmission


utorrent works.


You can't download from mega.nz unless you have their "downloader" app or an account, or if you have Firefox or Safari. It's useless.

The torrent works.


A web app that only works in non-Chromium browsers isn't useless.


I've downloaded stuff from MEGA in the last week on a Chromium based browser, so I'm not sure what the problem is supposed to be here.


I think johnklos meant it only works in Chrome and not in Firefox or Safari. You can download the whole archive as zip with Chrome - no problem. Firefox, on the other hand, doesn't allow you to store that much data locally in the browser, so it doesn't work out of the box. You can download the two top level directories separately though and this works even in Firefox.


Golly, it doesn't work in Chrome? What's the technical limitation here? Or did they just choose not to support it?

Asking because as someone who uses FF as their daily driver and is surprised something is supported in it that isn't in Chrome...


> firefox works

> it's useless

does not compute


Or if you have OpenSSL and curl...

(At least the last time I had to download from MEGA, I RE'd what it does and it was somewhat clever - AES128 in counter mode, key is in the hash part of the URL.)


I thought Mega didn't work in Safari, because it wouldn't have enough cache or whatever in-memory thing it does?


JDownloader 2 works fine and goes around their limitations to boot.


It's the other way around re ff/chrome

