Can't say I'm surprised, people are lazy.
Another large tech company I used to work for commonly used an only slightly more complex password. But it was never changed, so people who had left the team could still have access to things if they knew the password. It was more of an entry point into the system than anything the company's Red team could manage.
Antivirus is some crazy shit that may trigger on any random action, and it teaches people to follow the most unsafe procedures without questioning them, just so they can get anything done.
I _wish_ it was better security they were making the trade for. It often isn't though. These programs are large, expensive, and don't do much most of the time. I feel there's a perverse incentive for developers to make their AV products as noisy as possible to justify their own existence.
And yet.. even with full AV rollouts locked down at the highest level, bad actors still get into networks and exploit them. So, to me it feels like our users are trading away their convenience for our misguided CYA policies.
In most large corporations you are basically not allowed to send anything that could even potentially hide a virus except for maybe Office files (nobody yet built a compelling alternative to Powerpoint and Excel).
Typical rules already block all executable binaries, scripts and password protected archives (because they could hold binaries or scripts), etc. As a Java developer I have recently discovered my company started blocking *.java files.
This has been brought up a million times in the context of DRM, but it is true in the general case as well.
"We approached it as 'Hey, we all love music.' Talk to the senior guys in the record companies and they all love music, too. … We love music, and there's a problem. And it's not just their problem. Stealing things is everybody's problem. We own a lot of intellectual property, and we don't like when people steal it. So people are stealing stuff and we're optimists. We believe that 80 percent of the people stealing stuff don't want to be; there’s just no legal alternative. So we said, Let's create a legal alternative to this. Everybody wins. Music companies win. The artists win. Apple wins. And the user wins because he gets a better service and doesn't have to be a thief."
Another point of reference: because they had no legal ground to stand on, HBO targeted Canadian torrenters of Game of Thrones with an e-mail saying, among other things, "It's never been easier to [watch Game of Thrones legally]!"
This was true, it had never been easier. It had also never been harder. For the entire time that Game of Thrones was being aired, the only legal way for Canadians to watch it was to pay about a hundred dollars per month for cable and the cable packages that would give them HBO. You could buy it on iTunes, but only as a season, after the season was over.
So yeah, I kept torrenting it, everyone I know kept torrenting it, and everyone hated (or laughed at, or both) HBO the whole time.
Here in the UK, Sky offer a cheap 'over-the-top' streaming alternative to their satellite offerings, so you could watch Game of Thrones for £8/month, provided you didn't mind the inferior video quality.
I did actually add that to my subscription, and during lockdown have used it to re-watch Game of Thrones :)
Between that and whatever magic my OLED tv was doing, it looked pretty good to me.
Just a shame they haven't released it all in 4K/UHD yet...
I was really hoping to get an HDR version of "The Long Night", to address some of the banding and other visibility problems present in the episode, and maybe see a bit more of what went on. But there isn't one yet. So I watched it with the lights out so that my eyes adjusted :)
But yeah, you're probably right, NowTv has massive potential to undercut their main offering.
(Though in this case it wasn't just competition – access to official servers in online games was something that was often not pirateable.)
I wish more of the security industry would get their frigging heads around this. PGP did less for messaging security over decades of availability than iMessage and Signal did in a few weeks of availability.
Sometimes I think that modern Windows is a nice platform already, even comfortable. (Like, you know, C++17 is very unlike C++98.) But then I'm reminded of the necessity to run an antivirus in front of it in a corporate environment.
Yes such a thing exists... https://www.mcafee.com/enterprise/en-us/products/virusscan-e...
I remember someone suggesting to put McAfee into a fully isolated container that only exposes the port where it reports compliance, allowing it to scan itself to death all day long.
"You must exclude our program sub directory because temporary files are created containing interpreted code and your antivirus will ether block it outright, or lock the file so long you get application time outs"
Antivirus software is malware.
Two days ago, I needed the script again but couldn't find it. Went to our e-mail thread and it said "the following potentially malicious attachments were blocked", showing mine, but... even from my outgoing mailbox? That seems ridiculous and problematic, considering that it sent fine at the time.
I know that e-mail shouldn't be used as a replacement for Sharepoint or Dropbox or whatever, and I should have a local copy of what I need, but it just seems annoying and arbitrary.
Anyway, I just logged into Outlook Web and downloaded it from the message there. Problem solved.
(I am not a lawyer.)
Having an option to allow them might be okay though. (I barely use gmail so I don't know if it has one or not.)
Before Dropbox and similar tools it was far more the norm, and file-sharing systems like SharePoint may wind up not actually being used. Technical people outside of computing do this in companies all the time, practically using e-mail as an ersatz version control system, much to IT's chagrin.
But most of our software lives on an RDP server anyways.
Seriously, I don't know how long it'll last, but a zip file inside a FAT32 disk image inside a VMDK got through just fine.
The bonus is that 7zip can extract from vmdk.
I've been bitten way too many times by dumb filters that pick some file out of the zip and declare that it is malicious. I also don't trust messenger apps to not pull my files out and do who knows what with them. A basic password prevents this junk 99% of the time for almost no effort.
It won't stop a determined system from cracking the password. But that isn't what I'm trying to defend against.
It was not just for binaries but for scripts, html, etc.
I doubt the encryption was believed to be a security barrier.
I left the company 5 years ago. Just checked the login to see if it still worked.
Any disgruntled employee could change the password, lock them out of all of their sites (including several e-commerce sites that account for a large chunk of revenue) and then, if they really wanted to, delete all of them.
I remember talking to the main network guy about backups when a lot of the ransomware stuff was making the rounds. The big, really big stuff on their network (mostly ERP stuff) was backed up in two or three places. Their web stuff? Yeah. . . NOPE.
Pretty scary how lazy people are about stuff like that.
Also the contents of files like password[s].txt
So the password for August, 2020 would be “August, 2020”.
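A scheme like that is trivially enumerable; here's a quick sketch of the attacker's side of it (the function name is mine, and the format string assumes exactly that "Month, Year" convention):

```python
import calendar

def month_passwords(start_year, end_year):
    """Enumerate every password of the form 'August, 2020' in a year range."""
    return [
        f"{calendar.month_name[m]}, {y}"
        for y in range(start_year, end_year + 1)
        for m in range(1, 13)
    ]

candidates = month_passwords(2015, 2025)
print(len(candidates))  # 132 guesses cover a whole decade of "rotations"
```

A wordlist of 132 entries covers eleven years of dutiful monthly password changes, which is the whole problem with forced rotation in a nutshell.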
It's why I'm advocating within my organisation to get rid of password expiration and enforce 2FA for clients, but there's a lot of inertia to push against with some of them. At least uptake of 2FA is consistently increasing.
Scheduled password expiration weakens security by encouraging users to make predictable passwords, and by entrenching password resets as a routine and unscrutinized process.
But it doesn't stop you from spelling out the numbers instead; plus, that makes your password longer.
I see you've worked in retail.
> Like ******* levels of shock.
These days a post it is probably the best way to secure your password.
99.9999999% of password hacks come over the wire now, from people in other cities, states, or nations. If someone is in your building, in front of the computer, even without the post-it, you're probably toast.
I know a brand-name healthcare company that uses Passw0rd for its internal WiFi, which is easily reachable from an interstate rest area.
I know most people at those kinds of organizations just don't have the grit to fight every one of those battles all over again, and choose to do the things they can affect with reasonable effort instead.
I'm not saying that grit would be a bad thing to have. I appreciate the people who do it. But you really can't know what kinds of situations the parent commenter was in, and sometimes you can't really expect everyone to want to fight it.
It's not a hard argument to win. MD5 here is fine; it's not a security check.
The problem here is that people assume they know every possible reason why the auditor might ask for something, when they don't. If the auditor is asking for it, and it costs almost nothing to do, maybe just do it instead of wasting everyone's time by acting like you know everything about the subject, and everyone will probably go home happier at the end of the day.
An auditor's job doesn't end at saying what things should be changed, it should include why as well (granted, we don't know the full content of the auditor's report here, maybe they did say why).
If using md5 had any real benefit I'd say leave it, but what are you gaining?
So use SHA-1 or SHA-2 or SHA-3 or if you really hate NIST standards for some reason then CubeHash or Skein or Blake2 or ...
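In languages with a pluggable hash API, the swap really is that cheap. For example, with Python's hashlib it's one argument; this is a sketch (the helper name is mine), not anyone's actual audited code:

```python
import hashlib

def file_digest(path, algo="sha256"):
    """Stream a file through any hashlib algorithm.

    Moving off MD5 means changing the default here, nothing more.
    """
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```

If the digest is only compared against itself (change detection), nothing downstream even notices the algorithm changed, apart from the longer hex string.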
Then change it again? If you use the most recent available NIST standard it should hopefully be a very long time before meaningful (let alone practical) attacks materialize (if ever). If you end up needing to worry about that in a security audit, consider it a badge of success that your software is still in active use after so many years.
Using an insecure hashing algorithm without a clear and direct need is a bad idea. It introduces the potential for future security problems if the function or resultant hash value is ever used in some unforeseen way by someone who doesn't know better or doesn't think to check. Unless the efficiency gains are truly warranted (ex a hash map implementation, high throughput integrity checking, etc) it's just not worth it.
> a security-related technology was used for a non-security purpose
I would suggest treating all integrity checks as security-related by default since they have a tendency to end up being used that way. (Plus crypto libraries are readily available, free, well tested, generally prioritize stability, and are often highly optimized for the intended domain. Why would you want to avoid such code?)
So yeah, I agree, add SHA-1 to the list of algorithms to reflexively avoid for any and all purposes unless you have a _really_ good reason to use it.
(1) Code is in part a communication medium. This says "We use MD5"
(2) Code changes. If someone sees something crypto-hashed, they may use it differently in 5 years.
And to be very fair, a lot of security issues would be caught with basic checkbox ticking. Are you using a salted password hashing function instead of storing passwords in plaintext? Are you using a firewall? Do you follow the principles of least privilege?
Why is this so difficult to grasp.
Sometimes people are just right.
I don't need some coworker getting into some drawn out battle about how MD5 is fine to use when we can just use SHA (or CRC32C as that person did, which is more obviously non-useful for security contexts) and be done in 30 minutes. The auditor is there to do their job, and if what they request is not extremely invasive or problematic for the project, implementing those suggestions is your job, and arguing over pointless things in your job is not a sign of something I want in a coworker or someone I manage.
This is exactly what the auditor is doing.
How can you not see the irony here?
> I don't need some coworker getting into some drawn out battle
This isn't a drawn out battle. This is a really fast one: MD5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?
What's fucking hard about that?
Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?
The auditor was asked to do it and is being paid to do it. Presumably, the people arguing are paid to implement the will of those that pay them. At some point people need to stop arguing and do what they're paid to do or quit. Doing this over wanting to use MD5 seems a pretty poor choice of a hill to die on.
> This is a really fast one: MD5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?
There are items like this all throughout life. Sure, maybe you can be trusted to drive above the speed limit on this road, and maybe the limit is set a little low. But we have laws for a reason, and at some point, letting the officials know that the limit is too low and they really don't need to make it that low goes from helpful to annoying everyone around you.
> What's fucking hard about that?
Indeed, what is so hard about just accepting that while you're technically correct that MD5 isn't a problem, you're making yourself a problem when you fight stupid battles nobody but you cares about, but everyone has to deal with?
> Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?
Hardly. Pompous blowhards exist in every culture. Also, that's hilarious: you're talking about a culture that rebels against authority just because they think that's what they're supposed to do, even if it's for stupid reasons and makes no sense. See the tens of millions of us who refuse to wear masks because it "infringes on our freedom".
I'm paid to tell idiots where to go. My boss doesn't pay me 6 figures to toe the line and fill in boxes. She pays me to use my judgement to move the company forward. I'm not wasting my time and her money on this sort of garbage, and if they can't see the difference between casual use and secure use, then we need to rethink our relationship with this company or they need to send us someone new.
> you're talking about a culture that rebels against authority
You just used the line "do what you're told or quit".
The cognitive dissonance here is unreal.
I've very specifically couched all my recommendations for this for when it's trivial to do. Arguing about this with someone instead of doing it, when doing it may have some benefits but really only costs a few minutes instead of just doing so is definitely wasting her time and money.
> You just used the line "do what you're told or quit".
I noted what I wished people would do in very specific cases where they're wasting way too much time and effort to win a stupid argument rather than make a small change of dubious, but possibly not zero, positive security impact.
I don't see anything weird about acknowledging some of the extreme traits of the culture I live in while also wishing they would change, at least in specific cases where I think they do more harm than good.
Honestly, I'm confused why you would even make some cognitive leap that since I live in an area with a specific culture I must act in the manner I described that culture, especially when I did it in a denigrating way. I guess you think all Americans must be the same? That doesn't seem a useful way to interact with people.
That's more assumptions than it is sometimes reasonable to make.
"You don’t actually need to listen to auditors" is decidedly not true for a lot of people in a lot of situations, and arguing even for technically valid or reasonable things is an endurance sport in some organizations.
I mean, I even kind of want to agree with heavenlyblue's argument that you should fight that fight for the exact reason they're saying, and can see myself arguing the same thing years ago, but at least in the case of some organizations, blaming people for taking skissane's stance would be disproportionate.
If you're working with irrational people you're going to have to do irrational things, but that's kind of a given isn't it? We don't really need to discuss that.
ETA: per 'kbenson it's not hard to conceive of a situation where proscribing MD5 is reasonable. Taking 'skissane's account at face value is probably reasonable, but my implicit assumption that the auditor would not explain if pressed isn't being charitable.
Especially with the audit/pen test theatre, where they have to put something in the report; otherwise why are they getting paid £20K for two days' work?
So most people choose the path of least resistance when it doesn't matter much, so that they can fight where it does.
I for one like to pick the easy wins, like this.
For 10 years now, I have refused to acknowledge the finding of the consulting company which flags the password scheme I use (passphrases), because the norm they use (a national one) talks about caps, symbols, etc.
I refuse to sign off, noting that our company is a scientific one and that, unlike the auditors, we understand math taught to 16-year-old children.
This goes to the board, who get back to me; I still refuse on ethical grounds, and we finally pass.
It is sad that some auditors are stupid while others are fantastic, and that it all depends on which one you get assigned.
A good read: https://serverfault.com/q/293217/78319
Similarly, sometimes in order to sell products to government agencies you need to get security audits done. In that scenario, you have to listen to the security auditor and keep them onside, because if you don't keep them happy your ability to sell the product to the government is impeded.
Meanwhile I have been finding and fixing real security issues regularly. To be fair it would be extremely difficult for an external person to find issues in the limited time they have so the audit comes down to someone running through a list of premade checks to see if they find anything.
The other issue is that if you make it seem too easy to answer their questions or provide reports, they will only ask more questions or demand more reports. So even if it's just dumping a list of users into a CSV file for them to review, make it seem like way more effort than it actually is; otherwise you might find you've been forced into a massive amount of busy work while they continue to boil the ocean.
Honestly, I'm quite happy to have an auditor nitpick a few non-issues if the alternative is risking releasing an app that has a basic SQL injection vulnerability that wiggled past code review due to code complexity.
I've also had an external audit that found an unreported security issue in a new part of a widely used framework, so there are auditors out there that do a good job of finding legitimate things.
I told my manager that they are idiots and I won't listen to them. He was like "OK, as I expected" and never did anything about it; the next auditors didn't mention it.
There are plenty of addresses where the official version in databases is slightly off from what people actually write on their mail. If I got a credit card transaction with the "official" version, that would be a significant fraud signal, that they were sourcing bogus data from somewhere.
Same with the md5 complaint. That use of md5 wasn't a problem but there's a perfectly fine alternative and if you can ensure by automated tests that md5 is used nowhere, you also can guarantee that it's never used in a security relevant context.
You can automatically check for the string "md5" in identifiers, but you can't reliably automatically check for implementations of the MD5 algorithm. All it takes is for someone to copy-paste an implementation of MD5 and rename it to "MyChecksumAlgorithm" and suddenly very few (if any) security scanning tools are going to be smart enough to find it.
(Foolproof detection of what algorithms a program contains is equivalent to the halting problem and hence undecidable, although as with every other undecidable problem, there can exist fallible algorithms capable of solving some instances but not others.)
For the more common scenario of internal sites/services which are not accessible from the public Internet, but not fully isolated from it either:
You don't need the internal site exposed to the Internet. If you use DNS-01 ACME challenge, you just need to be able to inject TXT records into your DNS. Some DNS providers have a REST API which can make this easier.
Another option – to use HTTP-01 ACME challenge, you do need the internal host name to be publicly accessible over HTTP, but that doesn't mean the real internal service has to be. You could simply have your load balancer/DNS set up so external traffic to STAR.internal.example.com:80 gets sent to certservice.example.com which serves up the HTTP-01 challenge for that name. Whereas, internal users going to STAR.internal.mycompany.com talk to the real internal service. (There are various ways to implement this – split horizon DNS, some places have separate external and internal load balancers that can be configured differently, etc)
Yet another option is to use ACME with wildcard certs (which needs DNS-01 challenge). Get a cert via ACME for STAR.internal.medallia.com and then all internal services use that. That is potentially less secure, in that lots of internal services may all end up using the same private key. One approach is that the public wildcard cert is on a load balancer, and then that load balancer talks to internal services – end-to-end TLS can be provided by an internal CA, and you have to put the internal CA cert in the trust store of your various components, but at least you don't have the added hassle of having to put it in your internal user's browser/OS trust stores.
(In above, for STAR read an asterisk – HN wants to interpret asterisks as formatting and I don't know how to escape them.)
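For what it's worth, with a DNS provider that certbot has a plugin for, the DNS-01 flow for an internal-only name can be as small as this sketch (the hostnames, the Cloudflare plugin, and the credentials path are all illustrative assumptions, not a recommendation of any particular provider):

```shell
# Obtain a cert for an internal-only name via DNS-01; the host itself
# never has to be reachable from the Internet, only the DNS zone does.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  --preferred-challenges dns-01 \
  -d internal-app.internal.example.com
```

The plugin injects and removes the `_acme-challenge` TXT record for you; with a provider lacking a plugin you'd script the TXT record against their REST API via a manual-auth hook.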
So I guess an internet-connected system grabs the certificates, then they get burned to DVD-R, then... a robot moves the DVD-R to the internal network? It's not easy. It's all much worse if the networks aren't physically adjacent. One could be behind a bunch of armed guards and interlocking doors.
You're often not arguing with the auditor; you're arguing with the person who paid for the security audit in the first place, who is likely not even technical. That's a battle you will likely never win.
I once got cited for having too many off-site backups. They were all physically secure (fire proof safes or bank lock box), but the site visitor thought onsite was fine for a research program. The site visitor's home site lost all its data in a flood.
At my company, that's a one-way ticket to the unemployment line.
If you need to be operating in FIPS 140 mode, that may be a problem of some consequence.
In isolation it looks like wasted work but in terms of organizational behavior it is actually the easiest way.
Publish the result for market comparison's sake.
Then again, that requires plenty of money and I can't see how to monetize that in any way.
Still, when it comes to security:
- MD5 is actually too fast for hashing passwords, but there is still no better way than brute force if you want to crack MD5-hashed, salted passwords.
- Even if there is no effective preimage attack now, it is still not a good idea to use an algorithm with known weaknesses, especially if something better is available.
What MD5 is useless for is digital signatures. Anyone can produce two different documents with the same MD5.
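That "too fast" point is exactly why password storage moved to deliberately slow, salted KDFs. A minimal stdlib sketch (function names, the salt size, and the iteration count are my choices, not a vetted policy):

```python
import hashlib, os

def hash_password(password, salt=None, iterations=600_000):
    """Slow, salted derivation via PBKDF2 -- the opposite of a bare, fast MD5."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=600_000):
    """Recompute with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations) == expected
```

The attacker has to pay the same hundreds of thousands of iterations per guess, which is the whole point; with plain MD5 they get billions of guesses per second on commodity GPUs.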
Defense in depth, if you can grep the source code and not find any references to md5, then you have quickly verified that the code probably doesn't use md5.
This you can easily verify again later, you can even make a test for it :)
Even if in practice this had no impact, removing md5 usage will make it harder to accidentally introduce it in the future.
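That grep-and-test idea is only a few lines in practice. A sketch (the helper name and the set of file extensions are mine; a real check would match your repo layout):

```python
import os, re

def find_md5_references(root, exts=(".py", ".c", ".h", ".java", ".go")):
    """Return (path, line_no, line) for every case-insensitive 'md5' in source files."""
    pattern = re.compile(r"md5", re.IGNORECASE)
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if pattern.search(line):
                            hits.append((path, lineno, line.rstrip()))
    return hits

# In a test suite: assert find_md5_references('src/') == []
```

Note this only catches the string "md5", not a renamed copy-pasted implementation of the algorithm, so it's a tripwire rather than a guarantee.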
All it means is that the audit is superficial and doesn't catch the error category, just famous examples within that category. That kind of superficial scanning may be worth something when unleashed on security-naive developers, or even as optional input for more experienced ones. But "hard compliance rules" and "superficial scans" combine to create a lot of busywork, which makes people less motivated to work with auditors instead of against them.
The resulting situation might of course not be a net benefit though :/
The fact is that if you have experienced engineers a security audit is rarely able to find anything. You would basically have to do code reviews, and this is hard / expensive, and even then rarely fruitful.
So, superficial scans, hardening, checking for obvious mistakes is really all you can do.
Making hard rules is unproductive, but then again, migrating from md5 to crc32 hopefully isn't very expensive.
IMO, crc32 is a better choice for testing for changes, and has the benefit of removing any doubt that the hash has any security properties.
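zlib ships CRC-32 in the standard library, so the change-detection version is tiny (the helper name is mine):

```python
import zlib

def file_crc32(path):
    """Incrementally CRC-32 a file: clearly a checksum, clearly not a security primitive."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF
```

Nobody will ever mistake a 32-bit CRC for a cryptographic hash, which makes the "is this security-relevant?" audit conversation a non-event.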
(Unless the person looking over your shoulder has a really good memory and can remember the Base64, or decode it in their head. Or they have a camera.)
It's game over anyway if someone has a shell on your server but at least it complicates their life a bit.
// We use MD5 to check if config files are changed. This is not used anywhere else.
typedef DigestMD5 ConfigFileHasher;
I always assume that people from the future who are going to touch my code are really dumb people, so I try to have as few traps as possible for them.
When the intention is a debugging server, making it exposed to the world is a mistake and a security vulnerability. At that point it is effectively a backdoor, but the difference between a high level vulnerability such as this and a backdoor is developer intent.
Every couple of weeks one can read about cases where some backdoor that was meant to be used "only for debugging" landed in the end product and became a security problem.
Actually, I usually suspect malice when something like that is found yet again, as in "who the hell could be so stupid as to deliver a product with a glaring backdoor?" But maybe there is something to Hanlon's razor… :-D
The difference, in my opinion, is in the documentation and frequency of use. Is it overt? Does the customer really know its there, and what its for?
It's perfectly fine to have an access panel that gives you access to the bus .. if the pilot knows you're doing it.
But if it's some random entrance in the back of an alley, and only 2 or 3 users in the universe know what it is and how to use it ..
Recently a reputable tech site wrote an article introducing DJI (ostensibly a company needing no introduction) as "Chinese-made drone app in Google Play spooks security researchers". One day later the same author wrote an article "Hackers actively exploit high-severity networking vulnerabilities" when referring to Cisco and F5. The difference in approach is quite staggering, especially considering that Cisco is known to have been involved, even unwittingly, in the NSA exploits leaked in the past.
This highlights the sentiment mentioned above: people ask the question only when they feel comfortable that the answer reinforces their opinion.
Since they just wanted to add some new features on top and present a better rack-based interface to the user, they decided to build a bigger box, put one of the old devices inside the box, then put a modern PC in there, and just link the two devices together with ethernet through an internal hub also connected to the backpanel port and call it a day.
The problem is, if you do an update, you need both the "front end" and the "back end" to coordinate their reboot. The vendor decided to fix this by adding a simple URL to the "backend" named: /backdoor/<product>Reboot?UUID=<fixed uuid>
Their sales team was not happy when I showed them an automated tool in a few lines of ruby that scans the network for backend devices and then just constantly reboots them.
They still sell this product today. We did not buy one.
They sold very expensive devices that were actually an off-the-shelf 1U PC with custom software (which provided the real value). The problem — and this dates it — was that the PCs had a game port¹, which gave away that this custom hardware was really just a regular consumer PC. So they had some fancy plastic panels made to clip on the front and hide the game port.
(With Unisys specifically, at one point they still made physical CPUs for high end models, but low end models were software emulation on x86; I’m not sure what they are doing right now.)
It was my first exposure to this sort of thing, and I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)
Given the shrinking market share of mainframes, the only way for vendors to continue to make money is to increase prices on those customers who remain – which, of course, gives them greater encouragement to migrate away, but for some customers the migration costs are going to be so high that it is still cheaper to pay megabucks to the mainframe vendor than do that migration. With emulated systems like the ones you saw, the high costs are not really for the hardware, they are for the mainframe emulation software, mainframe operating system, etc, but it is all sold together as a package.
At least IBM mainframes have a big enough history of popularity, that there are a lot of tools out there (and entire consulting businesses) to assist with porting IBM mainframe applications to more mainstream platforms. For the remaining non-IBM mainframe platforms (Unisys, Bull, Fujitsu, etc), a lot less tools and skilled warm bodies are available, which I imagine could make these platforms more expensive to migrate away from than IBM's.
I made the terrible mistake of jumping too far between versions, and the update broke iDRAC and thus the server. There was no warning on Dell's website nor any when I applied the update. I only found out what happened after some googling, where I found the upgrade path I should have taken.
This is just terrible quality control and software engineering.
It's even openly called "backdoor" in open source code directly related to it: https://github.com/vmware/open-vm-tools/blob/master/open-vm-...
> I must warn you about those jokes. Firstly, they are translated from Russian and Hebrew by yours truly, which may cause them to lose some of their charm. Secondly, I'm not sure they came with that much charm to begin with, because my taste in jokes (or otherwise) can be politely characterized as "lowbrow". In particular, all 3 jokes are based on the sewer/plumber metaphor. I didn't consciously collect them based on this criterion, it just turns out that I can't think of a better metaphor for programming.
Manhole is, indeed, an outdated term. Generally the preferred term is "Maintenance Hole". Still abbreviated MH, and people in the field use all three interchangeably (much like metric/imperial).
Source: I work with storm/sanitary/electrical maintenance holes.
Not necessarily, but see: https://en.wikipedia.org/wiki/Gender_neutrality_in_English#D...
The link is about the debate as it is, but I would also encourage the use of good faith in interpreting any speaker: that is, assuming a person referring to "mankind" likely means all humans without exclusion based on gender or sex, and requiring some other material evidence before presuming bias.
I also wonder what these discussions are like in languages where most nouns are gendered, e.g., in French.
OK, I exaggerate, there are still people that don't try to be "politically correct" and still use proper language, and know that there is such a thing called "Generisches Maskulinum (English: generic masculine)". But in more "official" writings or in the media the brain dead double-forms are used up until the point you can't read such texts any more: Those double-forms (which are not correct German) cause constant knots in the head when trying to read a text that was fucked up this way.
(Sorry for the strong words, but one just can't put it differently. As the existence of that browser extension clearly shows, I'm not alone in going mad about that rape of language. Also, often whole comment sections don't discuss the topic at hand; instead most people complain about the usage of broken "gendered" pseudo-politically-correct BS language. That noun-gendering is like a disease!)
The way it kicks to the curb words that previously carried neutrality but happened to share a spelling with the gendered form, and entrenches a two-gender paradigm, boggles the mind: it flies in the face of any form of inclusivity.
That, and I still don't know how to read "le.a fermi.er.ère" aloud. It's just as ridiculous as "cédérom", because Astérix makes a show of standing up to the invader.
Many instances of this are simply an artifact of "man" previously being an un-gendered term. But that fact is much harder to build group cohesion around than grievance is.
EDIT: didn't see the "or even" there. Disagree. I think the analogy can be drawn out a bit, so I'll say that a bruise can heal pretty quick, and one would adapt better to climbing "hills" if they exercised regularly. Plus maybe smaller hills should be climbed too.
Given that the US security apparatus clearly values and desires these back doors, and has the necessary power to coerce companies into making them, generalizing the use of "back door" as a term for debugging or whatever seems almost expected.
Even if they are for debugging, "oops, it's on in production!" is a great cover, because none of these companies will EVER admit that back doors were required by the government.
The same place had a boot script on every computer that wrote to a network-mounted file. Everyone had read permission on it (and probably write, but I didn't test), and the file contained user names, machine names, and date-times of every post-boot login for everyone on the domain, going back five years. I opened a ticket for that; it was never addressed.
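A minimal sketch of the kind of boot script described above, assuming a POSIX shell; the `LOGFILE` path here is a made-up stand-in for the network mount:

```shell
# Hypothetical reconstruction of the boot script described above.
# LOGFILE stands in for the network-mounted file; the path is made up.
LOGFILE="${LOGFILE:-/tmp/logins.log}"

# One line per boot: who logged in, on which machine, and when.
printf '%s %s %s\n' \
    "${USER:-$(id -un)}" \
    "$(hostname)" \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$LOGFILE"
```

With world-readable permissions on the share, this single file becomes a free map of who uses which machine, accumulating for years.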
>This code, to us, appears to involve the handling of memory error detection and correction rather than a "backdoor" in the security sense. The IOH SR 17 probably refers to scratchpad register 17 in the I/O hub, part of Intel's chipsets, that is used by firmware code.
When we talk about the CPU it's bad enough. Think of your program as having input and output streams that most of the app's data flows through, and that I can attach a debugger to and listen in on.
I would not be very happy about that, and would still consider it a backdoor.
Edit: files are here
So it seems the bare minimum you can do is "disable seeding", not "use a VPN".
(i) acquisition of a trade secret of another by a person who knows or has reason to know that the trade secret was acquired by improper means; or
(ii) disclosure or use of a trade secret of another without express or implied consent by a person who
    (A) used improper means to acquire knowledge of the trade secret; or
    (B) at the time of disclosure or use knew or had reason to know that his knowledge of the trade secret was
        (I) derived from or through a person who has utilized improper means to acquire it;
        (II) acquired under circumstances giving rise to a duty to maintain its secrecy or limit its use; or
        (III) derived from or through a person who owed a duty to the person seeking relief to maintain its secrecy or limit its use; or
    (C) before a material change of his position, knew or had reason to know that it was a trade secret and that knowledge of it had been acquired by accident or mistake.
I think the tg:// link is just the site trying to open up in the Telegram app.
The torrent works.
Asking because, as someone who uses FF as their daily driver, I'm surprised something is supported in it that isn't in Chrome...
> it's useless
does not compute
(At least the last time I had to download from MEGA, I RE'd what it does, and it was somewhat clever: AES-128 in counter mode, with the key in the hash part of the URL.)
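The key-in-the-fragment idea can be sketched in a few lines. This is a hypothetical illustration, not MEGA's actual link format: the point is only that the fragment (everything after `#`) is never sent to the server, so the host stores ciphertext it cannot decrypt.

```python
import base64
from urllib.parse import urlsplit

def key_from_link(url: str) -> bytes:
    """Extract an AES-128 key encoded in the URL fragment.

    Assumes the fragment is base64url without padding; the actual
    encoding a real service uses may differ.
    """
    frag = urlsplit(url).fragment            # part after '#', never sent to the server
    frag += "=" * (-len(frag) % 4)           # restore stripped base64 padding
    return base64.urlsafe_b64decode(frag)

# Hypothetical example link; the fragment encodes a 16-byte key.
link = "https://example.com/file/abc123#AAECAwQFBgcICQoLDA0ODw"
key = key_from_link(link)
assert len(key) == 16                        # 128-bit key, suitable for AES-128-CTR
```

The server-side URL (path and query) identifies the ciphertext; actually decrypting it would then be AES-128 in CTR mode with this key, which needs a crypto library and the service's nonce/counter convention.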