Also, the validation requirements to obtain a code-signing certificate, while certainly not bulletproof, are not nothing: you need to send in articles of incorporation and your business needs a listing with a physical address and phone number in a public directory (e.g., bbb.org), and someone representing your business needs to pick up that phone when the cert validator calls it.
Your business name and physical address are injected into the certificate. Basically code-signing certificates make it easier for people to find you and sue you if they truly want to. I suspect that's the whole point.
The problem here is that the Notepad++ developer wants his certificate to say CN=Notepad++, but he won't be able to obtain that until he has some kind of business or organization registered in his jurisdiction under that name. Whereas he could probably obtain CN=FIRSTNAME LASTNAME immediately (just send in his driver's license during validation).
Why? If I expect to make four figures on spreading malware/adware, and I can assuage the nerves of people like you by spending two or three figures on a certificate, I'm going to buy the certificate and make it look all nice and pretty and take your money.
> Your business name and physical address are injected into the certificate. Basically code-signing certificates make it easier for people to find you and sue you if they truly want to. I suspect that's the whole point.
So I have to incorporate in Delaware, make up a fake address, and rent a burner phone for a while. I'm not seeing the downside.
> Delaware does not require director names and addresses to be listed in the Certificate of Incorporation.
Of course, MSFT/Apple etc will abuse it to kill apps they/govt don't like.
Code signing is a bit like gun control. It really doesn't solve the problem at all. It just pushes it up a level, and makes things more difficult for legitimate users.
It also lines up incentives so that, in the grand scheme of things, the preferred model of software distribution shifts toward for-profit code.
While code signing is a neat technical solution, it's still a technical solution parading about as a solution to a social problem. And the social problem it is a solution to (that of untrustworthy folks existing) is not in any way mitigated by the act of signing as mentioned previously.
Although it costs $19 or $99 to sign up, one time.
They could enable this for win32 apps but they want to push things towards the walled garden.
Why can't the unsigned app be globally disabled? Is this not the basic premise behind Windows Defender and every other antivirus?
This is only true if there is some trust in what is signing them. If anyone can get one then anyone can sign the malicious version of the app with their own key, or one they stole from someone else. The user doesn't know who is supposed to be signing the app -- and if they did then you could be using TOFU or importing the expected source's key from a trusted channel without having to pay fees to anyone.
> and if it was malicious from the start, it can be disabled.
In the same way that Defender can block it. Then the attacker makes a new version signed with a different key.
The problem with CA-based signing is that it's a garbage trade off. If you make it easy to get a signing key, the attacker can easily get more and it does nothing. If you make it hard, you're kicking small developers in the teeth.
> The non-signed up can have zillions of malicious variants which something like Defender may or may not catch.
Which is still possible with code signing. The attacker gets their own key, uses it to infect many users, then some of those users are developers with their own signing keys and the attacker can use each of those keys to infect even more people and get even more keys.
Using keys as a rate limiter doesn't really work when one key can get you many more.
> It also gets a shot of circumventing (or even exploiting) AV.
As opposed to a shot at exploiting the signature verification method and the AV.
There is a better version of this that doesn't require expensive code signing certificates. You have the developer host their code signing key(s) on their website, served over HTTPS. Then the name displayed in the "do you trust them" box is the name of the website -- which is what the user is likely more familiar with anyway. If the program is signed by a key served on the website, and the user trusts the website, then you're done.
The application itself can still be obtained from another source, only the key has to be from the developer's website. Then future versions of the software signed with the same key can be trusted, but compromised keys can be revoked (and then replacements obtained from the website again).
This is better in every way than paying for EV certificates. It doesn't cost the developer anything, because they already have a domain (and if not they're very inexpensive and independently useful). But the attacker can't just register thousands of garbage domains because they're displayed to the user and nobody is going to trust "jdyfihjasdfhjkas.ru" or in principle anything other than the known developer's actual website, which the user is more likely to actually be familiar with than the legal name of the developer or their company.
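To make that concrete, here's a minimal sketch of the verification side, assuming the developer serves an Ed25519 public key at a well-known HTTPS path (the URL, key format, and detached signature file are all assumptions, not an existing standard):

    # Minimal sketch: fetch the developer's signing key over HTTPS and use it
    # to verify a detached signature on the installer. The key fetch is trust-
    # anchored to the developer's domain -- the name the user would be shown.
    import urllib.request
    from cryptography.hazmat.primitives.serialization import load_pem_public_key
    from cryptography.exceptions import InvalidSignature

    KEY_URL = "https://notepad-plus-plus.org/signing-key.pem"  # hypothetical path

    def verify_download(installer_path: str, sig_path: str) -> bool:
        with urllib.request.urlopen(KEY_URL) as resp:
            public_key = load_pem_public_key(resp.read())
        with open(installer_path, "rb") as f:
            data = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, data)  # Ed25519: raises on mismatch
            return True
        except InvalidSignature:
            return False

Note that only the key fetch touches the developer's site; the installer and the signature file themselves can come from any mirror.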
And the technical arguments in favor of code signing are weak. They started off claiming a major benefit -- globally disable malicious code. Except that AV can do that too. The argument in favor of having code signing on top of that then becomes weaker -- AV can stop identified malicious code but it can't stop other malicious code from the same malware author. Except that code signing can't do that either since the malware author can sign other versions with different keys. So then the argument becomes, well, at least it rate limits how many different versions there are. Except that is only meaningful to the extent that getting a new key is arduous and not a lot of people have them, otherwise the attacker can get arbitrarily many more by either just applying for more under false identities or by compromising a moderate number of machines to capture more keys from the large number of people who have them. Moreover, using domain validation would already capture the case where you want to get the incremental benefit achievable from a minimal imposition on the developer.
Meanwhile the process of obtaining a code signing key has to be sufficiently easy and non-exclusive that even individual developers can reasonably do it, so making it purposely more arduous than that is a directly conflicting requirement.
The explanation is long because the details are relevant, not because anything "obviously technically inaccurate" is there.
Blocking a specific executable blocks only that one. Depending on the AV used, simply rebuilding may get you through (different hash); some trivial modifications will do.
The big scammers, the ones who are the most likely to have an actual business plan for selling my information, aren't.
Note that I know Adobe is "trusted" in the relevant sense.
Note that I don't think Adobe is trustworthy in any real sense.
The scheme shifts the need to trust from Random Q. Hacker to the certificate-issuing authority, and that only helps if the authority is more trustworthy than the individual. If they don't put forth an effort to really dig in to those applying for certificates, they're just selling costumes for the security theater.
I trust Microsoft more than someone I have never heard of, but I don't inherently trust them more than the informal assembly of Notepad++ contributors and lead FOSS developer Don Ho. If Microsoft's code-signing certificate validation process is not capable of recognizing organizations that are not formally incorporated, and allowing them to use the name of their brand, rather than the names of their lead developers or maintainers, they are leaving a huge fraction of my installs hanging in the wind.
People might forget that "caveat emptor" still applies, even in a walled garden.
The same goes for developers, as you can see in this article.
This is a misapplication of Bayes' theorem.
P(bad) is the probability any app is bad
P(signed) is the probability any app is signed
P(bad if signed) = P(signed if bad) * P(bad) / P(signed)
Malware authors have incentive to make their apps appear legitimate, either by stealing keys, impersonating companies, or other mechanisms. Signing also helps get past automated checks (per the paper above).
Further, those probabilities assume random distribution, but I'd suggest that really expensive/dangerous malware has greater incentives to appear safe, so it is even more likely to be signed, even if most malware is not. Stuxnet would be a case in point - high value, sophisticated malware, signed.
P(really bad if signed) = P(signed if really bad) * P(really bad) / P(signed)
P(really bad) is lower,
but P(signed if really bad) approaches 1,
so P(really bad if signed) approaches P(really bad) / P(signed), which is actually higher than P(really bad) itself
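Plugging in some made-up numbers makes the asymmetry concrete (all four input probabilities below are assumptions, purely for illustration):

    # Worked example of the two Bayes calculations above; every input here
    # is an invented, illustrative number.
    p_bad = 0.05            # P(bad): say 5% of apps are malware
    p_signed = 0.60         # P(signed): say 60% of apps are signed
    p_signed_if_bad = 0.10  # run-of-the-mill malware rarely bothers to sign

    p_bad_if_signed = p_signed_if_bad * p_bad / p_signed
    print(p_bad_if_signed)  # ~0.008: a signature does cut the odds for ordinary malware

    p_really_bad = 0.001           # P(really bad): rare, high-value malware
    p_signed_if_really_bad = 0.95  # ...which almost always signs (Stuxnet-style)

    p_really_bad_if_signed = p_signed_if_really_bad * p_really_bad / p_signed
    print(p_really_bad_if_signed)  # ~0.0016: higher than P(really bad) = 0.001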
It's a fair belief because paying for something leaves a paper trail. MS certificates are a farce at $500/year, but Google's $20 once is a very reasonable thing.
Your point that really bad malware has higher odds of being signed is a good one, but really bad malware is much less likely than simply "malware".
When the Transmission bittorrent client site was hacked to distribute ransomware, it was signed using an unrelated certificate that was likely stolen. This happened twice within a year, with different valid (stolen) certificates.
Stuxnet certificates were also stolen.
This disproves the GGP's premise that a signed app implies the developer paid for it, as well as your assumption that the paper trail for legally acquiring a certificate is an impediment to signing malware.
You're not only trusting the developer who purchased the certificate and the CA that granted the certificate, but also trusting the ongoing security of everybody else who has purchased a trusted certificate. That's a pretty open circle of trust.
Certificate revocation can limit the time of exposure once malware is distributed, but it isn't always implemented.
"they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid."
Also, why the requirement to have a corporation? If I develop open source software again, do I need to register a corporation just to deploy that software on Windows the correct way? That is total bullshit to me; just let me sign my software like it's done on other platforms, for nothing or a very small fee, and be done.
$20 doesn't get you an EV certificate anywhere.
We're talking about non-trivial hundreds of dollars per year, which is completely unsustainable for an open source driver for example.
$61/yr (USD) will get you an OV cert - there are 10% discount codes that are easy to find for these guys, and their list price is $67/yr:
But the Notepad++ guy will need a business registered with that name before he can obtain CN=Notepad++, no matter how much he's willing to pay.
Yes, that's correct. EV just skips the reputation building phase.
> Programs signed by an EV code signing certificate can immediately establish reputation with SmartScreen reputation services even if no prior reputation exists for that file or publisher.
No user will understand this though.
Signing and certificates revolve around trust/mistrust in the delivery channel not in the purveyor.
That problem can be solved with other tools, like PGP. You don't have to be blackmailed by a platform's certificate racket.
It kind of works that way in the Linux world, where artifacts are PGP-signed and getting your key into a distro's store requires "reputation". With the caveat that different distros have different schemes.
X.509 as used by Windows has two nice properties that PGP doesn't: certificate attestation (MS can be sure your private key is on a hardware token) and timestamping (even if the cert expires, a signature that carries a timestamp remains valid).
... none of them financial.
I'm not saying that financial incentives are bad, necessarily, but I am saying that being able/forced to buy your way in privileges the most organized scammers, the ones who have a cogent business plan to make money from their chicanery and some seed capital, over programmers who don't have money, have no expectation of making money, and are only motivated by getting their code out there and used.
Debian has a Social Contract. Microsoft has a pricetag. I know which of them Adobe is more comfortable with.
This works both ways, because legitimate software developers also don't have easy ways of pushing their signed software to end users. Usually step 1 in installing software from an external developer is "get my PGP key imported".
I don't mean the Linux distro model is worse or that the Windows model is better. What I mean is that neither is significantly better than the other. Just different, with different trade-offs.
Even #%@! Oracle does it:
I wonder what’s the point of the PGP key then.
Trust on First Use. Once the key is imported it stays the same.
People working for that organization can sign the key to attest it's real (Web of Trust). Although I wonder how they would check it. Organization (non-individual) keys are weird, because ultimately it's just an individual behind them.
"No export" flag is not the same. What I'm talking about is keys stored in hardware modules (TPM, Yubikey) so that the private key is never disclosed, you can only ask the hardware to perform actions using that key.
See for example Yubikey docs: https://developers.yubico.com/PIV/Introduction/PIV_attestati...
> There are many out there, maybe you could get a false timestamp out of them.
Maybe? That's how the CA model works; they are trusted third parties. Code signing CAs are required to operate timestamping services, so if getting a cert from them is not a security issue, timestamping should be fine too.
PGP, on the other hand, used in a Web of Trust model, makes every valid key a CA. Not to mention that PGP doesn't have extended key usage flags, so signing software is the same as signing e-mail (you cannot specify that a key be used for code signing exclusively).
> you need to send in articles of incorporation and your business needs a listing with a physical address and phone number in a public directory (e.g., bbb.org), and someone representing your business needs to pick up that phone when the cert validator calls it.
So you only trust code that comes from businesses?
Ever notice how most con-men wear nice suits? You're advocating for the digital equivalent.
Even in good old bureaucratic Germany this will take you less than an hour and cost you about 30€. Can't believe it can be much worse anywhere else.
I don't trust poor people either. Bigger chance that they're scamming .... because they need the money.
Reputation requirements either shouldn't have backdoors or shouldn't exist in the first place.
Arguably the Reputation requirement is more helpful than the information held in the certificate, since Reputation is hard to fake whereas that information is provided by the requestor and its validation depends on the CA's processes (which as I said varies wildly).
It is one of those "greater good" things. It does suck for FOSS however.
EV certificates are literally a reputation requirement backdoor.
If EV-signed apps had to deal with the same SmartScreen reputation requirements as non-EV-signed apps, Microsoft might actually have to address this issue brought up in the parent comment:
> Every time you have to get a new one, with same story of "reputation" again.
I will definitely pull this thread out next time someone complains that Apple is too expensive and that they are milking the poor developers...
For $400/year you're getting nothing. I was told here that I could go lower, but this is what I was paying for the last decade.
Other companies also milking their developers does not invalidate this argument.
(The only acceptable price is $0, IMO.)
Also, Windows' "trust system" seemingly works on unsigned binaries too. If you don't disable a bunch of security settings, you get blocked, similar to self-signed certificates on HTTPS. You can get around it by clicking a tiny button. If enough users do it, the software gets flagged as OK.
A certificate just means this happens per certificate, not per binary.
I've used them for years and can recommend them.
The parent comment's issue is that EV certificates are essentially required due to the poorly-designed SmartScreen reputation filter. The $85/yr certificate you're mentioning doesn't help solve this.
I've never used an EV cert before for code signing. When we first started, I think Smartscreen was a nuisance for about 2 weeks, but years on, and I've never had to think about it again. Even when we've renewed the cert.
When you renewed the cert did you use the same key pair? (I'm wondering how does Microsoft correlate reputation).
The current code-signing-certificate model is pointless, regardless of price.
Having said that, Smartscreen is opaque, and a nuisance.
Oh, no. We just kept renewing our EV certs with them for past several years... if only we'd known that we can't. Damn. Such an amateur shop this Digicert. Unacceptable.
It's a nightmare. Complete scam.
I needed this for Polar: https://getpolarized.io/
Mind you... it's Open Source but I still want my users to be able to download it without warnings.
No joke - it took me 2 weeks to get the CSC with about 4 hours per day working on just this CSC issue.
It's just a labyrinth of insanity from not having a listing on D&B to them insisting I pay $2k to expedite it.
I still don't have one from Apple because it requires a D&B number so I had to get a personal cert from them.
I went with a cheap one for Windows BUT it gives errors on install for like the first 1k downloads until Windows says it's legit.
It's a complete scam.
BTW.. if you get in the MS App Store you don't have to worry about a CSC so that's good I guess.
Last I checked expedited D&B was around $40 USD (10 business days) and same-day D&B around $500 USD.
Free D&B said it would take 30 business days, but it actually only took them 5 business days when I applied for it.
First, we ran Windows under Parallels on a Mac Mini (we needed the build machine to handle Mac builds as well). I think I set the Windows VM to never lock the screen or sign you out, but the physical Mac would lock its own screen as usual. You could set things up the same way with a VMware VM running on a Windows host.
Then the only problem to solve was how to type in the signing password every time the certificate utility popped up its password dialog.
Like so many things on Windows, it was AutoHotkey to the rescue! I wrote a little 5-10 line AutoHotkey script to watch for the password dialog opening, type in the password and hit Enter.
Bingo, we had fully automated EV code signing for our Windows builds.
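The script was nothing fancy. I don't have the original AutoHotkey handy, but a rough Python equivalent of the same idea looks like this (the dialog title, poll interval, and the pyautogui/pygetwindow dependency are all assumptions):

    # Rough Python equivalent of the AutoHotkey approach described above:
    # poll for the signing utility's password dialog, then type the password.
    import time
    import pyautogui  # window handling comes via pygetwindow on Windows

    DIALOG_TITLE = "Token Logon"  # hypothetical title of the password prompt
    PASSWORD = "hunter2"          # stored in plain text, same trade-off as noted below

    while True:
        windows = pyautogui.getWindowsWithTitle(DIALOG_TITLE)
        if windows:
            windows[0].activate()          # focus the dialog
            pyautogui.typewrite(PASSWORD)  # type the signing password
            pyautogui.press("enter")       # submit
        time.sleep(1)  # check once a second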
You mentioned that logging out or letting the screen go to sleep would lock the cert. I don't quite remember it that way, but I could be remembering wrong. It seemed that the certificate utility simply wanted a password typed into its dialog for every code signing.
So it may be that this AutoHotkey setup would also work with the cert utility running on a physical Windows machine with normal screen locking. In any case, it definitely worked great in a VM.
Of course this meant that we had to store the signing password in plain text inside the VM, but that was a lesser evil than requiring someone to babysit the machine whenever we pushed a build.
I did need to get into D&B, and it was a bit of a faff - their website is a maze, and it took around a week after filling the form to get listed. Didn't need much time on it though.
One of the other requirements I had to fulfil was having a telephone number published in a sanctioned list of websites for a callback - so I registered a Skype number, published the number, did the callback, and terminated the number. Not sure what that was meant to prove...
Now all we have are app store and certificate rackets. I'm looking at Google and Apple too. Shame on the industry for accepting a 30% revenue share on their services. The idea of an app store is great, but not when it excludes other legitimate ways of installing software on a device.
These practices are anticompetitive and monopolistic.
Good for Notepad++. I couldn't agree more with its sentiment.
Is there any evidence that was ever really a thing / effective?
How could you possibly know?
There are plenty of examples of previously trustworthy software becoming untrustworthy, same with sites you download the code from.
That line reads like the absurd advice that security experts put out about "only download something you trust", ignoring that nobody has a clue how to evaluate that aside from, say, limiting themselves to FOSS and reading all the code...
This is mostly a meme from the overzealous FOSS and privacy crowd, not the security crowd. Professional security engineers do not, as a rule, encourage software engineers (or end users more generally) to only use open source software because "you can inspect the code for vulnerabilities."
Anyone with legitimate security expertise will understand the benefits of specialization and core competencies. Namely that despite the ideological perspective of many in the FOSS community, it is actually better to trust someone else with the security of your software. Because you most likely can't trust yourself with that task anyway.
The idea that most people can reliably identify security vulnerabilities in the software they use just because it's open source is laughable. They might find trivial low hanging fruit or obvious malicious activity, but they won't have a better picture of the overall security posture just because they can read the code.
As an obvious case in point, consider how few people identify vulnerabilities in Firefox versus how many people use Firefox. The people who write complex open source software don't even reliably find the issues in their own code.
I intended it to just be a comparison between the absurd "only download something you trust" advice you get from "security" people you see on TV or something... and what the user was suggesting about the old days.
My FOSS comment was really meant to reflect the absolute rabbit hole you go down when it is suggested people can simply protect themselves. It's a never-ending flow of tasks and things you need to know; I don't think anyone can do it all...
All else being equal, I'd certainly have more trust in the FOSS version. Yes, I won't audit it myself, but source we can compile ourselves is still easier to audit. As such, I'd hope more people will have eyes on it than without source access.
Same for a hash of the executable I download being generated by a reproducible build. And I prefer to download and run the same installer as everyone else rather than someone offering a custom download link just for me.
None of that means I don't have to trust the project/maintainer. But it is a bit of extra safety I want in some cases, e.g. for a password manager.
And many 'professional security engineers' ignore core security (e.g. auditing protocols, connections, user access, etc) to push AV software, 2fa tokens, NIDS, version/patchlevel compliance infrastructure, and other 'security tools', because this is easier and comes with 'vendor support'.
However, I don't argue that these tools are a 'meme' simply because there are a few people who don't understand the whole picture.
I could just as easily argue:
"The idea that most people can reliably identify security vulnerabilities in the systems they use just because it's protected by vulnerability scanning tools is laughable. They might find trivial low hanging fruit or obvious malicious activity, but they won't have a better picture of the overall security posture just because they can read the audit report."
Don't forget, you have to compile from source as well. I'm thinking the parent you replied to forgot how awful sourceforge was, and even trustworthy projects could have garbage bundled in.
(In reference to the classic paper: https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...)
I'm thinking along the lines of google's blog on Meltdown and Spectre recently that basically said if two things are on the same processor.... there's nothing you can do now security wise and be sure about it.
> Hi, president of SourceForge here. Glad this is trending, albeit a few months later. These articles seem to trend on HN every few months, with many people not realizing SourceForge changed ownership in 2016 and that the new team's been working hard on improving. To be clear, we had nothing to do with the bundled adware decisions of 2015, and when we took over in 2016, the first thing we did was remove the bundled adware, as well as institute malware scans for every project on the site. We're working hard to restore trust, so if we win some of you back that would be cool. However, we're just focused on doing right by our million daily users.
Let me guess: you also dislike GitHub because it's closed, and wish people would distribute software from their own, self-hosted git repositories?
Sourceforge is a hostile source of malware: https://mail.gnome.org/archives/gimp-developer-list/2015-May...
You mock those who desire freedom at your own risk. Github is microsoft now, and supporting it feeds the beast.
Under new management, SourceForge moves to put badness in past https://arstechnica.com/information-technology/2016/06/under...
Oh no, it's the 90s and Micro$oft is evil! They're going to embrace, extend, extinguish all my open source software! I won't be free to develop software anymore!
I'm quite happy with Microsoft, and if supporting GitHub "feeds the beast", then I'm likewise happy to see that beast well-fed.
That might be because we (at least on sites like this) are being overzealous about security. Or it might mean that in the modern environment, security is more of a concern than in the halcyon days of MS-DOS.
The security paranoid experts and FOSS zealots have always thought for some inexplicable reasons that if you can download a source and build the program yourself, then it's safe.
Code signing is something you can do on both open-source or closed-source, but it doesn't prove anything other than that a particular build was made by a certain person.
But that's what trust actually is. This IRL person or identity, that I trust, vouches for the non-maliciousness of this application.
I think only a loud (very small) minority think that. The rest of us know that's silly, and bringing it up to prove some point against "FOSS zealots" is also silly. FOSS does allow for independent code reviews (which do happen on some projects), but that's not the only reason FOSS > proprietary crap.
Your use of "crap" to describe proprietary software betrays your bias. It's dangerous to be emotional when we're talking about security, it's important to remain objective and data driven.
Not all proprietary software is crap and not all open source is safe. It's not that uncommon that companies whose very livelihood depends on their code being secure invest much more time and money auditing it, while open source is often a lot more lenient, because there's no real accountability nor negative effects for shipping insecure software.
I'm not sure either is that much better at this point!
Any scum can pay money to get a certificate; you don't have to pass an ethics examination.
It is just a racket.
I think I like the old way better.
Your old way still exists. Approximately all real world users demonstrably prefer the new way, but if you're set on your old way you can still write software for macOS and Windows. It will pop a warning, but you can make it run anyway. Give your users instructions for bypassing those warnings. If you trust them to manually verify your software then you trust them to follow those directions.
If you're selling software, this probably isn't a good strategy. However, I consider it perfectly reasonable for software that's being created in one's spare time and offered for free.
(I think the current situation we have on Windows / macOS is a perfectly fine balance, btw—except for drivers, where installing unsigned copies is way too difficult)
Perception is everything in those first few moments a user gets his hands on your software. But now, if I don't pay the entrance fee, I potentially get abandoned by said user because they have been scared off.
It's not an app store yet. Give it 5 or 10 years, when we no longer have an open web and all we have are walled gardens.
You downloaded it to an isolated environment and ran it and proved it didn't cause unexpected side effects. It was run behind a firewall that could log internet communication. If it proved to be good, you ran it in your main environment. If it proved to be bad you warned everyone who would listen to you.
I feel like the more "social" the web appears to be becoming, the less social it actually is. We had tight little BBS and IRC communities where this stuff was all discussed. Social meant we actually had meaningful conversations... okay, not always meaningful, but it was often about the pursuit of something useful.
It's nigh on impossible with the proliferation of the internet to take this approach these days. Things just didn't scale that well. This is why things like DD-WRT and other hacker sites while being more accessible are still very much a niche market.
It's funny how much more technical you had to be back then just to get online. It's as if we're driving the automatic version of the internet now. Back then we had to cobble all the pieces together, and even when you had, it was like driving with a manual gearshift.
and convinced yourself it didn't cause unexpected side effects.
Put a one month timer in your malware, and this would get past many of such attempts to ‘prove’ the software isn’t evil.
Add a few countermeasures against clock shifts (e.g. only be active a few minutes each month or only activate when a) enough time and b) enough user interactions have passed from the first run), and you’ll effectively get past most, if not all, of such black-box testing exercises.
To be fair, given the virus proliferation of today's age, we were still more successful with the rudimentary precautions back then than seems to be the case today.
Are you asking if software existed before code signing? Yes, it did.
IMO he was clearly referring to 'thing' as 'an effective thing' for preventing viruses, as opposed to asking whether people downloaded software before code signing.
Of course not, the person who wrote that has absolutely no concept of the modern Internet user, and probably thinks themselves materially better than them.
If your security model is "do your research" then you're going to fail.
Perhaps it's 'safer' but now we all ask permission and pay a huge tax to release software.
Is this a better system?
The web is alive but under attack. What freedoms will devs have left?
We will either pay comcast or we will pay google and apple. Either way the entrance fee for development just became a lot higher.
I’ve been using personal computers for over 30 years and I don’t remember those days.
I remember that 99/100 users didn’t think before installing a free screensaver and got hit with some kind of malware.
While I disagree with how centralized this practice is I’m applauding that an effort is being made to keep malware off consumer devices.
What we need is a decentralized and self-policing ecosystem. A Wikipedia of validation.
Something that's truly monumental to pull together.
I understand why so many vendors have centralized their efforts.
These are public entities.
We must now contend with these power brokers and ask their permission for every install.
With attacks getting more and more sophisticated, just downloading from the legitimate site no longer guarantees integrity. Sometimes not even having the exe signed does it, given supply chain attacks where code is injected long before signing.
Perhaps also posting the hash on a public website (a tweet, for example) 24h before making the installer available would mitigate the code signing issue. You could use it as an authoritative source that's less likely to be breached/modified after posting without immediately raising flags. And if it was an unauthorized post, the 24h also gives the author enough time to flag it. Supply chain attacks are not considered here.
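Generating the hash to post is the easy part; a small sketch (SHA-256, with a placeholder file name):

    # Compute the installer hash you'd post 24h ahead of release.
    import hashlib

    def installer_hash(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    print(installer_hash("npp.installer.x64.exe"))  # placeholder file name

Users (or tooling) can then compare this against the hash of whatever they actually downloaded.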
The app stores do not offer any kind of information that would help me put my trust in the developer and application.
Certificates are a fallacy that tell nothing about the security of the app. They just tell you that the app was signed by someone who has a security certificate.
Also, why do all security experts want to shove their security agenda onto the world? If you are dealing with a life-or-death matter, then it is probably better not to expose your world to the wild. For everything else it is OK to live without the safety net; it is OK if someone loses some money or your mum's disk gets encrypted. People fall every day, and I sincerely hope that there will not be some police that forces you to wear a helmet as soon as you get out of bed.
The article is about signing the Windows binary that you download from the developer's site so when the UAC dialog pops up on installation it shows a valid publisher. Nothing to do with an app store.
> The app stores
Again, different topic.
> Certificates are a fallacy that tell nothing about the security of the app.
Actually they do. It's a 0 effort way for the user to tell that the installer was not modified between compilation and installation. How do you tell that the developer's site wasn't hacked and the installer doesn't have some bundled malware?
> Also, why do all security experts want to shove their security agenda onto the world?
Because that 1 million computer botnet attacking your site may just be made up of a lot of people who don't know what certificates are for, how to check if a download is legit, etc.
A blockchain could be used for that. When you publish something, take its hash, and the add it to a public ledger.
I suggested the disseminated hash method because it would work, but most users won't bother checking it. Add blockchain to that and you've lost them completely. Unless you have a 1-click way of checking, something built into the OS ideally, it will only be used by the more tech-savvy users.
That's also zero effort, though.
I don't care if users aren't checking; I published the hash, so I'm covered. I am not liable for the behavior of random materials, even if they happen to be tampered versions of something I produced.
Even if you make a signed and certified installer, someone can turn it into a malicious unsigned one and people will install anyway. They will click through the UAC and that's that.
There is no benefit in the signing when the genuine program is being installed; there is no attack going on that the signing is protecting against. It's supposed to stop a counterfeit program.
If the goal is zero effort on the part of the user, then the scheme is doomed. The zero-effort user takes no interest in signing; he or she doesn't wonder "how come this dialog is coming up", they just click through it.
Someone who counterfeits programs and adds malware can even do one better: they can wrap the program in their own installer which has a valid certificate of their own.
The whole thing is a racket. You can't trust an artifact just because it was signed by someone whose only virtue was that they forked out $$$ for a certificate. Everyone has money, from heinous scoundrel to sparkling saint.
Your argument is that if a mechanism isn't perfect we shouldn't be using it at all. Well, seatbelts and airbags also don't guarantee anything, but they still save people. Security is additive; no layer is perfect, but put enough of them together and you can be relatively safe.
No IT system is perfectly secure. None. They can and will be hacked. The point is to raise the bar for that hack. You lock your house, your phone, put passwords on accounts, etc. even though they can all be broken into. Why do you do that?
You still haven’t mentioned the method you find acceptable. Even what I suggested originally (distributed hashes) can be bypassed. A determined attacker will impersonate the author, will take over the blockchain, will replace enough instances of the hash in the wild to make it look legit, etc. so you complicated everything without guaranteeing anything.
If you indeed support the idea that the cert/signature system is not perfect so it’s useless I’m sure you can mention one that offers guarantees. You’re arguing for the sake of arguing but brought no argument you can support. Have at it.
The sad thing is that this is still as important to do as it ever was -- but fewer people do it.
The Linux kernel, as of today, has ~20 million lines of code, and hundreds of vulnerabilities.
Since I presume that you are a user of Linux: what kind of research do you personally do to identify and protect yourself from the potential dangers of running your operating system? Have you read through all of those 20 million LOC and understood them in context? Do you compile your kernel from scratch? Do you completely and fully understand the machine code that your C compiler will transform them into?
This is all in stark contrast to macOS's System Integrity Protection, which I can turn off once to never be bothered again.
I understand why Microsoft would enforce higher standards on drivers which can touch the kernel. But, the same fundamental problem applies: it isn't reasonable for non-profit, open source developers—many of whom I consider perfectly trustworthy—to pay hundreds of dollars for a certificate! Let me make the final decision about who I trust. It's my machine—I even built it myself!
The primary place I run into this problem is with drivers to support weird video game controllers.
† You can enable a "testsigning" mode via the command line which persists across reboots, but this only seems to work for certain drivers. If anyone can explain why it sometimes works, I'd appreciate it, as my research has never turned up anything.
The plethora of support emails is what motivated me to get one in the first place. I used to get accused of giving users a "virus" and getting into infinite loops on why they should trust me. I'm sure I was wasting more than $100/year of my time responding to these emails, so I just gave in and got one.
Now, I don't know what to do.
What does code signing in Windows actually verify? That executable's author at some point paid money to some company that Microsoft deemed an "authority"?
It's a rotten system. The whole CA pyramid is bullshit.
What we really need is a way to know that executable notepad++2.0 is signed by the same person who signed notepad++1.0 already installed on your computer, and that it's the same person who controls notepad-plus-plus.org, and that this identity has existed for well over 10 years. This is legitimately useful info that would allow people to make more informed decisions about what to install.
BTW, the part about historic record seems like one of the few good uses for blockchain technology.
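The "same person signed the previous version" check is essentially trust-on-first-use key pinning. A minimal sketch of the idea (the pin store and the choice of SHA-256 over the signer cert's DER bytes are assumptions; extracting the cert from a real Authenticode signature is omitted):

    # TOFU pinning: remember the signer's certificate fingerprint on first
    # install, and require every later version to match it.
    import hashlib

    PIN_FILE = "notepad++.signer.pin"  # hypothetical per-app pin store

    def fingerprint(cert_der: bytes) -> str:
        return hashlib.sha256(cert_der).hexdigest()

    def check_signer(cert_der: bytes) -> bool:
        fp = fingerprint(cert_der)
        try:
            with open(PIN_FILE) as f:
                pinned = f.read().strip()
        except FileNotFoundError:
            with open(PIN_FILE, "w") as f:  # first install: trust and pin
                f.write(fp)
            return True
        return fp == pinned  # upgrades must be signed by the same identity

The "existed for well over 10 years" part is what the public, append-only record would add on top of this.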
Have you used Let's Encrypt? It verifies that you own the domain in question. HTTPS requires that the server you're connecting to has been identified.
"The third key weakness in the code-signing ecosystem was the failure of certificate authorities to verify the identities of people applying for code-signing certificates. Twenty-seven certificates in the group of 111 misappropriated certificates that the researchers identified fell into this class. Twenty-two of the certificates were improperly issued as a result of identity theft of a legitimate company. In some cases, malicious actors impersonated legitimate companies, in some cases ones that had no involvement at all in publishing software. In the remaining five cases, the certificates were issued to fraudulent shell companies. "
They're checking organisational or individual identity, which is a work intensive process (e.g. "email me your driver's license, business license, and tax return so I can manually review them.")
It might be possible for a charity to run a FOSS code signing CA, but it is unclear who's paying for that since it needs actual staff.
edit: updated price
Digicert lists EV code signing certs as $664/yr. But if you are to enter their site through a side door or just plainly cry into the support's jacket, then the price magically drops to $104/yr. And that's for an EV cert! So the only reason there are $600 certs is that there are people who do pay that.
I think the issue most people have with the price is that on iOS there's way more limits on running locally without a cert (7 days max), but for someone who is a practicing developer it's really a pretty low cost. If only it came with a mac to develop on...
Hah! I don't know about you but when I first started app development I didn't even have a credit card, let alone $99 on it to spend. Thankfully I could install my apps on android for free.
• The registration process is painfully manual (including e-mailing scanned documents). It's like an "Enterprise" CA from before Let's Encrypt.
• The website wouldn't send the final cert to any browser other than Internet Explorer.
• Microsoft's signing tools are hot garbage. All options default to "subtly wrong". To get a working signature you need a half dozen flags in an exact order, different from what the official documentation uses (see the sketch after this list).
• Microsoft's docs are either: a) plentiful but only tangentially related vague introduction, or b) scraps of incomplete technical information, mainly for Windows XP only. It's as if they've tried to improve it many times, but every time gave up after rewriting the first chapter.
• Signing can't be automated or used remotely, because its weirdo software wants a PIN entered from the local keyboard.
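With a file-based (non-EV) certificate the invocation can at least be scripted. Something along these lines is a plausible working combination, wrapped in Python for build automation; the certificate file, password handling, and timestamp server are assumptions:

    # Plausible signtool invocation for a PFX-file certificate; EV tokens
    # with a local PIN prompt defeat this kind of automation, as noted above.
    import subprocess

    subprocess.run([
        "signtool", "sign",
        "/fd", "SHA256",                         # file digest algorithm
        "/f", "codesign.pfx",                    # hypothetical certificate file
        "/p", "PFX_PASSWORD",                    # better: read from an env var
        "/tr", "http://timestamp.digicert.com",  # RFC 3161 timestamp server
        "/td", "SHA256",                         # timestamp digest algorithm
        "installer.exe",
    ], check=True)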
Follow the steps under “Buy a DigiCert EV code signing certificate“.
You’re welcome. ;)
Anyway, if you really want to get technical: if your sharing method is hashing identical pages, I would bet that you don't actually need the same version of the lib to get nonzero sharing. I would not be surprised if many libs have large ranges of identical pages across point releases. Then it's more a question of: was the same compiler used, did it optimize the same way.
The additional constraint of page boundaries will kill a lot of this. But I have literally seen before my eyes, in the form of looking at binary diffs, that library changes wind up a lot smaller than we expect. So I do think it is possible.
What really surprises me is that the author of something as great as Notepad++ isn't making enough money from the project to easily be able to pay for the certificate.
> However I cannot use "Notepad++" as CN to sign because Notepad++ doesn’t exist as company or organization
CAs would put the author's name as CN, which isn't great, especially for a collaborative project.
OK, I'll just buy an LLC from a state that's cheap (never mind the paperwork), but that's no good either, because the new entity has no listed phone number...
When I bought code signing certificate for my LLC, in their infinite wisdom CA put "Spółka z ograniczoną odpowiedzialnością" as CN, because that's what they saw on proof of ownership. "Spółka z ograniczoną odpowiedzialnością" literally means "Limited liability company" in Polish.
Edit: downvoted? The project is GPL, not a revenue source.