Notepad++ drops code signing for its releases (notepad-plus-plus.org)
496 points by pmh 14 days ago | 326 comments



Windows signing is a ripoff: at $500/year[1] you're getting nothing. Your certificate is not trusted; you have to "get reputation for it" before Windows Defender stops giving users warnings. Also, renewing a certificate is not a thing - every time you have to get a new one, with the same "reputation" story all over again.

[1] https://www.digicert.com/order/order-1.php


Funny thing about trust: I trust a developer who drops some $$$ on a code-signing certificate more than I trust a developer who doesn't. Even if it's just $20.

Also, the validation requirements to obtain a code-signing certificate, while certainly not bulletproof, are not nothing: you need to send in articles of incorporation and your business needs a listing with a physical address and phone number in a public directory (e.g., bbb.org), and someone representing your business needs to pick up that phone when the cert validator calls it.

Your business name and physical address are injected into the certificate. Basically code-signing certificates make it easier for people to find you and sue you if they truly want to. I suspect that's the whole point.

The problem here is that the Notepad++ developer wants his certificate to say CN=Notepad++, but he won't be able to obtain that until he has some kind of business or organization registered in his jurisdiction with that name. Whereas CN=FIRSTNAME LASTNAME he could probably obtain immediately (just send in his driver's license during validation).


> Funny thing about trust: I trust a developer who drops some $$$ on a code-signing certificate more than I trust a developer who doesn't. Even if it's just $20.

Why? If I expect to make four figures on spreading malware/adware, and I can assuage the nerves of people like you by spending two or three figures on a certificate, I'm going to buy the certificate and make it look all nice and pretty and take your money.

> Your business name and physical address are injected into the certificate. Basically code-signing certificates make it easier for people to find you and sue you if they truly want to. I suspect that's the whole point.

So I have to incorporate in Delaware, make up a fake address, and rent a burner phone for a while. I'm not seeing the downside.

https://www.bizfilings.com/toolkit/research-topics/incorpora...

> Delaware does not require director names and addresses to be listed in the Certificate of Incorporation.


They didn't say they trust the developer who spends the money absolutely, they said they trust the developer who spends the money more than they trust one who doesn't. Which is fair -- as you note, not every scammer will be scared off by the need to spend some money to pull the scam off; but some will, so the ratio of legitimate developers to illegitimate ones will be higher in markets where there's some cost to entry.


This is fallacious reasoning. A well intentioned open source developer who does not earn any money out of a labor of love has no incentive to further spend money to sign his app that he’s giving away for free anyway. On the flip side, a malicious actor that expects to earn money through a scam has every incentive to spend some money making the app look legit, especially if there is no risk involved.


The signed app can be globally disabled.


This could be easily achieved by Microsoft running a free signing service. Lowering the cost of signing to zero would significantly increase the proportion of signed apps.

The question was 'is someone who spends money for code signing more trustworthy than someone who doesn't', and it was being treated as if the trust, or at least the increase in comfort, somehow comes merely from the act of spending money. It's an opt-in to a service that mitigates the impact of malicious code.

The parent statement was that having signed apps made them easy to disable. If all apps had to be signed, everything would have a reputation hook and also be easy to disable. It's the insistence on the for-profit 'verified' code-signing ecosystem that makes signing ineffective.

Of course, MSFT/Apple etc will abuse it to kill apps they/govt don't like.


I don't really understand how any of this makes signing ineffective.

If the only way to play is to go through entrenched gatekeepers, who watches the watchers, hmmm? If anything this should be seen as a power grab by entrenched interests to have a cryptographic lever to pull to pre-emptively shut people out of what should be a decision at the user's discretion. Walled gardening at its finest.

Code signing is a bit like gun control. It really doesn't solve the problem at all. It just pushes it up a level, and makes things more difficult for legitimate users.

It also lines up incentives such that the preferred model of software distribution shifts in the grand scheme of things toward for profit code.

While code signing is a neat technical solution, it's still a technical solution parading about as a solution to a social problem. And the social problem it is a solution to (that of untrustworthy folks existing) is not in any way mitigated by the act of signing as mentioned previously.


I don't really understand how any of this makes signing ineffective.

Apps submitted to the Microsoft Store are signed by Microsoft (only), iirc.

Although it costs $19 or $99 to sign up, one time.


That is partly because only those apps can run on Windows S (also available in 1703: https://www.howtogeek.com/302352/how-to-allow-only-apps-from...)

They could enable this for win32 apps but they want to push things towards the walled garden.


> The signed app can be globally disabled.

Why can't the unsigned app be globally disabled? Is this not the basic premise behind Windows Defender and every other antivirus?


Well sure, a known-malicious app will be detected by Windows Defender, provided it has updates making it aware of the app. But a known-malicious signed app will also fail the code signature verification, in addition to the virus scan, if its certificate has been revoked.

In what way is this not two separate things uselessly duplicating the same functionality? If you can get a CRL you can get a definition update, and they both effectively do the same thing.

They are very much not the same thing. A signed app can be distributed from anywhere with the assurance it's the same app - it can't be maliciousified, and if it was malicious from the start, it can be disabled. The unsigned app can have zillions of malicious variants which something like Defender may or may not catch. It also gets a shot at circumventing (or even exploiting) AV.

Also, the "disable" in the signed case is much more powerful, since it disables all apps signed by the same key.

> A signed app can be distributed from anywhere with the assurance it's the same app - it can't be maliciousified

This is only true if there is some trust in what is signing them. If anyone can get one then anyone can sign the malicious version of the app with their own key, or one they stole from someone else. The user doesn't know who is supposed to be signing the app -- and if they did then you could be using TOFU or importing the expected source's key from a trusted channel without having to pay fees to anyone.

> and if it was malicious from the start, it can be disabled.

In the same way that Defender can block it. Then the attacker makes a new version signed with a different key.

The problem with CA-based signing is that it's a garbage trade off. If you make it easy to get a signing key, the attacker can easily get more and it does nothing. If you make it hard, you're kicking small developers in the teeth.

> The unsigned app can have zillions of malicious variants which something like Defender may or may not catch.

Which is still possible with code signing. The attacker gets their own key, uses it to infect many users, then some of those users are developers with their own signing keys and the attacker can use each of those keys to infect even more people and get even more keys.

Using keys as a rate limiter doesn't really work when one key can get you many more.

> It also gets a shot at circumventing (or even exploiting) AV.

As opposed to a shot at exploiting the signature verification method and the AV.

There is a better version of this that doesn't require expensive code-signing certificates. You have the developer host their code-signing key(s) on their website, served over HTTPS. Then the name displayed in the "do you trust them" box is the name of the website - which is what the user is likely more familiar with anyway. If the program is signed by a key served on the website, and the user trusts the website, then you're done.

The application itself can still be obtained from another source, only the key has to be from the developer's website. Then future versions of the software signed with the same key can be trusted, but compromised keys can be revoked (and then replacements obtained from the website again).

This is better in every way than paying for EV certificates. It doesn't cost the developer anything, because they already have a domain (and if not they're very inexpensive and independently useful). But the attacker can't just register thousands of garbage domains because they're displayed to the user and nobody is going to trust "jdyfihjasdfhjkas.ru" or in principle anything other than the known developer's actual website, which the user is more likely to actually be familiar with than the legal name of the developer or their company.
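As a minimal sketch of that flow (every name and string here is illustrative, not any real verifier's behavior; key matching is reduced to a fingerprint comparison):

```python
import hashlib


def trust_decision(site_key: bytes, package_key: bytes, site_domain: str) -> str:
    """Sketch of the proposed model: the package itself can come from any
    mirror, but the key that signed it must match the key the developer
    serves over HTTPS on their own domain, and the trust prompt shows that
    familiar domain rather than a legal entity name."""
    if hashlib.sha256(package_key).digest() != hashlib.sha256(site_key).digest():
        return "reject: package key does not match the key published on the site"
    return f"Do you trust software from {site_domain}?"


# Key fetched from any mirror matches the one the developer's site serves:
print(trust_decision(b"key-A", b"key-A", "notepad-plus-plus.org"))
# A repacked binary signed with a different key gets rejected outright:
print(trust_decision(b"key-A", b"key-B", "notepad-plus-plus.org"))
```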


I think if you don't like code signing for ideological/process reasons, you can argue that, preferably in reply to someone who wants to argue about it. But trying to work backwards from there to technical arguments that show how signing is the same thing as AV is futile, it just makes you type up longer versions of obviously technically inaccurate things.

There are good ideological reasons to not like code signing. But people present technical arguments in favor of it, which then need to be addressed so that people don't erroneously find them convincing.

And the technical arguments in favor of code signing are weak. They started off claiming a major benefit -- globally disable malicious code. Except that AV can do that too. The argument in favor of having code signing on top of that then becomes weaker -- AV can stop identified malicious code but it can't stop other malicious code from the same malware author. Except that code signing can't do that either since the malware author can sign other versions with different keys. So then the argument becomes, well, at least it rate limits how many different versions there are. Except that is only meaningful to the extent that getting a new key is arduous and not a lot of people have them, otherwise the attacker can get arbitrarily many more by either just applying for more under false identities or by compromising a moderate number of machines to capture more keys from the large number of people who have them. Moreover, using domain validation would already capture the case where you want to get the incremental benefit achievable from a minimal imposition on the developer.

Meanwhile the process of obtaining a code signing key has to be sufficiently easy and non-exclusive that even individual developers can reasonably do it, so making it purposely more arduous than that is a directly conflicting requirement.

The explanation is long because the details are relevant, not because anything "obviously technically inaccurate" is there.


Revoking a certificate invalidates the malicious executable and any other executables signed with it, present or future.

Blocking a specific executable blocks only that one. Depending on the AV used, simply rebuilding may get you through (different hash); some trivial modification will do.


> Which is fair -- as you note, not every scammer will be scared off by the need to spend some money to pull the scam off; but some will, so the ratio of legitimate developers to illegitimate ones will be higher in markets where there's some cost to entry.

The big scammers, the ones who are the most likely to have an actual business plan for selling my information, aren't.

Note that I know Adobe is "trusted" in the relevant sense.

Note that I don't think Adobe is trustworthy in any real sense.


The good scammers absolutely will not be scared off by the need to pay a penny to steal a dollar. You have to buy a cheap watch/violin/purse if you want to pass it off as an expensive one. You have to pay off in the back of the operation if you want to keep cash coming in through the front. Indeed, one of the easy ways to short-circuit human trust defenses is to make a show of trust first, such as by placing personal assets at risk. "Here, I'll trust you to hold my wallet full of $500 cash, while I drive your expensive late-model car--that's worth even more when shipped to mainland China--to go get help. You know I'm coming back, because $500 is a lot of money."

The scheme shifts the need to trust from Random Q. Hacker to the certificate-issuing authority, and that only helps if the authority is more trustworthy than the individual. If they don't put forth an effort to really dig in to those applying for certificates, they're just selling costumes for the security theater.

I trust Microsoft more than someone I have never heard of, but I don't inherently trust them more than the informal assembly of Notepad++ contributors and lead FOSS developer Don Ho. If Microsoft's code-signing certificate validation process is not capable of recognizing organizations that are not formally incorporated, and allowing them to use the name of their brand, rather than the names of their lead developers or maintainers, they are leaving a huge fraction of my installs hanging in the wind.


I don't disagree with your reasoning. But: I posit there are fewer "good scammers" than "scammers." Added friction probably reduces the total number of active scammers.

Doesn't that just clear out the low-quality, low-effort competition for the better scammers? And create a stronger presumption that any given person is not a scammer, because otherwise Apple/Microsoft/Google/Amazon/whomever would have kicked them out?

People might forget that "caveat emptor" still applies, even in a walled garden.


The huge number of fraudulent and malware-ish apps for android vs iOS does suggest that costs reduce the number of low quality attackers. I guess that is good for protecting the naive user. But I'm more concerned about protecting against the threat that will take your whole digital identity.

> not every scammer will be scared off by the need to spend some money to pull the scam off; but some will

The same goes for developers, as you can see in this article.


It is not fair. The same cost in money does not translate to the same cost in efforts to earn trust. This is structural discrimination.


You are not necessarily in disagreement with the poster you responded to. They meant it was fair in the sense that the user is using a fair heuristic to determine how likely they are to be scammed. You're saying it's unfair to the person who made the software, as they are experiencing structural discrimination. Those two points are not mutually exclusive.

So the person who spends the most amount of money is the one you trust the most? That's some interesting reasoning. Does that mean if I spend $21 you trust me more than the guy who spends $20?

I would imagine barrier to entry plays a large role in the sense of comfort here. If you have to incorporate in Delaware and pay $500 and jump through hoops, you’re more likely to turn to a simpler and easier alternative.


Exactly! I trust "you must do some things to obtain this" more than I trust "you don't have to do anything".


The part you're missing is that your Delaware-company-and-burner-phone certificate investment can be invalidated in two seconds by the certificate's trust getting revoked.

The part you're missing is that they won't be in practice. There's malware on virustotal with (still) valid code-signing certs.

I mean, if someone is going to commit a crime, forcing them to leave a paper trail is going to scare at least some of them off. And if I'm installing Adobe Photoshop, and the cert comes from Bob's Software Emporium, Delaware, it raises questions.

Is that actually true? How many people, when installing Photoshop, actually look at who issued the cert?

In Windows, the name of the publisher in the cert shows up on the UAC prompt whenever the program asks for elevated privileges. That's the point of this whole thread -- the author isn't paying for a cert because he can't make the UAC prompt say Notepad++ instead of his real name (which, he could, and I have no idea why he thinks it's so complicated, but there it is).

That doesn't have anything to do with the question I asked. See also, TLS exceptions.

> I trust a developer who drops some $$$ on a code-signing certificate more than I trust a developer who doesn't.

This is a misapplication of Bayes theorem.

  P(bad) is the probability any app is bad
  P(signed) is the probability any app is signed

  P(bad if signed) = P(signed if bad) * P(bad) / P(signed)
Essentially your trust model requires that "the fraction of bad apps that are signed is small", or that P(signed if bad) approaches 0. But signed malware is out there - famously Stuxnet, but others before and since: http://users.umiacs.umd.edu/~tdumitra/papers/CCS-2017.pdf

Malware authors have incentive to make their apps appear legitimate, either by stealing keys, impersonating companies, or other mechanisms. Signing also helps get past automated checks (per the paper above).

Further, those probabilities assume random distribution, but I'd suggest that really expensive/dangerous malware has greater incentives to appear safe, so it is even more likely to be signed, even if most malware is not. Stuxnet would be a case in point - high value, sophisticated malware, signed.

  P(really bad if signed) = P(signed if really bad) * P(really bad) / P(signed)

  P(really bad) is lower, 
  but P(signed if really bad) approaches 1, 
  so P(really bad if signed) approaches P(really bad)
Meaning the worse the malware, the less the signature tells you.
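The arithmetic above can be worked through with purely illustrative numbers (these are assumptions for the sake of the argument, not measurements):

```python
def p_bad_given_signed(p_bad: float, p_signed_given_bad: float,
                       p_signed: float) -> float:
    # Bayes' theorem: P(bad | signed) = P(signed | bad) * P(bad) / P(signed)
    return p_signed_given_bad * p_bad / p_signed


# Assume 5% of apps are bad, half of all apps are signed, and only 20% of
# malware bothers to get signed: the signature shrinks your risk estimate.
print(round(p_bad_given_signed(0.05, 0.20, 0.50), 4))  # 0.02

# The "really bad" case: P(signed if really bad) approaches P(signed), so
# the posterior collapses back to the prior - the signature tells you nothing.
print(round(p_bad_given_signed(0.05, 0.50, 0.50), 4))  # 0.05
```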

All the GP is saying is that he believes that P(signed if bad) < P(signed).

It's a fair belief because paying for something leaves a paper trail. MS certificates are a farce at $500/year, but Google's $20 once is a very reasonable thing.

Your point that really bad malware has higher odds of being signed is a good one, but really bad malware is much less likely than simply "malware".


I believe he's saying P(bad if signed) < P(bad if not signed) - "I trust signed apps more than unsigned apps". I'm saying, maybe, but I'm not sure, and Cost * P(bad if signed) may be much worse than Cost * P(bad if not signed).

When the Transmission bittorrent client site was hacked to distribute ransomware, it was signed using an unrelated certificate that was likely stolen. This happened twice within a year, with different valid (stolen) certificates:

https://blog.malwarebytes.com/threat-analysis/2016/09/transm...

Stuxnet certificates were also stolen.

This disproves the GGP's premise that a signed app implies the developer paid for it, as well as your assumption that the paper trail for legally acquiring a certificate is an impediment to signing malware.

You're not only trusting the developer who purchased the certificate and the CA that granted the certificate, but also trusting the ongoing security of everybody else who has purchased a trusted certificate. That's a pretty open circle of trust.

Certificate revocation can limit the time of exposure once malware is distributed, but it isn't always implemented.

https://arstechnica.com/information-technology/2017/11/evasi...

"they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid."


Well, I don't care whether the developer paid for the certificate, and I don't see why someone who develops FOSS should pay money for something that doesn't bring any of that money back to him. At least for open source software, certificates should be offered for free, in my opinion.

Also, why the requirement to have a corporation? If I develop open source software, do I need to register a corporation just to deploy that software on Windows the "correct" way? That's total bullshit to me; just let me sign my software the way it's done on other platforms, for nothing or a very small fee, and be done.


Certum offers cheap code signing certificates for open source developers. I don't know how "good" they are, though, just stumbled across them a while ago because MPC-BE is using one.

When we were getting EV certs from Digicert, somebody called up the office to confirm the name of the CEO. He answered the phone and the person told him he couldn't validate his own identity. So he passed the phone to the person sitting next to him, she said "Oh yeah, this is totally him sitting next to me" and we got our EV cert within the hour.

> Even if it's just $20.

$20 doesn't get you an EV certificate anywhere.

We're talking about non-trivial hundreds of dollars per year, which is completely unsustainable for an open source driver for example.


My point was I would trust someone who drops $20 more than I would trust someone who drops nothing.

$61/yr (USD) will get you an OV cert - there are 10% discount codes that are easy to find for these guys, and their list price is $67/yr:

https://codesigning.ksoftware.net/

But the Notepad++ guy will need a business registered with that name before he can obtain CN=Notepad++, no matter how much he's willing to pay.


This certificate doesn't help to bypass UAC, and the "unsecure" prompt will still be shown to the user.


That's incorrect. Any Authenticode code-signing certificate (trusted by Microsoft) turns the UAC prompt from yellow to blue. EV certificates are probably for auto-trust with SmartScreen.


> EV certificates is probably for auto-trust for smartscreen.

Yes, that's correct. EV just skips the reputation building phase.

Source: https://blogs.msdn.microsoft.com/ie/2012/08/14/microsoft-sma...

> Programs signed by an EV code signing certificate can immediately establish reputation with SmartScreen reputation services even if no prior reputation exists for that file or publisher.


Only after you mentioned that the UAC color is blue instead of yellow did I start to see the difference.

No user will understand this though.


Up until reading this thread I wasn't even aware that there were different UAC colours.

Your trust is misplaced; a developer who drops $$$ on a certificate could be a dyed-in-the-wool criminal. Just because code is signed and certified doesn't mean it doesn't do anything bad.

Signing and certificates revolve around trust/mistrust in the delivery channel not in the purveyor.

That problem can be solved with other tools, like PGP. You don't have to be blackmailed by a platform's certificate racket.


> That problem can be solved with other tools, like PGP. You don't have to be blackmailed by a platform's certificate racket.

It kind of works that way in Linux world where artifacts are PGP signed and to get your key into distro store one has to have "reputation". With the caveat that different distros have different schemes.

X.509 used by Windows has two nice properties that PGP doesn't - certificate attestation (MS can be sure your private key is on a hardware token) and timestamping (even if the cert expires if the signature has a timestamp it's still valid).
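The timestamping property can be illustrated with a toy rule (a deliberate simplification, not the actual Authenticode verification logic):

```python
from datetime import datetime, timezone

UTC = timezone.utc


def timestamped_signature_valid(not_before: datetime, not_after: datetime,
                                countersigned_at: datetime) -> bool:
    """Simplified timestamping rule: a signature carrying a trusted RFC 3161
    countersignature stays valid as long as the signing time falls inside the
    certificate's validity window - even if the check happens long after the
    certificate itself has expired."""
    return not_before <= countersigned_at <= not_after


not_before = datetime(2018, 1, 1, tzinfo=UTC)
not_after = datetime(2019, 1, 1, tzinfo=UTC)

# Signed and timestamped while the cert was valid: still trusted years later.
print(timestamped_signature_valid(not_before, not_after,
                                  datetime(2018, 6, 1, tzinfo=UTC)))  # True
# Signed after the cert expired: never trusted.
print(timestamped_signature_valid(not_before, not_after,
                                  datetime(2019, 6, 1, tzinfo=UTC)))  # False
```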


> It kind of works that way in Linux world where artifacts are PGP signed and to get your key into distro store one has to have "reputation". With the caveat that different distros have different schemes.

... none of them financial.

I'm not saying that financial incentives are bad, necessarily, but I am saying that being able/forced to buy your way in privileges the most organized scammers, the ones who have a cogent business plan to make money from their chicanery and some seed capital, over programmers who don't have money, have no expectation of making money, and are only motivated by getting their code out there and used.

Debian has a Social Contract. Microsoft has a pricetag. I know which of them Adobe is more comfortable with.


> I am saying that being able/forced to buy your way in privileges the most organized scammers

This works both ways because legitimate software developers also don't have easy ways of pushing their signed software to end users. Usually step 1 in installing software from external developer is "get my PGP key imported" [0].

[0]: https://www.sublimemerge.com/docs/linux_repositories

I don't mean Linux distro's model is worse or that Windows model is better. What I mean is that none of them is significantly better than the other. Just different with different trade-offs.


> Usually step 1 in installing software from external developer is "get my PGP key imported" [0].

Even #%@! Oracle does it:

https://www.virtualbox.org/wiki/Linux_Downloads

I wonder what’s the point of the PGP key then.


> I wonder what’s the point of the PGP key then.

Trust on First Use. Once the key is imported it stays the same.

People working for that organization can sign the key to attest it's real (Web of Trust), although I wonder how they would check it. Organization (non-individual) keys are weird because ultimately it's just an individual behind them.
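Trust On First Use is simple enough to sketch in a few lines (the in-memory pin store here is an assumption for brevity; a real implementation persists the pins, e.g. in a keyring file):

```python
import hashlib


def tofu_check(key_bytes: bytes, pin_store: dict, publisher: str) -> bool:
    """Trust On First Use: the first key seen for a publisher is fingerprinted
    and pinned; on every later encounter the key is accepted only if its
    fingerprint matches the pin."""
    fingerprint = hashlib.sha256(key_bytes).hexdigest()
    if publisher not in pin_store:
        pin_store[publisher] = fingerprint  # first use: trust and remember
        return True
    return pin_store[publisher] == fingerprint


pins = {}
print(tofu_check(b"oracle-key-v1", pins, "virtualbox.org"))  # True (pinned)
print(tofu_check(b"oracle-key-v1", pins, "virtualbox.org"))  # True (unchanged)
print(tofu_check(b"evil-key", pins, "virtualbox.org"))       # False (changed)
```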


Kinda. You can use mimikatz to override the checks that the private key is isolated; you can even override the 'no export' flag. Timestamping relies on external trusted timestamp providers implementing RFC 3161. There are many out there; maybe you could get a false timestamp out of one of them. I agree it could be stronger than PGP, but it suffers from a design flaw in that it considers the geometry of the PE file, whereas PGP signs the whole blob. CVE-2017-0215 is an example of a bypass by copying a previously signed header. It is more fragile and has been bypassed historically.


> You can use mimikatz to override the checks that the private key is isolated, you can even override 'no export' flag.

"No export" flag is not the same. What I'm talking about is keys stored in hardware modules (TPM, Yubikey) so that the private key is never disclosed, you can only ask the hardware to perform actions using that key.

See for example Yubikey docs: https://developers.yubico.com/PIV/Introduction/PIV_attestati...

> There are many out there, maybe you could get a false timestamp out of them.

Maybe? That's how the CA model works; they are trusted third parties. Code-signing CAs are required to operate timestamping services, so getting a cert from them is not a security issue; timestamping should also be fine.

PGP on the other hand if used in a Web of Trust model makes every valid key a CA. Not to mention that PGP doesn't have extended key usage flags so signing software is the same as signing e-mail (you cannot specify that you want to have this key be used for code signing exclusively).


> I trust a developer who drops some $$$ on a code-signing certificate more than I trust a developer who doesn't.

Why?

> you need to send in articles of incorporation and your business needs a listing with a physical address and phone number in a public directory (e.g., bbb.org), and someone representing your business needs to pick up that phone when the cert validator calls it.

So you only trust code that comes from businesses?


> Funny thing about trust: I trust a developer who drops some $$$ on a code-signing certificate more than I trust a developer who doesn't. Even if it's just $20.

Ever notice how most con men wear nice suits? You're advocating for the digital equivalent.


Ever notice how a con man in a nice suit is much more convincing than one in a track suit?

To me it's rather the opposite. Especially for open source software. Spend your time, money and energy on improving the actual product instead of wasting it on smoke and mirrors.

You can file a "doing business as" (DBA) certificate online for under $10, at least in Texas.


> but he won't be able to obtain that until he has some kind of business or organization registered in his jurisdiction with that name.

Even in good old bureaucratic Germany this will take you less than an hour and cost you about 30€. Can't believe it can be much worse anywhere else.


>Funny thing about trust: I trust a developer who drops some $$$ on a code-signing certificate more than I trust a developer who doesn't.

I don't trust poor people either. Bigger chance that they're scamming ... because they need the money.


The flip side of this reasoning is that successful scammers aren't poor, but they are very likely to still be scammers.

Wow.

You know our whole civilization is based on $$$ == trust. Just wait until you lend money to someone who is going to remind you every month he will pay you back because he is a good person, but you never see the money.

There's a backdoor that lets you bypass the SmartScreen reputation requirement: pay more money for an EV cert[1]. I don't agree with this industry practice.

Reputation requirements either shouldn't have backdoors or shouldn't exist in the first place.

1. https://twitter.com/JosephRyanRies/status/951643158118567937


The Reputation requirement exists simply because there are CAs in the Windows certificate store that aren't super trustworthy, and frankly, because malware could seek to get a code-signing certificate.

Arguably the Reputation requirement is more helpful than the information held in the certificate, since Reputation is hard to fake whereas that information is provided by the requestor and its validation depends on the CA's processes (which as I said varies wildly).

It is one of those "greater good" things. It does suck for FOSS however.


I'm not arguing against reputation requirements, I'm arguing for consistency.

EV certificates are literally a reputation requirement backdoor.

If EV-signed apps had to deal with the same SmartScreen reputation requirements as non-EV-signed apps, Microsoft might actually have to address this issue brought up in the parent comment:

> Every time you have to get a new one, with same story of "reputation" again.


I cannot stick an EV token into my cloud VPS build server.

EV code signing certs cost more because they require the private key to be stored exclusively on the hardware token so it's harder to misuse.


I have to say I agree here. Notepad++, provided it publishes hashes for the downloaded EXEs, is completely in the right for not wanting to pay middlemen for a fancy "this is OK" screen on installation. That seems ridiculous and greedy.
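Checking a downloaded installer against a published hash takes a few lines (the caveat being that the hash itself must come over a trusted channel, e.g. the project's HTTPS download page, or whoever tampers with the EXE can tamper with the hash too):

```python
import hashlib


def verify_download(path: str, published_sha256: str) -> bool:
    """Compare a downloaded file against the SHA-256 hash the project
    publishes, reading in chunks so large installers don't need to fit
    in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == published_sha256.lower()
```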


Especially for a program that caters to developers. People will understand.

Corporate IT security departments may not, though. Certificates can be used as a secure way of managing AppLocker exemptions.

Wait, are you saying that the Apple Developer Program at $99/year is actually quite a good deal in comparison?

I will definitely pull this thread out the next time someone complains that Apple is too expensive and milking the poor developers...


For macOS, for $99/year you're getting a nice green pass from Gatekeeper.

For $400/year on Windows you're getting nothing. I was told here that I could go lower, but this is what I was paying for the last decade.


> I will definitely pull this thread out next time someone complain that Apple is too expensive and that they are milking the poor developers...

Other companies also milking their developers does not invalidate this argument.


They're both too expensive?

(The only acceptable price is $0, IMO.)


You can deliver unsigned Windows apps.

You can deliver unsigned Mac apps too, just not on the MAS.

Running an unsigned app requires a separate option that's reasonably well-hidden, however [0], so it's difficult to avoid the signing requirement if you're going to distribute your app to a wide audience.

[0] https://www.macworld.com/article/3140183/how-to-install-an-a...


They are blocked by default on two of the last three versions of macOS. You'll have to explain to your users that they should right-click on the app icon and click Open to launch your app.

You can buy a Comodo Code Sign cert for $95 if you buy from a reseller rather than direct.

According to juliusmusseau's comment, you can even get it down to $61/year. (I've used KSoftware for signing certs before and would recommend them too.)

https://news.ycombinator.com/item?id=19330504


At least Windows lets you install pretty much whatever you want. None of this nonsense is mandatory.

Not in my experience. Many Windows users have anti-virus software installed, and it flags unsigned software. I'm speaking from experience: it is cheaper to buy a pricey certificate than to lose big chunks of customers to Windows Defender warnings. This is especially a problem if you have software with a rapid development cycle, like I do: we push new builds every couple of days.

Also, Windows' "trust system" seemingly applies to unsigned binaries too. Unless you disable a bunch of security settings, you get blocked, similar to self-signed certificates on HTTPS. You can get around it by clicking a tiny button, and if enough users do that, the software gets flagged as OK.

A certificate means this only has to happen once per certificate, not once per binary.


Shhh! Don't give them ideas! :)

You can get a certificate far cheaper than that - K-Software offer them for $85/year.

I've used them for years and can recommend them.


K-Software does not sell EV certificates for $85/yr. They start at $349/yr.

The parent comment's issue is that EV certificates are essentially required due to the poorly-designed SmartScreen reputation filter. The $85/yr certificate you're mentioning doesn't help solve this.


I didn't know about the EV workaround and the OP didn't mention it.

I've never used an EV cert before for code signing. When we first started, I think Smartscreen was a nuisance for about 2 weeks, but years on, and I've never had to think about it again. Even when we've renewed the cert.


Thanks for taking the time to describe your experience.

When you renewed the cert did you use the same key pair? (I'm wondering how does Microsoft correlate reputation).


No, it was a new key pair each time. I'm also rather interested to know how it decides reputation though!


Probably based on the Subject name. It'd cost a lot of money to verify that though (e.g. buying a cert from another CA with the same name).


IME Windows SmartScreen still gives a scare-warning for software that's signed with a valid certificate, unless some magic "reputation-threshold" is reached and who-knows what factors into this.

The current code-signing-certificate model is pointless, regardless of price.


I didn't realise that. We released desktop software several years ago, and customers/trialers did report that Smartscreen was flagging it for a couple of weeks. Never had an issue since then though.

Having said that, Smartscreen is opaque, and a nuisance.


I recently got an $85 certificate from K-Software, which is actually Comodo, now Sectigo. It was a nightmare: it took two months and fifty emails.

> Also, renewing certificate is not a thing.

Oh, no. We just kept renewing our EV certs with them for the past several years... if only we'd known that we can't. Damn. Such an amateur shop, this Digicert. Unacceptable.


You are getting a new EV certificate every time.

I created a huge rant on code signing certificates here:

https://www.youtube.com/watch?v=mwuk0E-tfeg

It's a nightmare. Complete scam.

I needed this for Polar: https://getpolarized.io/

Mind you... it's Open Source but I still want my users to be able to download it without warnings.

No joke - it took me 2 weeks to get the CSC with about 4 hours per day working on just this CSC issue.

It's just a labyrinth of insanity from not having a listing on D&B to them insisting I pay $2k to expedite it.

I still don't have one from Apple because it requires a D&B number so I had to get a personal cert from them.

I went with a cheap one for Windows BUT it gives errors on install for like the first 1k downloads until Windows says it's legit.

It's a complete scam.

BTW.. if you get in the MS App Store you don't have to worry about a CSC so that's good I guess.


For those that don't know, D&B stands for Dun & Bradstreet (https://www.dnb.com/). They have this concept of a D-U-N-S Number which basically means information about your business is in their database.

Last I checked expedited D&B was around $40 USD (10 business days) and same-day D&B around $500 USD.

Free D&B said it would take 30 business days, but it actually only took them 5 business days when I applied for it.


Apple has a tool where you can lookup your DUNS number directly for free.

https://developer.apple.com/support/D-U-N-S/


If you sell software to the government, having a DUNS number is actually a requirement, too. At least to get listed on SAM.gov. You also need a CAGE number. I don't remember it taking me too long to get a DUNS number - and you can definitely avoid paying them any money.

There is also the issue that an EV cert has to live on a USB dongle and be "logged into" with some utility before being available for signing. Logging out the current user or even allowing the screen to sleep will lock the cert again. So, for automating signed builds, the only option is to leave the machine logged in and unlocked at all times, clearly obviating much of the "security" gained from all those restrictions.

I solved this fairly easily for a startup I worked at a few years ago that used a Digicert EV signing cert.

First, we ran Windows under Parallels on a Mac Mini (we needed the build machine to handle Mac builds as well). I think I set the Windows VM to never lock the screen or sign you out, but the physical Mac would lock its own screen as usual. You could set things up the same way with a VMware VM running on a Windows host.

Then the only problem to solve was how to type in the signing password every time the certificate utility popped up its password dialog.

Like so many things on Windows, it was AutoHotkey to the rescue! I wrote a little 5-10 line AutoHotkey script to watch for the password dialog opening, type in the password and hit Enter.

Bingo, we had fully automated EV code signing for our Windows builds.

You mentioned that logging out or letting the screen go to sleep would lock the cert. I don't quite remember it that way, but I could be remembering wrong. It seemed that the certificate utility simply wanted a password typed into its dialog for every code signing.

So it may be that this AutoHotkey setup would also work with the cert utility running on a physical Windows machine with normal screen locking. In any case, it definitely worked great in a VM.

Of course this meant that we had to store the signing password in plain text inside the VM, but that was a lesser evil than requiring someone to babysit the machine whenever we pushed a build.


Not sure how it managed to take you so long, but I do agree it's a PITA, and pure theatre.

I did need to get into D&B, and it was a bit of a faff - their website is a maze, and it took around a week after filling the form to get listed. Didn't need much time on it though.

One of the other requirements I had to fulfil was having a telephone number published in a sanctioned list of websites for a callback - so I registered a Skype number, published the number, did the callback, and terminated the number. Not sure what that was meant to prove...


There might be some risk to your business if a malicious person can get that number assigned to their phone since you're no longer using it.


Not sure I see how, but it was only listed for 24 hours in any case.

I remember the good old days when people were actually trusted to do their own research before downloading a potentially dangerous exe.

Now all we have are app stores and certificate rackets. I'm looking at Google and Apple too. Shame on the industry for accepting a 30% revenue share on their services. The idea of an app store is great, but not when it excludes other legitimate ways of installing software on a device.

These practices are anticompetitive and monopolistic.

Good for Notepad++. I couldn't agree more with its sentiment.


>I remember the good old days when people were actually trusted to do their own research before downloading a potentially dangerous exe.

Is there any evidence that was ever really a thing / effective?

How could you possibly know?

There are plenty of examples of previously trustworthy software becoming untrustworthy, same with sites you download the code from.

That line reads like the absurd advice that security experts put out about "only download something you trust", ignoring that nobody has a clue how to evaluate that aside from, say, limiting themselves to FOSS and reading all the code...


> absurd advice that security experts put out about "only download something you trust"

This is mostly a meme from the overzealous FOSS and privacy crowd, not the security crowd. Professional security engineers do not, as a rule, encourage software engineers (or end users more generally) to only use open source software because "you can inspect the code for vulnerabilities."

Anyone with legitimate security expertise will understand the benefits of specialization and core competencies. Namely that despite the ideological perspective of many in the FOSS community, it is actually better to trust someone else with the security of your software. Because you most likely can't trust yourself with that task anyway.

The idea that most people can reliably identify security vulnerabilities in the software they use just because it's open source is laughable. They might find trivial low hanging fruit or obvious malicious activity, but they won't have a better picture of the overall security posture just because they can read the code.

As an obvious case in point, consider how few people identify vulnerabilities in Firefox versus how many people use Firefox. The people who write complex open source software don't even reliably find the issues in their own code.


Yeah I didn't intend to tie FOSS advice to the security comment but inadvertently my comment reads like that.

I intended it to just be a comparison of the "only download something you trust" absurd advice you get from "security" people you see on TV or something... and what the user was suggesting about the old days.

My FOSS comment was really meant to reflect the absolute rabbit hole you go down when it is suggested people can simply protect themselves. It's a never-ending flow of tasks and things you need to know; I don't think anyone can do it all...


> Professional security engineers do not, as a rule, encourage software engineers [..] to only use open source software because "you can inspect the code for vulnerabilities."

All else being equal, I'd certainly have more trust in the FOSS version. Yes, I won't audit it myself, but source we can compile ourselves is still easier to audit. As such, I'd hope more people will have eyes on it than without source access.

Same for a hash of the executable I download being generated by a reproducible build. And I prefer to download and run the same installer as everyone else rather than be offered a custom download link just for me.

None of that means I don't have to trust the project/maintainer. But it is a bit of extra safety I want in some cases, e.g. for a password manager.


> Professional security engineers do not, as a rule, encourage software engineers (or end users more generally) to...

And many 'professional security engineers' ignore core security (e.g. auditing protocols, connections, user access, etc) to push AV software, 2fa tokens, NIDS, version/patchlevel compliance infrastructure, and other 'security tools', because this is easier and comes with 'vendor support'.

I don't argue that these tools are a 'meme' however simply because there are a few people that don't understand the whole picture.

I could just as easily argue:

"The idea that most people can reliably identify security vulnerabilities in the systems they use just because it's protected by vulnerability scanning tools is laughable. They might find trivial low hanging fruit or obvious malicious activity, but they won't have a better picture of the overall security posture just because they can read the audit report."


>>FOSS and reading all the code...

Don't forget, you have to compile from source as well. I'm thinking the parent you replied to forgot how awful sourceforge was, and even trustworthy projects could have garbage bundled in.


And compile your compiler, and ...

(In reference to the classic paper: https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...)


Also consider the layers of hardware with layers of firmware... I should probably learn something about that too...

I'm thinking along the lines of Google's recent blog post on Meltdown and Spectre, which basically said that if two things are on the same processor, there's nothing you can do, security-wise, to be sure anymore.


That's why there's been so much work on reproducible builds.

And also trust yourself to have a better idea than the devs do about security. Most flaws are accidents and accidents can be hard to catch.


Every good dev knows to stay the hell away from sourceforge!


Logan Abbott[0] bought SourceForge back in 2016 and has been cleaning it up ever since[1]:

> Hi, president of SourceForge here. Glad this is trending, albeit a few months later. These articles seem to trend on HN every few months, with many people not realizing SourceForge changed ownership in 2016 and that the new team's been working hard on improving. To be clear, we had nothing to do with the bundled adware decisions of 2015, and when we took over in 2016, the first thing we did was remove the bundled adware, as well as institute malware scans for every project on the site. We're working hard to restore trust, so if we win some of you back that would be cool. However, we're just focused on doing right by our million daily users.

[0] https://news.ycombinator.com/user?id=loganabbott

[1] https://news.ycombinator.com/item?id=17592523


Only if that's part of your definition of a good dev. I know plenty of good devs who downloaded software from Sourceforge back when it was big.

Let me guess: you also dislike GitHub because it's closed, and wish people would distribute software from their own, self-hosted git repositories?


Devs get better by learning from mistakes.

Sourceforge is a hostile source of malware : https://mail.gnome.org/archives/gimp-developer-list/2015-May...

You mock those who desire freedom at your own risk. Github is microsoft now, and supporting it feeds the beast.


was, not is. Sourceforge has changed hands since then.


Yeah my understanding is after it changed hands they did away with all the nasty stuff that happened in the past.


That was true back in 2015. New owners have been cleaning it up since 2016:

Under new management, SourceForge moves to put badness in past https://arstechnica.com/information-technology/2016/06/under...


> You mock those who desire freedom at your own risk. Github is microsoft now, and supporting it feeds the beast.

Oh no, it's the 90s and Micro$oft is evil! They're going to embrace, extend, extinguish all my open source software! I won't be free to develop software anymore!

I'm quite happy with Microsoft, and if supporting GitHub "feeds the beast", then I'm likewise happy to see that beast well-fed.


The existence of the whole industrial and home computer industry kinda proves that the model was effective and worked. Yes, there were vulnerabilities and viruses, but that did not stop the computer industry from growing and proving its worth.


This is true, but "worked" doesn't mean it was secure by whatever standard we consider good today.

That might be because we (at least on sites like this) are being overzealous about security. Or it might mean that in the modern environment, security is more of a concern than in the halcyon days of MS-DOS.


Right on.

The security-paranoid experts and FOSS zealots have always thought, for some inexplicable reason, that if you can download the source and build the program yourself, then it's safe.


It doesn't make it inherently safe, but if you are attempting to prove your builds are safe then it is impossible for anyone else to verify that without the source. See the thread on Debian reproducible builds from earlier this week for more discussion on this topic: https://news.ycombinator.com/item?id=19310638

Code signing is something you can do on both open-source or closed-source, but it doesn't prove anything other than that a particular build was made by a certain person.


"but it doesn't prove anything other than that a particular build was made by a certain person."

But that's what trust actually is. This IRL person or identity, that I trust, vouches for the non-maliciousness of this application.


Except the core problem is key distribution, because anyone can have a key, paid or free, and you don't know the source. It says it is from Globe Software and it matches the provided key. It doesn't tell you whether they really are Globe Software, let alone whether they are a trustworthy company in the first place.

> if you can download a source and build the program yourself, then it's safe.

I think only a loud (very small) minority think that. The rest of us know that's silly, and bringing it up to prove some point against "FOSS zealots" is also silly. FOSS does allow for independent code reviews (which do happen on some projects), but that's not the only reason FOSS > proprietary crap.


> but that's not the only reason FOSS > proprietary crap.

Your use of "crap" to describe proprietary software betrays your bias. It's dangerous to be emotional when we're talking about security; it's important to remain objective and data-driven.

Not all proprietary software is crap and not all open source is safe. It's not uncommon that companies whose very livelihood depends on their code being secure invest much more time and money auditing it, while open source is often a lot more lenient, because there's no real accountability nor negative consequence for shipping insecure software.


Eh, it's crap because you have to trust the company pushing it out; it's 100% impossible (by definition) to verify the source yourself, at least not legally. You could trust an open source developer, but you could also have a look at the code yourself, or pay for an independent security review of it if your business depends on it.

If your compiler correctly enforces security properties (and many languages do attempt to do this, at least in well-defined "safe" subsets), then of course any source code compiled with it is going to be safe. It's the difference between the JavaScript/WASM JIT-powered sandbox on the one hand, and the ActiveX code-signing-based model on the other. Which is safer?


What? A compiler can't tell if a given program is "safe" any more than it can tell if the program will halt.


A compiler can tell if some program will halt, for many reasonable cases. For example, if the program comes with a proof that places it in some well-defined computational complexity class, a compiler will be able to tell that it halts. And even if a program cannot be said to halt in general, it can still be said to halt conditional on some "unsafe" assertions being true. Similarly, we can verify safety properties for many useful programs, and we can even use "unsafe" assertions to limit the extent of uncertainty about these properties that we cannot directly verify.


I can visit a website and my CPU can get hammered to mine cryptocurrency for someone else without my consent.

I'm not sure either is that much better at this point!


The problem is that the software itself is not trustworthy, even if signed, certified and delivered with perfect integrity.

Any scum can pay money to get a certificate; you don't have to pass an ethics examination.

It is just a racket.


Of course but all of the above mentioned can still happen and does happen today with all security measures in place. With the added benefit that we must ask permission of a private entity to release an app and we must pay 30% of our profits to the gatekeepers /s

I think I like the old way better.


You don't have to do anything you don't want to. No one is forcing you to develop for those platforms.

Your old way still exists. Approximately all real world users demonstrably prefer the new way, but if you're set on your old way you can still write software for macOS and Windows. It will pop a warning, but you can make it run anyway. Give your users instructions for bypassing those warnings. If you trust them to manually verify your software then you trust them to follow those directions.


> Give your users instructions for bypassing those warnings. If you trust them to manually verify your software then you trust them to follow those directions.

If you're selling software, this probably isn't a good strategy. However, I consider it perfectly reasonable for software that's being created in one's spare time and offered for free.

(I think the current situation we have on Windows / macOS is a perfectly fine balance, btw—except for drivers, where installing unsigned copies is way too difficult)


I don't think I understand what the difference is with the "old way" if as you describe it ... there seems to be no difference. You don't have to pay some cert provider under the old or current way.


You do if you want your app not to be flagged as spam. Users are perceptive. They notice when windows brings up a big notice telling them the app is not to be trusted.

Perception is everything in those first few moments a user gets his hands on your software. But now, if I don't pay the entrance fee, I potentially lose said user because they have been scared off.

It's not an app store yet. Give it 5 or 10 years, when we no longer have an open web and all we have are walled gardens.


Back in the day, you used to evaluate trust by making smarter decisions about how you went about installing things.

You downloaded it to an isolated environment and ran it and proved it didn't cause unexpected side effects. It was run behind a firewall that could log internet communication. If it proved to be good, you ran it in your main environment. If it proved to be bad you warned everyone who would listen to you.

I feel like the more "social" the web appears to be becoming, the less social it actually is. We had tight little BBS and IRC communities where this stuff was all discussed. Social meant we actually had meaningful conversations... okay, not always meaningful, but it was often about the pursuit of something useful.

It's nigh on impossible with the proliferation of the internet to take this approach these days. Things just didn't scale that well. This is why things like DD-WRT and other hacker sites while being more accessible are still very much a niche market.

It's funny how much more technical you had to be back then just to get online. It's as if we're driving the automatic version of the internet now. Back then we had to cobble all the pieces together, and even when you had, it was like driving with a manual gearshift.


”and proved it didn't cause unexpected side effects.”

and convinced yourself it didn't cause unexpected side effects.

Put a one month timer in your malware, and this would get past many of such attempts to ‘prove’ the software isn’t evil.

Add a few countermeasures against clock shifts (e.g. only be active a few minutes each month or only activate when a) enough time and b) enough user interactions have passed from the first run), and you’ll effectively get past most, if not all, of such black-box testing exercises.


> and convinced yourself it didn't cause unexpected side effects.

To be fair, given virus proliferation today, we were still more successful with those rudimentary precautions back then than seems to be the case now.


> Is there any evidence that was ever really a thing

Are you asking if software existed before code signing? Yes, it did.


HN has a guideline that you should respond to the best interpretation of a comment, not the least favorable one.

IMO he was clearly referring to "thing" as in "an effective thing" for preventing viruses, as opposed to asking whether people downloaded software before code signing.


> Is there any evidence that was ever really a thing / effective?

Of course not, the person who wrote that has absolutely no concept of the modern Internet user, and probably thinks themselves materially better than them.


"the good old days", as in, "the days when the average person had no freaking clue if something was safe, but installed it anyway because it's completely unreasonable to expect them to do otherwise"?

If your security model is "do your research" then you're going to fail.


Just because someone certifies that it wasn't tampered with and comes from who it says it does doesn't mean it is safe.


A novel idea. Apple's and Google's own apps have been tampered with. Indeed, there are compromised apps available for download right now on both stores.

Perhaps it's 'safer' but now we all ask permission and pay a huge tax to release software.

Is this a better system?

The web is alive but under attack. What freedoms will devs have left?

We will either pay comcast or we will pay google and apple. Either way the entrance fee for development just became a lot higher.


Perfection isn't required. None of the systems we design are perfect. What's relevant is whether or not it's better for the users. In a direct way, for the average user, I don't think you can reasonably argue that it's not. There are of course other issues with walled gardens, but security isn't one of them.

Reminds me of the saying, “I would RTFM if there was a FM to R”


When were those days that people disassembled every executable and went through the assembly to make sure that the code was trustworthy?

I’ve been using personal computers for over 30 years and I don’t remember those days.


I am absolutely confident that the overwhelming majority of people have never evaluated the safety of an exe in their life and could not tell you how to do it even if they wanted to. Things that work for security-aware software engineers don't work for billions of people.


When were those days?

I remember that 99/100 users didn’t think before installing a free screensaver and got hit with some kind of malware.

While I disagree with how centralized this practice is I’m applauding that an effort is being made to keep malware off consumer devices.

What we need is a decentralized and self-policing ecosystem. A Wikipedia of validation.

Something that's truly monumental to pull together.

I understand why so many vendors have centralized their efforts.


Centralized their efforts at our expense. They work for their shareholders.

Buy shares. Become a shareholder.

These are public entities.


"The good old days" are quite often just the ignorant old days where it was much harder to be made aware that the way you experience the world isn't how everyone else does.


The days of custom software are over in general. Every major niche has at least one major player that offers its services for free, selling your data (or ads, etc) in exchange. I'm not sure Sublime Text gets any money anymore now that VS Code does all the same things and more. Custom music players have been dead since Winamp 3. The 1990s and 2000s were like the Wild West for software, now it's like California, fully settled and complete with monopolies.


This is the problem, as you've highlighted.

We must now contend with these power brokers and ask their permission for every install.


which days? when people were cheerfully downloading and running keygen executables to crack their Photoshop trials?


> I remember the good old days when people were actually trusted to do their own research before downloading a potentially dangerous exe.

With attacks getting more and more sophisticated, just downloading from the legitimate site no longer guarantees integrity. Sometimes not even having the exe signed does, given supply chain attacks where code is injected long before signing.

Perhaps posting the hash on a public website (a tweet, for example) 24h before making the installer available would also mitigate the code signing issue. You could use it as an authoritative source that's less likely to be breached or modified after posting without immediately raising flags. And if it was an unauthorized post, the 24h also gives the author enough time to flag it. Supply chain attacks are not considered here.
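The user-side check in this scheme is just a hash comparison; everything hinges on the reference hash travelling over a channel the attacker can't also edit. A minimal sketch in Python (the file path and published hash are placeholders, not anything an actual project publishes):

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=1 << 20):
    """Hash a downloaded installer in chunks, so large files aren't read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path, published_hex):
    """Compare against the hash published out-of-band (e.g. the tweet posted 24h earlier)."""
    # compare_digest avoids timing differences; overkill here, but a good habit.
    return hmac.compare_digest(sha256_of(path), published_hex.lower())
```

This is the same check `sha256sum -c` performs; the point of the 24h-early tweet is only to make the reference copy of the hash harder to tamper with alongside the download.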


I will always put more trust in downloads from web sites that have an owner than in apps coming from an app store. It is very easy to put my trust in an app when I see the amount of work that has been put into setting up a site (documentation, history of development, etc.).

The app stores do not offer any kind of information that would help to put my trust in the developer and application.

Certificates are a fallacy that tell nothing about the security of the app. They just tell you that the app was signed by someone who has a certificate.

Also, why do all security experts want to shove their security agenda onto the world? If you are dealing with a life-or-death matter, then it is probably better not to expose your world to the wild. For everything else, it is OK to live without the safety net; it is OK if someone loses some money or your mum's disk gets encrypted. People fall every day, and I sincerely hope that there will not be some police force making you wear a helmet as soon as you get out of bed.


> I will always put more trust to downloads form web sites that have an owner, than apps coming from an app store.

The article is about signing the Windows binary that you download from the developer's site so when the UAC dialog pops up on installation it shows a valid publisher. Nothing to do with an app store.

> The app stores

Again, different topic.

> Certifactes are a fallacy that tell nothing about the security of the app.

Actually they do. It's a 0 effort way for the user to tell that the installer was not modified between compilation and installation. How do you tell that the developer's site wasn't hacked and the installer doesn't have some bundled malware?

> Also why do all security experts want to shove their security agenda to the world ?

Because that 1 million computer botnet attacking your site may just be made up of a lot of people who don't know what certificates are for, how to check if a download is legit, etc.


Yes, simple hashes that are widely disseminated solve the problem from a practical point of view. Someone tampering with a binary executable or installer cannot alter all of the copies of the hash.

A blockchain could be used for that. When you publish something, take its hash, and then add it to a public ledger.
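A full blockchain is overkill for the sketch, but the core idea, an append-only log where each entry's digest commits to everything before it, fits in a few lines. This is purely illustrative; the class and field names are invented:

```python
import hashlib
import json

def _entry_digest(prev_digest, record):
    """Each entry's digest covers the previous digest, so history can't be rewritten silently."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

class HashLedger:
    """Append-only ledger of (release name, artifact hash) records."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (record, digest) pairs

    def publish(self, release, artifact_sha256):
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        record = {"release": release, "sha256": artifact_sha256}
        self.entries.append((record, _entry_digest(prev, record)))

    def verify_chain(self):
        """Recompute every digest; tampering with an earlier record breaks all later ones."""
        prev = self.GENESIS
        for record, digest in self.entries:
            if _entry_digest(prev, record) != digest:
                return False
            prev = digest
        return True
```

The chaining is what makes dissemination work: anyone holding a recent digest can detect edits to any earlier published hash, which a plain list of hashes on a website cannot offer.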


The advantage of a certificate is that it's 0 effort for the regular user. Every additional layer of "work" someone has to do to check that integrity just lowers the efficiency.

I suggested the disseminated hash method because it would work but most users won't bother checking it. Add blockchain in that and you've lost them completely. Unless you have a 1-click way of checking, something built into the OS ideally, it will only be used by the more tech savvy users.


> most users won't bother checking it

That's also zero effort, though.

I don't care if users aren't checking; I published the hash, so I'm covered. I am not liable for the behavior of random materials, even if they happen to be tampered versions of something I produced.

Even if you make a signed and certified installer, someone can turn it into a malicious unsigned one and people will install anyway. They will click through the UAC and that's that.

There is no benefit in the signing when the genuine program is being installed; there is no attack going on that the signing is protecting against. It's supposed to stop a counterfeit program.

If the goal is zero effort on the part of the user, then the scheme is doomed. The zero-effort user takes no interest in signing; he or she doesn't wonder "how come this dialog is coming up", they just click through it.

Someone who counterfeits programs and adds malware can even go one better: they can wrap the program in their own installer which has a valid certificate of their own.

The whole thing is a racket. You can't trust an artifact just because it was signed by someone whose only virtue was that they forked out $$$ for a certificate. Everyone has money, from heinous scoundrel to sparkling saint.


Here's an example of how the signature can help a user [0]. If fewer people fall for it that's still a win.

Your argument is that if a mechanism isn't perfect we shouldn't be using it at all. Well, seatbelts and airbags don't guarantee anything either, but they still save people. Security is additive: no layer is perfect, but put enough of them together and you can be relatively safe.

[0] https://arstechnica.com/information-technology/2015/05/sourc...


Seatbelts and airbags aren't defenses against a clever adversary that is trying to kill you, and who has the means to replace your real seatbelts and airbags with phony ones.

No, but they will protect you from an adversary that doesn't have those capabilities and will go the more low-tech way of ramming you.

No IT system is perfectly secure. None. They can and will be hacked. The point is to raise the bar for that hack. You lock your house, your phone, put passwords on account, etc. even if they can all be broken into. Why do you do that?

You still haven’t mentioned the method you find acceptable. Even what I suggested originally (distributed hashes) can be bypassed. A determined attacker will impersonate the author, will take over the blockchain, will replace enough instances of the hash in the wild to make it look legit, etc. so you complicated everything without guaranteeing anything.

If you indeed support the idea that the cert/signature system is not perfect so it’s useless, I’m sure you can mention one that offers guarantees. You’re arguing for the sake of arguing but have brought no argument you can support. Have at it.


Those days never existed. Which is why certs, app stores, and walled gardens exist.


Pretty soon that's all that will exist.

Problem is that people suck. They don't blame themselves for doing something stupid. They blame Microsoft. Personal responsibility? Ha!

Also code signing doesn't automatically lead to the "blue-trusted UAC popup" anymore anyway.


You're conflating the UAC prompt with the smartscreen popup. see: https://news.ycombinator.com/item?id=19330901

> I remember the good old days when people were actually trusted to do their own research before downloading a potentially dangerous exe

This.

The sad thing is that this is still as important to do as it ever was -- but fewer people do it.


> I remember the good old days when people were actually trusted to do their own research before downloading a potentially dangerous exe.

The Linux kernel, as of today, has ~20 million lines of code, and hundreds of vulnerabilities.

Since I presume that you are a user of Linux: what kind of research do you personally do to identify and protect yourself from the potential dangers of running your operating system? Have you read through all of those 20 million LOC and understood them in context? Do you compile your kernel from scratch? Do you completely and fully understand the machine code that your C compiler will transform them into?


Are you kidding me? Have you never done family tech support in the last 15 years?...


Ah the good old days when nobody except the experts cared to buy, never mind install, third party software.


We made it work and didn't have to ask IBM's permission to install software on its devices.

yes, but that "we" was a very small number of people. Pretty much the same number of people who could do it today.

What really pisses me off is code signing for drivers. To install an unsigned driver in 64-bit Windows 10, you need to reboot your computer into a special menu that can only be navigated with a USB keyboard (which I have to lug out of the closet, since I normally use Bluetooth). That in itself wouldn't be so bad, except the setting persists only until the next reboot!†

This is all in stark contrast to macOS's System Integrity Protection, which I can turn off once to never be bothered again.

I understand why Microsoft would enforce higher standards on drivers which can touch the kernel. But, the same fundamental problem applies: it isn't reasonable for non-profit, open source developers—many of whom I consider perfectly trustworthy—to pay hundreds of dollars for a certificate! Let me make the final decision about who I trust. It's my machine—I even built it myself!

The primary place I run into this problem is with drivers to support weird video game controllers.

---

† You can enable a "testsigning" mode via the command line which persists across reboots, but this only seems to work for certain drivers. If anyone can explain why it sometimes works, I'd appreciate it, as my research has never turned up anything.


I've been slowly improving my open source Windows chess program Tarrasch http://triplehappy.com for nearly 10 years. One of my improvement plans has been to put on my big boy pants, and spend the money and time needed to sign the program. I thought it was a big part of the program graduating and becoming a serious software citizen. After reading the comments here I am reconsidering and might save myself the pain. Thanks Hacker News!


I'm going through a "renewal" right now... The archaic maze of validation is also getting on my nerves. It's been three weeks now that I'm waiting for a phone call to validate my phone number. This article is making it so tempting to cancel my order.

The plethora of support emails is what motivated me to get one in the first place. I used to get accused of giving users a "virus" and getting into infinite loops on why they should trust me. I'm sure I was wasting more than $100/year of my time responding to these emails, so I just gave in and got one.

Now, I don't know what to do.


Is it k-software/Comodo/Sectigo by any chance?

Yes, Sectigo via The SSL Store.

Where do I sign for a petition to have a free CA like LetsEncrypt for Code Signing?


LetsEncrypt is a hack to get HTTP encryption working without shelling out money for meaningless identity "verification". Code signing has nothing to do with encryption, so having analogous CA for code would be entirely meaningless.

What does code signing in Windows actually verify? That executable's author at some point paid money to some company that Microsoft deemed an "authority"?

It's a rotten system. The whole CA pyramid is bullshit.

What we really need is a way to know that executable notepad++2.0 is signed by the same person who signed notepad++1.0 already installed on your computer, and that it's the same person who controls notepad-plus-plus.org, and that this identity has existed for well over 10 years. This is legitimately useful info that would allow people to make more informed decisions about what to install.

BTW, the part about historic record seems like one of the few good uses for blockchain technology.
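A minimal sketch of that continuity check, trust-on-first-use style (the way SSH pins host keys). All names here are illustrative, not any existing tool's API:

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    # The identity is the key itself, not a CA-vouched name.
    return hashlib.sha256(pubkey).hexdigest()

class KeyPinStore:
    """Trust-on-first-use: remember which key signed each app's
    first-seen release, and flag any later key change."""
    def __init__(self):
        self.pins = {}  # app name -> pinned key fingerprint

    def check(self, app: str, pubkey: bytes) -> bool:
        fp = fingerprint(pubkey)
        pinned = self.pins.setdefault(app, fp)  # pin on first sight
        return pinned == fp

store = KeyPinStore()
store.check("notepad++", b"key-v1")        # first install: key gets pinned
ok = store.check("notepad++", b"key-v1")   # 2.0 signed by the same key
bad = store.check("notepad++", b"other")   # key changed: warn the user
```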


> LetsEncrypt is a hack to get HTTP encryption working without shelling out money for meaningless identity "verification".

Have you used Let's Encrypt? It verifies that you own the domain in question. HTTPS requires that the server you're connecting to has been identified.


Verifying domain control =/= verifying identity.


Operator != owner. I can break into your car, doesn't mean I can take it to the local DMV and get a new title with my name on it purely because I showed up in it.

No, it verifies that the certificate was issued to someone whose ID was checked. Money is paid to cover the bureaucratic costs, keep the records, etc.


Problem is, this assumes that all CAs and their resellers do that verification properly.

https://security.googleblog.com/2015/03/maintaining-digital-...

https://arstechnica.com/information-technology/2017/11/evasi...

"The third key weakness in the code-signing ecosystem was the failure of certificate authorities to verify the identities of people applying for code-signing certificates. Twenty-seven certificates in the group of 111 misappropriated certificates that the researchers identified fell into this class. Twenty-two of the certificates were improperly issued as a result of identity theft of a legitimate company. In some cases, malicious actors impersonated legitimate companies, in some cases ones that had no involvement at all in publishing software. In the remaining five cases, the certificates were issued to fraudulent shell companies. "


Code Signing is more akin to EV than domain verified.

They're checking organisational or individual identity, which is a work intensive process (e.g. "email me your driver's license, business license, and tax return so I can manually review them.")

It might be possible for a charity to run a FOSS code signing CA, but it is unclear who's paying for that since it needs actual staff.


I asked about this before, a while back. I seem to recall they said they won't do it, because the verification required is completely different - they'd need to verify your organisation, rather than a domain.


Absolutely agree, though the "reputation" racket remains. The problem with EV (and the reason why Let's Encrypt doesn't provide them for SSL) is that such certificates must be tied to a legal entity.


You don't get to sign a petition for someone else to work for free


Maybe OP is willing to help. And in any case, it's still useful. What if he's trying to run a FOSS project and doesn't have $500 for a cert? I would say that's different than someone walking around saying "I want".


Who said anything about people working for free? That's not how the example (Let's Encrypt) works so why are you bringing it up now?

Why not use something like certum[1]? It's $69/year (cheaper if you already have a smartcard), but the CN ends up with something like "Open source developer, [full name]". It's not "notepad++" like the author wants, but it's still better than nothing.

[1] https://en.sklep.certum.pl/data-safety/code-signing-certific...

edit: updated price


"It's $828 per year" for ... a cert? What makes code signing this expensive?


Greed, mostly.

Digicert lists EV code signing certs as $664/yr. But if you are to enter their site through a side door or just plainly cry into the support's jacket, then the price magically drops to $104/yr. And that's for an EV cert! So the only reason there are $600 certs is that there are people who do pay that.


Yeah, I bought an EV cert from DigiCert for ~$100/year. People pay >$600/year?? What? I never even saw those prices! Not sure how I landed on DigiCert, but I think it was a Microsoft article listing where to get EV certs.


And where is this side door?

It's an obscure product with few providers


Is code signing part of what you get from Apple for your $99 developer fee? If so, then that suddenly feels like a bargain.


Yes, that fee includes the ability to sign iOS apps for local distribution and submit them to the App Store, where apple will sign them. On macOS, it also allows code signing for general distribution like Windows has, as well as App Store submissions.

I think the issue most people have with the price is that on iOS there's way more limits on running locally without a cert (7 days max), but for someone who is a practicing developer it's really a pretty low cost. If only it came with a mac to develop on...


>but for someone who is a practicing developer it's really a pretty low cost.

Hah! I don't know about you but when I first started app development I didn't even have a credit card, let alone $99 on it to spend. Thankfully I could install my apps on android for free.


It's also a part of Microsoft Store development fees (similarly $99 last I checked), if you release through the Store all code signing is handled through the Store.

It's businesses' customers who find more than $828/yr of value collectively in seeing a certain icon.


To attempt an answer of your question "why not", I would say perhaps it's not worth $70 a month for the UAC popup to be blue instead of orange


It's 25€ yearly. (The store page states that it requires a Certum smartcard, but any card supported by Windows works, including Windows virtual smartcards.)


thanks, fixed.


If you are wiling to sponsor it, contact the author.


It's nice they support FOSS, but both the CA process and Microsoft's tooling for this are stuck in the year 1999:

• The registration process is painfully manual (including e-mailing scanned documents). It's like an "Enterprise" CA from before Let's Encrypt.

• The website wouldn't send the final cert to any browser other than Internet Explorer.

• Microsoft's signing tools are hot garbage. All options default to "subtly wrong". To get a working signature you need half a dozen flags in an exact order, different from what the official documentation uses.

• Microsoft's docs are either: a) plentiful but only tangentially related vague introduction, or b) scraps of incomplete technical information, mainly for Windows XP only. It's as if they've tried to improve it many times, but every time gave up after rewriting the first chapter.

• Signing can't be automated or used remotely, because its weirdo software wants a PIN entered from the local keyboard.



If you read further down (https://twitter.com/Notepad_plus/status/1098553736656572416), you find out it's because certum didn't allow them to use "notepad++" as the CN (probably because it's not a valid legal entity).


In case anyone reads this far down:

https://docs.microsoft.com/en-us/windows-hardware/drivers/da...

Follow the steps under “Buy a DigiCert EV code signing certificate“.

You’re welcome. ;)


I want to personally thank you for this. I cancelled my order with Comodo/SSLStore and followed your suggestion. EV certificate is already in the mail :)

Interesting that they will check the hashes of dependencies at runtime. But then I start to wonder - why dynamic linking if the library can't be replaced?


Another major benefit of dynamic linking is that multiple processes can share the same dylib in memory -- for things like Apple's AppKit.framework, that can add up to significant savings when multiplied across all of the open apps.

https://www.quora.com/What-is-the-difference-between-static-...


I believe recent Windows and other recent OSs can hash the contents of pages and de-dupe them, which could have some benefits for the same library linked statically across multiple binaries, assuming the identical code ends up page aligned in the same way. This strategy also has benefits for running VMs with the same OS loaded multiple times.

See: https://en.wikipedia.org/wiki/Kernel_same-page_merging
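A toy model of the idea (the real KSM hashes pages and then compares the bytes to rule out hash collisions; this sketch just keys on content directly):

```python
def merge_pages(pages):
    """Toy same-page merging: map every logical page to a single
    shared physical copy, keyed by its content."""
    shared = {}   # page content -> the one shared copy
    merged = []
    for page in pages:
        merged.append(shared.setdefault(page, page))
    return merged, len(shared)

# Three logical 4 KiB pages, two of them identical:
pages = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
merged, physical = merge_pages(pages)
# Three logical pages now back onto two physical copies.
```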


That requires all of the apps to be built with the same exact version of the library.


So does hashing the DLLs before you load them, negating much of the purpose of dynamic linking. That was my point.

Anyway if you really want to get technical, if your sharing method is hashing identical pages, I would bet that you don't actually need the same version of the lib to get nonzero sharing. I would not be surprised if many libs have large ranges of identical pages across point releases. Then it's more a question of: was the same compiler used, did it optimize the same way.


Stability across recompiles is poor.

Then why do binary patches work so well?

The additional constraint of page boundaries will kill a lot of this. But I have literally seen before my eyes, in the form of looking at binary diffs, that library changes wind up a lot smaller than we expect. So I do think it is possible.


Why bother checking the signature of dependencies if the main executable integrity isn't being checked?

What really surprises me is that the author of something as great as Notepad++ isn't making enough money from the project to easily be able to pay for the certificate.


It's not about the price, but about name on the certificate:

> However I cannot use "Notepad++" as CN to sign because Notepad++ doesn’t exist as company or organization

CAs would put author's name as CN, which isn't great, especially for collaborative project.


Yep and sometimes the name people know isn't the name that a CA will permit in a certificate. I have one of those. I'm known as a shortened version of my middle name, say Jack Quimby, but DigiCert and others insist that the cert be issued to Alphonse Jackson Quimby, Jr.

OK, I'll just buy an LLC from a state that's cheap (never mind the paperwork), but that's no good either because the new entity has no listed phone number...


>OK I'll just buy an LLC

When I bought code signing certificate for my LLC, in their infinite wisdom CA put "Spółka z ograniczoną odpowiedzialnością" as CN, because that's what they saw on proof of ownership. "Spółka z ograniczoną odpowiedzialnością" literally means "Limited liability company" in Polish.


But they "spend hundreds" on checking the validity of the requester.


I know that must have been a pain in the neck for you, but that's hilarious. Thanks for sharing!


Then I think you'll find this Poland-related story amusing too: http://news.bbc.co.uk/2/hi/uk_news/northern_ireland/7899171....


I love it.

That doesn't seem like it should be a deal breaker, especially for something security related.

Could be related to the CIA replacing common libraries with their own.

https://notepad-plus-plus.org/news/notepad-7.3.3-fix-cia-hac...


Price doesn't seem to be his primary issue.


I suppose we could all demand a refund.

Edit: downvoted? The project is GPL, not a revenue source.


The opposite would surprise me. How would Notepad++ earn any money?


Redhat is an example of a company that makes money on top of extensive open source contributions. I doubt a similar model would work for a text editor. There is a reason most open source projects like this have a corporate benefactor.


Context aware ads

