Downloading Software Safely Is Nearly Impossible (noncombatant.org)
368 points by danielsiders 1326 days ago | 260 comments

The problem is much worse than this contrived "I can't download PuTTY securely" example. Let's take an example I've had my hands in through my tech support job.

* Goal: "Download Firefox"

First, the user was using IE. And the user is not a tech-savvy user (as in, cannot read words on the screen). Turns out, the user's computer was infested with spyware and garbageware, mainly Conduit and others.

Evidently, the user "searched" for Firefox rather than following my directions to type https://www.mozilla.org into the address bar. That behavior led him here: http://firefox.en.softonic.com/

Normally, I would use a remote support tool and just do the cleaning for the user. However, this client comes from another area in which we are not allowed to use the remote support tool.

In the end, I tried to have the user uninstall the bad Firefox and install the good one, but the Softonic installer installs a ton of crap everywhere. The user got very frustrated and hung up while I was having him read out the installed-programs list in the uninstaller.

That is the danger to most users, running Windows.

EDIT: To the user who penalized my comment score: why?

If you are wondering why searching for "firefox" leads people to install malware the main reason is that most of the ads Google shows for that term, and which appear at the very top of the results, take you to sites with malware downloads.

I've mentioned it a few times on here before[1], even directly to Google employees[2] and they don't seem to give a shit at all. I've even noticed searches for Chrome sometimes show links to (what appear to be) malware sites now[3]. Maybe that will motivate them to sort this out but I'm not hopeful.

Putting my conspiracy hat on; it seems like there isn't much motivation for Google to sort out the problem because every new malware loaded PC is another potential convert to a locked down cloud based platform like ChromeOS. There isn't really much downside for them in making Windows as horrendously unsafe for non-experts to use as possible. Why Mozilla aren't screaming about it constantly is more of a mystery to me.

[1] https://news.ycombinator.com/item?id=7101939

[2] https://news.ycombinator.com/item?id=7089727

[3] http://i.imgur.com/yVIMYKO.png

Absofuckinglutely. These sites thrive solely because Google ranks them at the top. Crapware sites appear so consistently at the top of software searches that exclude the term "linux" that coincidence is many sigma less likely than intent.

Considering that what most Windows crapware installers do is mercilessly track users for the purposes of marketing and that Google's main business is tracking users mercilessly for the purpose of marketing, it is hard to see how this is some random event.

Google's definition of "best search results" is related to their bottom line, and the goal is to be as intrusive as possible without driving searches away. This is why the Google ad network laden "weather.com" is returned before the ad free and purely scientific "weather.gov" from a US search for "weather."

For real fun, search for "pdf to html" and then "pdf to html linux" to compare the degree to which Google promotes solutions that collect data on users. A Windows user would never know that a free, quick, private, and powerful alternative is just one VM away.

You are confusing search results and paid search ads. Google will happily put whoever bids the highest in the ads, but they don't force weather.com to rank above weather.gov in the organic results. That's a totally baseless conspiracy theory.

Google decides who can run ads with them and who can't. They're running malware ads, and those appear above the legitimate search results.

A lot of ordinary consumers confuse search results and paid search ads, and this is _entirely_ Google's fault. It would be very simple to make them look completely different.

Also, it's not just malware. Google also makes money by promoting scam sites that charge people for free government services.

Of course they do, unless Google is somehow not in charge of the ranking algorithm behind its own web search engine.

And the search algorithm is not the only tool in the ranking; as we know, Google has ways to penalize specific websites.

Explain how Google's corporate officers better meet their fiduciary duty to maximize shareholder value by not structuring search results to optimize revenue than they do by structuring search results to maximize returns.

This isn't a conspiracy. It's basic business and doing otherwise would be the basis for a tort.

There's no actual fiduciary duty to maximize shareholder value (though a few cases in the early 20th century said otherwise). The fiduciary duties of a public corporation are a lot more narrow, mostly involving not prejudicing some shareholders at the expense of others, or enriching the officers at the expense of the shareholders. Making unprofitable market decisions out of principle (even if misguided), however, is not a breach of fiduciary duty. For that kind of run-of-the-mill disagreement over how to run a business, the shareholders' remedies don't lie in the courts, but in their control over the board. Courts in the past few decades really aren't interested in second-guessing strategic/policy/market decisions, certainly not getting into stuff as detailed as whether Google could choose to ban malware ads, for any definition of "malware" they choose.

You're simply wrong. It is in Google's long-term financial interest to continue to provide objective, trustworthy search results. For a non-Google example of something similar, see the recent story of Apple CEO Tim Cook challenging a shareholder who challenged the impact of Apple's environmental policies on Apple's bottom line: http://gizmodo.com/apple-ceo-tim-cook-shuts-down-anti-enviro...

Oh bloody fucking hell, Cook told shareholder activists to piss off because a CEO's job is to tell shareholder activists to piss off. Rarely is it so easy as in Cook's case, where the activists were total wingnuts, had no business case (since Apple's investment in renewable energy is almost certain to pay off over the long term), and presented a massively unpopular position. They got the microphone because their pitch had home run written all over it.

In Google's case, their officers are responsible for optimizing the mix of objective search results with revenue producing search results. That optimum can be described as just good enough not to drive too many queries away while maximizing clicks to their customers. There's no legal requirement or demand from shareholders for a wall.

And indeed the very idea of tailoring search results to an individual's past browsing history is always going to push sites that share data with Google to the top of the results page.


You should read Google's corporate filings. It's clearly not a fiduciary duty of their executives to maximize shareholder value. They were very clear about that when they went public. There is also no SEC - or any other - regulation stipulating this as the prime directive for corporate officers.

The constantly repeated 'duty to maximize shareholder value' line is nothing more than a myth.

In the case of Google, the triangle of Page / Brin / Schmidt basically controls Google outright, regardless of the other shareholders, due to their voting shares. So they absolutely do not have any duty whatsoever to maximize anything. Buyer beware is basically what they stapled to the prospectus.


In fact it's no more complicated than this: if some random shareholder is upset, they can stir the pot accordingly, and the waves they make are almost always in proportion to the shares they can vote, directly or indirectly. There is no single objective definition of what maximizes shareholder value; it's an opinion that varies from one shareholder to the next as to what they think is "best" for the company.

Simple example: some shareholders might think it'd be better to slash salaries at Costco to boost the bottom line. Others believe part of the reason Costco is so successful is their employee culture.

If users lose their trust in Google (as I have, FWIW), then it loses its eyeballs to sell.

Short-term-ism in terms of maximizing ads revenue which costs long-term goodwill is a serious negative for Google.

I'd argue that in the past year or three, the company has started showing its vulnerabilities. I'm not sure who will take over from it, or how, or what that company's business model will be, but I see vulnerabilities.

I switched to DDG a while ago as my "default" search. Unfortunately, Google does reliably provide better results. They'll maintain a user base as long as that's true; I still switch to them when DDG fails.

Well, it provides better results when it doesn't rewrite your search into oblivion in an attempt to save you from typos you didn't make. Unfortunately, DDG being worse at everything else still leaves Google in the lead. But it also still leaves me very unhappy when I search.

You may be a better typist than most. For me, Google is fixing a broken search most of the time it makes changes.

Usually it's turning an uncommon domain specific word into something uselessly common (and completely unrelated) for me.

Or when I put an entire error message in quotes and it deems the result count too low to be intentional, so it deconstructs the query into a useless mashed-together search of all the words in the error. And I don't mean only when the results are zero; even then, I usually have to spend an annoying amount of time before I realize my search actually had zero results instead of the millions it claimed.

There was a time when google's cleverness was just enough to be useful, but it gets more and more clever (and frustrating) every year now.

Convincing Google that I don't want its assumption that I want a typo fixed is getting more difficult.

The dynamic Google results page means that it's really difficult to refine a search based on the presently visible results, which disappear as I update the search. I find that behavior incredibly annoying, and greatly appreciate that DDG doesn't do this.

It took me two goes to make DDG stick but since June of last year (just after the Snowden revelations started) I've been using DDG as my primary and nearly exclusive search engine.

It's much better than the first time around: more relevant, faster, and very few technical hangups.

I still fall back to Google periodically, especially for:

• Date-bounded search. DDG doesn't support this.

• Specialty searches: news, books, scholar. I've also keyed up custom searches for a bunch of sites in my browser.

• Rarely: I don't seem to find what I'm looking for on DDG. Usually first an !sp re-search, if that fails, !g. About 2 times out of 3, I still don't find what I'm looking for and return to DDG for more refinement.

RSA boosted their profits by $10 million by selling their customers out to the NSA, and that cost them some serious long-term goodwill and reputation. Were their shareholders happy with that piece of financial calculus?

Ignoring for the moment that you are wrong about them having to maximise shareholder value, and ignoring that just taking any money doesn't maximise value anyway, you are still wrong.

Google has dual-class stock. The only shareholders with any power are Larry and Sergey. That was done precisely to avoid short-term thinking (like you are proposing). Investors know this when they buy in.

Learn to write clear sentences first. Then you can add big words and concepts.

A long time ago, I started treating downvotes as a critique of my writing. My first response is always to edit the post to express my ideas more clearly, my second is to consider if the comment is doing little more than inflaming passions for no benefit to the community. In the latter case, I tend to delete the post.

This is a case where I posted in a rush. The basic idea: a ranking algorithm optimized to serve Google's interests first and those of its users second is the only possible way for Google's officers to fulfill their legal obligations to the shareholders they serve.

The key to understanding this idea of the best search ranking algorithm is that people who query Google's search engine are not Google's paying customers. Google Search's paying customers are almost exclusively advertisers.

The best search results Google can produce are those which maximize their revenue. Not enough traffic directed to ad buying customers and advertising dollars may go somewhere else. Sure too much obvious selling might drive queries elsewhere but the threshold for tolerating advertising keeps going up. So many people take tracking across sites for granted that Google can push a "weather" search onto an advertising affiliate's site and still meet the expectations of the data point making the query. There is no objective reason other than income for ranking secondary sources above the primary source, "weather.gov".

Occam's razor just cuts that way.

This is a bit simplistic, and I'm not saying that to be rude.

It's true that searchers don't pay Google money, and advertisers do. But Google is running a platform. In the past I've compared it to an information marketplace. And the goal for Google is to make the market run as efficiently as possible, otherwise they risk losing one side.

Searchers don't pay Google, but they do (presumably) pay Google's advertisers, who pay Google. If you lose the searchers, you lose the advertisers.

Now of course there is a balancing act, which you allude to in your last paragraph. But there are plenty of easy examples where Google returns no ads even though they could. A search for "how old is barack obama" just returns the number (or Wikipedia), without ads, even though I'm sure there are advertisers out there who would pay for an ad to be shown.

So obviously it's not universally true that "the best search results Google can produce are those which maximize their revenue." Perhaps adding some subtlety to your argument would help me understand exactly what you're saying.

All abstractions are simplified. That's what makes them both abstractions and useful.

For example Google Search might be considered a marketplace, but such an abstraction might lead a person to lump buyers and sellers into amorphous blobs and ignore the heterogeneity within each group. Ford and overstock.com are different sorts of advertisers and thus Google's business comes down to segmenting end users.

Plain and simple the most valuable end user segments are people who not just tolerate tracking and targeted advertising but who actually derive value from it. They are valuable not only because they click through and buy stuff but because they validate Google's claims that its business of tracking users and pushing ads and tailoring search results toward commercial interests and away from the long tail is objective.

Long tail results are not revenue generating and Google has simply removed bit by bit the end user's ability to specify them. Sure spelling correction is useful, until Google search refuses to respect quotation marks and simply renders some terms unreachable. Local search is useful, until a person wants to search across borders or outside their local language.

But not all abstractions are factually wrong, like your original premise was. Big words and complex sentence structures don't make up for that.

There's also the scandal of sites that mimic government sites and charge people to receive benefits they could get for free by going direct.

This is also a problem for search queries like, "yahoo phone support" or "apple phone support", both of which have bitten some of our customers in the past, directing them to call-in scams: "oh no ma'am, according to Microsoft your computer is infected with viruses, I need remote access to your system right now, we can clean it up for $89..."

Google's malicious advertising is the number one reason that we're able to justify installing AdBlock Plus on every client's system (we talk to them about it first) and disabling ABP's new "feature" to "allow non-intrusive advertising".

Sites that depend on ad revenue should be screaming at Google to fix this.

Please, when you see any malware, go to http://www.google.com/safebrowsing/report_badware/ and report it.

Google does care, and will take action to disable malicious advertising. For many of these sites, there is no obvious badness on either the ad or the landing page, so a manual report will help us fix malicious advertising.

I work on some small portion of Google's systems related to automatic malware scanning (albeit, not anything that would show up on the search results page), and I want to make sure that we don't direct people to malicious advertising.

Thanks! I've bookmarked this and will try it in the future.

But: as an experiment, I just turned off ABP and Google'd (heh) "yahoo support". One of the ads at the top was for http://www.aurasupport.com/email_service; at a quick glance, I see a website template from http://pixel-industry.com/website/, lots of broken English, and a domain that was registered just last Summer to a house address in a suburban development in Texas. Not exactly a smoking gun, but also probably not what somebody's looking for when they search for "Yahoo support"...

And, farther down the first page of the actual search results is http://www.yahoosupport.org/, which has a toll-free phone number in the title, 1-888-551-2881. Googling that phone number takes you down a rabbit hole of lots of dirty SEO (e.g. https://www.youtube.com/watch?v=AEO5-2RpYvo), no actual customer reviews anywhere, and offers for support for lots of services -- including, uhm, Gmail (http://www.password-recovery.us/contact-us, look at the page title).

So, I'll be happy to use the link you gave, but this seems to be a fairly serious problem, and I'm a little surprised that Google doesn't have a better handle on this.

It should be reasonably simple to add a "report malware" button next to banners when the search includes the word "download". It would be less simple to review flags every malware provider would make on every competitor, but the beauty is, you wouldn't need to - when someone asks for "download firefox" I'm pretty sure all banners advertising Firefox will include some form of malware.

I actually did that once, for some badware on "download minecraft".

All but one of the badware ads I reported disappeared, within 20 minutes too! (Why one remained, I have no idea.)

There's caring and there's caring.

Is it something you can easily and obviously do, in place, when you find links to spam & malware sites?

Or do you have to know a special URL?

I wonder why Mozilla doesn't go after these sites for trademark infringement. Mozilla made Debian change the name of their Firefox package to Iceweasel because they made modifications; surely bundling in adware also violates Mozilla's trademark policy?

It does. Google goes after them as well. However, there is generally a large delay before they manage to take them down and verification is always done post-display and not pre-display. You can report bad adverts at https://support.google.com/adwords/contact/feedback?hl=en and Google will eventually get to them, usually after their account balances have run down a bit. There's definitely no sign of any urgency on that front.

It turns out Debian is much more responsive to Mozilla requests than adware providers, and going through court takes a long time.

Why not complain to Google though? They have policies about the use of trademarks in ad text.


I've done so dozens of times, without result. Maybe those sites walk the thin line of that policy, or maybe Google just doesn't give a fk.

Iceweasel actually modified Firefox, this sounds more like bundling Firefox with other stuff in the same installer package. Vaguely reminiscent of distributing a Linux ISO that installs Firefox, along with other stuff...

Except that the Linux ISO is not called "Firefox." It would be OK for the Linux distribution to use the Firefox trademark descriptively to say that it includes Firefox, but that's not what these adware vendors are doing. They're calling their "Firefox bundled with some adware" Firefox.

Exactly: if you top off the air gap in a Coke bottle with piss, you can't go around selling it as Coca-Cola.

It's like that on purpose though. You can't be rude to your customer after all. In this case, advertisers are paying Google a lot of money so they don't have to go through things they wouldn't like, such as harsh verification. Users of Google search aren't really important in the same way. If you lose a couple search users to malware, who cares? At least you got the advert money up front.

I don't think this is their mentality; I think this is more like a myth that gets perpetuated ad nauseam.

The problem with a search engine is that there's no lock-in other than brand recognition. Google won over AltaVista and Yahoo by being superior. I still remember the first time I tried it, it was so much better that it made an instant convert out of me, even though typing "altavista.com" was rooted in my nervous system and this was in the days before they were big, before AdSense/AdWords/AdX. And I could see it at Internet Cafes catching like fire, within mixtures of technically oriented and unsophisticated users alike.

And it can happen again. What pains do normal users have when using Google lately? Malware, content farms, "aggregators", too many ads. The only reason for why Google is still number 1 with a near monopoly is because there is no better alternative. DuckDuck Go is awful for me. Bing too. You may not notice how awful they are, unless you're living outside the US.

And I'm pretty sure they know that they can lose users fast. And once a significant chunk of users is gone, advertisers will be gone too. That's why Android exists in the first place, though distributing Bing as the default doesn't really help Microsoft, so having your own platform only protects you against walled gardens. And there's an even bigger danger therein. Google doesn't even have to lose users for advertisers to leave - Google already knows that the majority of clicks on all served ads are done by a minority, and advertisers are increasingly aware of this fact too, as quality conversions are going down. This is because targeting is not so good after all and because users are increasingly fed up with spammy results and annoying ads that aren't targeted well.

I think their problem is that they are trying to solve this through algorithms only. The problem with algorithms is that algorithms can be gamed, you only need to find the ranking formula, which can of course be done through trial and error. It's a whac-a-mole game basically.

But for popular searches, like Firefox, they could have exceptions in place to propel the real thing to the top. Is it not obvious that users searching for Firefox actually want Firefox, the browser from Mozilla.org, regardless of Mozilla.org's ranking? Is this against their policy or something? And now that they have Google+ accounts, why don't they add a "Report Result" button? If flagging email as spam in Gmail works so well, why didn't they do the same thing for their search engine?

"The problem with a search engine is that there's no lock-in other than brand recognition."

This is not quite true. The 'lock-in' is the advertising channels. If you want to replace Google, you need to replace their lock on advertisers. I have direct visibility into the effectiveness of Google's, Yahoo's, Microsoft's, and third-party advertising networks, and I can state with certainty that if Microsoft were able to show Google advertising feeds on their search property, even with Google taking 20-30% off the top, it would be Microsoft's most profitable division, swamping the profits from either the Windows licensing stream or the Office licensing stream.

Your point about _advertiser_ lock-in is true in the short term.

However, the limited _user_ lock-in means that a 'better' search engine could take user share, which would then make it more attractive to advertisers.

Well in my case (@Blekko) I'm a startup that has worked at taking market share from Google organically. We actually crawl the web and index it, and that takes hardware and network bandwidth. I recently had the opportunity to look again at what a 'small' cluster would cost to run in EC2 (about $2M/month, so $24M/year). We don't do that, since it would be impossible to make any money if we did, but even just breaking even on that sort of investment is hard to achieve without advertising support. Trust me when I say that the search advertising business is very much a sausage factory.

You're just a startup, so you're money-strangled, but $24 million is pocket change for other companies. So what stops a bigger company that already has its own data centers and enough talent, such as Microsoft, or Apple, or Facebook, or Twitter, from creating a better search engine? I think it's simply because it's a very hard problem to solve.

Thanks. Is the point that it's chicken-and-egg? You need users to get ad revenue, but you need ad revenue to improve search quality and thereby attract users?

Many people, especially older ones and people with bad or old monitors, don't even realize they are ads. There's purposely no border, and a light background is used to confuse people into clicking ads.


I'm pretty sure this isn't intentional. No one at Google is going to be using a crap monitor. I've used similar colors in business software before. It wasn't until I was at a conference and saw our software on a crappy screen that I realized the required field color was white on some monitors.

Sorry, but you can't tell me that any aspect on the Google search results page, of all the pages on the internet, isn't fully intentional.

On my shitty laptop's screen they literally are indistinguishable.


Dealing with Windows nowadays is akin to cleaning a septic tank. I wish I were kidding.

I'm thinking someone should build a (Linux or other *IX) distro that scans the HD of an infected machine (booted from this distro, or with the HD removed and attached to another machine) and removes everything it finds. Working directly inside Windows is impossible.

> I'm thinking someone should build a (Linux or other *IX) distro that scans the HD of an infected machine (booted from this distro, or with the HD removed and attached to another machine) and removes everything it finds. Working directly inside Windows is impossible

I don't think that would be all that useful. If you have a malware-infected system then your focus should be rescuing user data, and not cleaning the infection (which is pretty much impossible to do reliably).

> your focus should be rescuing user data

Good luck figuring out whether the user data is the cause of further infections. Add some infected PDF, TIF, or JPG files and it could come right back. In the case of less common applications, there are likely thousands of libraries used by these programs with data-interpretation exploits waiting to be found.

>I'm thinking someone should build a (Linux or other *IX) distro that scans the HD of an infected machine (booted from this distro, or with the HD removed and attached to another machine) and removes everything it finds.

Kaspersky did it.


Also bitdefender: http://download.bitdefender.com/rescue_cd/2013/

and AVG: http://www.avg.com/us-en/download.prd-arl

although I am not sure if they are linux based.

They are. I've used the Kaspersky and AVG live disks. It's nice because you boot up and can download new definitions without something hogging your bandwidth, or having to click through a bunch of popups or whatever.

Trinity Resource Kit: http://trinityhome.org/Home/index.php?wpid=1&front_id=12

Among other things, it boots a live linux CD that's packaged with a handful of 3-5 antivirus scanners.
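Under the hood, these rescue disks all do roughly the same thing, which you can approximate by hand from any live Linux environment with ClamAV installed. A minimal sketch; the partition name /dev/sda2 and the mount point are assumptions you'd adjust for the actual machine:

```shell
# Boot a live Linux, mount the Windows partition read-only, and scan it offline
sudo mkdir -p /mnt/windisk
sudo mount -o ro /dev/sda2 /mnt/windisk   # /dev/sda2: assumed Windows partition
sudo freshclam                            # pull current ClamAV signatures
clamscan -r -i /mnt/windisk               # -r: recurse, -i: list infected files only
```

Mounting read-only matters: the malware never runs, and forensics on the disk stays possible afterward.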

I was going to suggest Chocolatey -- it's like Homebrew for Windows, driven by PowerShell, but it doesn't always install things where you want it to. And as it turns out, PuTTY might not be maintained as much as it once was -- or that's the rumour.

I was shocked by how many forks of PuTTY there are -- and most suck: https://github.com/FauxFaux/PuTTYTray/wiki/Other-forks-of-Pu...

Many recommend KiTTY but with an ad supported site that complains about adblock when you go SSL (after a warning) and includes hidden features -- well, that's not something I'd trust either.

My day-to-day dealing with Windows being very limited, I find this interesting. Is it your impression that the security features introduced since Vista have been ineffective and only taught naive users to always click the "yes" button?

My experience in a tier 1 tech support job is:

     1. The user clicks yes. They might read what they clicked yes to.
     2. The user doesn't usually read prompts.
     3. The user will not read error messages, even when they're to their benefit.
     4. The user doesn't know how to "google" problems.
     5. The user will sometimes ignore IT professional help even when they call us. This is usually a "bitch session" with no clear resolution.
     6. You will tell the user what to do, and they will say yes. They still don't understand.
     7. Users will install software and not understand how to remove said software (Control Panel > Uninstall a Program).
     8. Users are not curious. What would be obvious from simple reading, they will ignore.
     9. Only when the computing environment is unbearable will they call in. Or it will be the simpleton user who wants you to do their work for them.

Only because that's traditionally where the users have been. If everybody switched to desktop linux, the malware would target linux instead.

Linux doesn't have a tradition of downloading and installing random software, though, so there would be a bigger behavioral hurdle to get users to install the malware.

I think the behavioral differences are almost entirely because current Linux userbase is mostly people who have self-selected Linux over the default OS their laptop came with.

(And I still see a fair number of Linux install scripts that look like "curl ... | sh")

Linux has a nice repository, with nearly all the software you need available, signed with known good keys.

Linux does not have "your mouse pointer moved. Are you sure you want to proceed?" dialogs.

Linux has a manageable set of file permissions, including the "execute" permission being set by the users, not by any random server from where you download your file. (Yep, there was some regression here lately.)
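That execute-permission point is easy to demonstrate in a shell session; the filename here is made up:

```shell
# A file that merely exists locally (e.g. freshly downloaded) is not executable:
# the server you fetched it from cannot set the execute bit for you.
touch installer.sh                # stand-in for a downloaded file
test -x installer.sh || echo "not runnable"   # prints "not runnable"
chmod +x installer.sh             # the local user, not the server, opts in
test -x installer.sh && echo "runnable"       # prints "runnable"
```

The point is where the decision lives: on Linux the user grants execution after the download, rather than the file arriving pre-armed.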

And, of course, Linux is actually hard to compromise without user intervention. Differently from Windows.

If you really believe it's the users' fault, you have your head buried deep in the sand.

No you

No, it can be worse than that. The install instructions say:

  `curl http://example.com/foo/install.sh | sudo sh`
or, if it does request the script over https, there is a fair chance that the install script itself will then download over http.

Equally bad is adding a new key to apt (or another package manager) and a new source, then running apt-get install (which definitely runs as root).
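A less risky version of the same install flow is to download first, verify, read, and only then run. A sketch; the URL and filenames are placeholders, and it assumes the project publishes a sha256 digest:

```shell
# Fetch the script and its published checksum instead of piping curl to sudo sh
curl -fsSLo install.sh        https://example.com/foo/install.sh
curl -fsSLo install.sh.sha256 https://example.com/foo/install.sh.sha256
sha256sum -c install.sh.sha256   # aborts loudly if the digest doesn't match
less install.sh                  # actually read what is about to run as root
sudo sh install.sh
```

Note this only helps if the checksum comes over a channel the attacker doesn't also control; a GPG-signed checksum file is stronger than one sitting next to the script.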

I blame that on overconfident MacOS X developers not understanding best practices for security.

The tradition was there, and it sucked.

Hunting the Internet to track down random dependencies, watching freshmeat.net every day for new releases, and fighting with all of the different build systems was what I did every day ca. 1997.

Yeah. The key word here is "was".

In other words, installing (and upgrading!) software in Windows has not made much progress since 1997. I blame MS's decision "oh sure, you can integrate your program into Windows Update, it will only cost you $BigBucks per year."

No, you just have to install from source if you want something that your package manager doesn't provide and then hope you can actually get all the dependencies compiled.

Windows is such an open door (NPI) it's incredible. Stupidly trivial malware can rewrite desktop icon shortcuts to append some URL so the browser will start on it. I spent hours trying to help a neighbor before stumbling on this 'hack'.

I'm only familiar with F-Secure, but I imagine any AV vendor would have something like this (but yeah, what if system files were infected?): https://www.f-secure.com/en/web/labs_global/removal-tools/-/...

I don't believe the original example was contrived. It sounded like a real series of steps that this guy went through. But you're right, it's even worse if you don't have any idea what you're doing.

This is somewhat mitigated with the Android Play store and the iTunes app store. I haven't used the Win8 store thing yet, but it should resolve these issues for most users. That's why even though I prefer an open ecosystem for my own devices I do appreciate the need for semi-closed ecosystems for non tech savvy users.

> Turns out, the user's computer was infested with spyware and garbageware

You're recovering from a system compromise, an unrelated problem from downloading Firefox. Hints to get started: unplug net, image the HD for later forensics, reformat & reinstall OS, carefully restore non-executable data files from backup.

Like I said, we are tier-1 tech support. I work for a university. We also do tech support for another university (contracted out to us), and that other university is a 2-year trade school in Indiana.

Our service is mainly troubleshooting simple and mild problems, and escalating difficult issues to the relevant departments. A problem like spyware is a constant in our job, causing problems ranging from system instability to mal-rendering web pages.

Ideally, a backup/wipe/reinstall is the right solution here, BUT these are regular non-tech-savvy users. The computer is a means to an end here. "It goes beep-blip-boop the way my teacher wants" is the kind of thought. Explaining that a reinstall is ideal ignores the fact that this operation would cost ~$200 at Best Buy or a similar place.

Frankly, I use what tools I can. If that means manually uninstalling what I know is spyware, rebooting into safe mode and running Malwarebytes, I'll do exactly that. At least it triages the problem until next week, when the user goes and downloads something from Adfly or Softonic... again.

The user would get a blank look in their eyes if you told them that!

save yourself some potential issues and use getfirefox.com

Unfortunately, they would just search that as well. And if the machine is already infested, it's a conduit search at that.

The problem is Windows. IE is too easy to hack and break. Drive-by exploits are routine with IE and Windows Firefox. There is little security and process isolation in Windows to prevent an active exploit. Worse yet, I can't even delete an "open file" without special tools. In any other OS, that's not a problem. The file gets removed. Windows acts like a toddler.

BTW, Xubuntu works well here for friends and family whom I support. It just works, they're nigh invulnerable to exploits and problems are a kill and rm away.

EDIT: For the user whom penalized my comment score, why?

Probably because you claimed that the example was contrived when it is actually very real, is something many visitors to HN have specifically faced, and is a particularly ironic case given that the target software is critical security software, essentially handed the keys to the kingdom.

Ultimately many of us simply have to assume that Google's PageRank is the surest sign of the credibility of a link, but of course we know that can be gamed. And even had this site been on an appropriate domain with HTTPS, even that shows little given the complete lack of vigilance by most SSL cert firms now (partly at the behest of the tech community, who find the cert process annoying and expensive).

I have a few objections to the way the article's author laid out his/her argument.

     1. There was no mention of http://putty.en.softonic.com/
     2. If you are looking for an SSH client, you are not a normal user and should know that Cygwin or PuTTY are freely available.
     3. Using whois is considered a more advanced end-user technique.
     4. Continual complaints that the download server doesn't have SSL. It's annoying and bad, but trying it again and again is kind of pointless. You made your point.
     5. Tries to get the author's key via the MIT PGP server. This is NOT end-user stuff here.

That was exactly his point, I guess. If an advanced user struggles like this to securely download some piece of software, imagine a regular end-user.

> This is NOT end-user stuff here.

Did he claim it was? How is that relevant?

You seem to be arguing that the article wasn't an entirely different article. As __david__ rightly mentioned, he doesn't say "this is what the average person encounters". Indeed, I think the situation is even more profound given that it is, quite specifically, talking about security software that fairly adept practitioners use.

I hate to be _that_ guy, but for someone who values the opinion of others as you apparently do, using 'whom' on the internet isn't likely to win you any friends. Let alone using it incorrectly...

No you don't. If you do, just don't be that guy :)

Eh, fair enough. He just struck three nerves on a Monday morning.

1. proselytism 2. grammatical solecism 3. inflating the value of a "downvote"

In hindsight, I should have kept my mouth shut. But hit those three before my caffeine deficiency has been addressed, and I'll likely find myself in _that_ role.

> 2. grammatical solecism

Is there any other kind? I will admit the whom-bomb was unpleasant.

OP: If the answer is not him/her the question is probably not "whom."

If you think you're safe: it's the same thing with Linux. Yes, good distros sign their blobs and you can probably verify that with builtin tools.

However, consider how distros generate their signed binaries:

1) A packager downloads a random tarball off the internet, often over HTTP and/or unsigned and unverified.

2) The packager uploads the same tarball to the distro build system (you trust them, right?)

3) The packager's script for building the program or library is executed by the build server (you trust all of the packagers, right? they have implicit root access to your machine during pkg install.)

4) The packager's script likely invokes `./configure` or similar. Now even if you trust the packager, the downloaded source has arbitrary code execution. You verified it, right???

(Not trying to advocate for webcrypto. And I'm a Linux user. But I'm also a packager, and I have some awareness as to how one would go about pwning all users of my distro.)
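Step 4 is worth spelling out: nothing constrains what an upstream `./configure` does. A hostile tarball only needs something like this hypothetical script, which runs with the build user's full privileges:

```shell
# A "configure" script is just a shell script; autoconf-style output is a
# convention, not an enforcement. This made-up one looks routine but
# executes arbitrary code on the build host.
builddir=$(mktemp -d) && cd "$builddir"
cat > configure <<'EOF'
#!/bin/sh
echo "checking for gcc... yes"
echo "checking build environment... done"
echo "pwned: arbitrary code ran as $(whoami) on the build host"
EOF
chmod +x configure
./configure
```

Sandboxed build servers (chroot, VM) limit the blast radius, but anything the malicious script writes into the built package still ships to every user.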

Sure, but you have to trust someone. How do you know the baker you bought some bread from didn't put hallucinogenic drugs into your bread this morning?

The key is to limit the number of people you trust and remove instances where you mistakenly trust more people than you believe you do. When downloading a .exe over HTTP, you trust an unknown number of people working at each company your packets hop over to reach the server. You are implicitly trusting each and every one of an unknown number of people with direct root access.

With a Linux distro this is different: you are trusting the distro and any employees/volunteers of that distro. You trust that the distro is actively vetting the people involved - or is at least in a position to publicly name them if they break the trust of users, etc. Ultimately you do still have to trust someone, though.

Debian, at least, has proven to be fairly trustworthy so far. Who has access to ae-5.r23.londen03.uk.bb.gin.ntt.net and what do I do if they MITM my traffic? EDIT: And why can't they spell London correctly?

Londen is the Dutch spelling of London, so could be a network link maintained by some Dutch provider?

ntt.net is Japanese...

> 1) A packager downloads a random tarball off the internet, often over HTTP and/or unsigned and unverified.

Then they're being remiss in their duties.

> 2) The packager uploads the same tarball to the distro build system (you trust them, right?)

Yes, I do.

> 3) The packager's script for building the program or library is executed by the build server (you trust all of the packagers, right? they have implicit root access to your machine during pkg install.)

There is at least traceability here. There are a large number of packagers for my distro, true - but they are required to personally sign for the packages they upload. If one of them turned out to be malicious, I don't think this would be without consequence.

Honest questions: what do you think the consequences would be, and how do you think they would be enforced?

I think they'd be banned from the project. If it looked to be malicious, I can see a lawsuit happening, though that would probably be a slow process and end in a settlement of some sort. Packager identities are verified against legal identity documents; depending on your threat model that may or may not be an effective barrier - a nation state can probably afford to burn a few identities, but regular criminals not so much.

It might not be malice on the part of the packager. It could be that their machine is deliberately compromised.

It would certainly make a big fuss.

First, the identity of that person would be stigmatized to the point where it wouldn't be usable anymore to gain the trust of other projects. Publishing rights would certainly get revoked for that user.

Then all packages published by him/her would need to be analyzed for further exploits, and discussions would happen to avoid similar future issues. If possible, a patch/fix would get published by the distribution.

Well... To me there are two very serious issues with typical packages for Linux (and I'm a long time and a die-hard Linux user, so I'm not criticizing Linux here).

One of them being that you typically must be root to install packages. This means that if anyone manages to slip a backdoor into any moderately used package, it probably means "root" on many Linux systems.

Some people have been complaining about that for years. Thankfully we're now beginning to see things like "functional package managers", where packages can not only be installed without admin rights but can also be "reverted" back to exactly the same "pre-package-installation" state if wanted.

The other very serious issue is that most package builds are not deterministic. I think everybody should begin to take security seriously and realize that deterministic builds are the first (and certainly not the only) step towards software which can be trusted a bit more.

There are thankfully quite a few people who are now taking the deterministic-builds route, and one day we should, at last, be able to create the exact same package on different architectures and cross-check that we've got the same results. This won't help with backdoors already present in the source code, but it's already going to be a huge step forward.
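The cross-check that deterministic builds enable is simple in principle. A sketch, with two local files simulating the artifacts produced by two independent build hosts:

```shell
# Two independent builders compile the same source; if the build is
# deterministic, the artifact hashes must match. A mismatch flags a
# compromised toolchain, build host, or injected backdoor.
printf 'identical-binary-output' > build-host-A.bin
printf 'identical-binary-output' > build-host-B.bin
hash_a=$(sha256sum build-host-A.bin | awk '{print $1}')
hash_b=$(sha256sum build-host-B.bin | awk '{print $1}')
if [ "$hash_a" = "$hash_b" ]; then
    echo "builds match: reproducible"
else
    echo "MISMATCH: investigate the build environments"
fi
```

The point is that no single build host has to be trusted: any user or third party can rebuild and compare.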

So, yup, I take it that, of course, as a packager you know how to pwn all your users.

As a user I wish we had a) deterministic builds, b) functional package managers, c) packages which can be installed without being root.

If we had that, there would be less ways to pwn all the users of one package at once. I'm a Debian user since forever (and I love the rock-stable Debian distro) and I'm not expecting Debian and other big distros to move to such a scheme anytime soon (it's probably too complicated) but there may be a real opportunity here for newer distros who'd want to focus on security.

> One of them being that you typically must be root to install packages.

  ./configure --prefix=$HOME/opt/$pkgname && make && make install
It's not pretty, I'll admit, and package managers could probably help here, if they're building from source.

> but can also be "reverted" back to exactly the same "pre-package-installation" state if wanted.

For system-wide packages, most package managers do support this. Since they don't support user-only packages, of course reverting an install isn't going to happen.

If you've installed it yourself, `rm -rf $HOME/opt/$pkgname`.

> The other very serious issue is that most package builds are not deterministic.

Deterministic builds are hard.

> be able to create the exact same package on different architectures and cross-check that we've got the same results

Unless you're cross-compiling, different architectures by definition nets you different builds. Even within an architecture, differences in feature sets (take advantage of Intel's shiniest instruction?) and compile time options (use this library?), where to install, etc. cause the number of possible build combinations to multiply quickly. Binary distros like Debian have it a bit easier, as they usually distribute a lowest-common-denominator binary with all features, but some distributions (I'm a Gentoo user) let you tune the system more.

Even if you had all the things you name, you still have to trust whoever is packaging your software. Or build it yourself after reading the entire source. (And then there's the chicken-and-egg problem with the compiler.)

Manual builds work fine for some programs, but when you have to have a dozen dependencies that now also have to be built manually, and those have their own dependencies...

Just a quick note for those following along at home, I recommend having a look at xstow for managing custom package(trees). Basically it's:

    mkdir ~/opt/xstow
    cd /tmp
    # get package (verify signature)
    cd package
    ./configure --prefix=$HOME/opt
    make install prefix=$HOME/opt/xstow/package-version
    cd ~/opt/xstow
    xstow package-version # xstow -D to "uninstall"
I find that helps a lot when you need to install new versions, and don't want to worry about cruft left over -- and also simplifies handling of PATH, MANPATH, LD_LIBRARY_PATH and LD_RUN_PATH.

Some packages are a little harder, but can sometimes be tricked to behave by moving ~/opt/xstow out of the way, doing a make install and then moving ~/opt to ~/opt/xstow/package-version-etc and xstow'ing.

> The other very serious issue is that most package builds are not deterministic.

It's virtually impossible to create deterministic builds of common software. There are random sources of data and variables all over the place. And more importantly, deterministic software != secure software. You could make a perfectly deterministic piece of code, compile it, and run it, all the same on all hosts. It can still be rife with security holes.

I don't know what you mean by 'functional package managers'. There's plenty of 'functional' package management software out there used by millions of people every day. If you just mean easier to use, there's that too.

You can already install software without being root, very easily in fact. It just won't work very well because a lot of software is designed to operate with varying levels of user and group permissions, and varying system capabilities. And again, more importantly, there are plenty of privilege escalation exploits released all the time that could get to root from your user. Malware doesn't even need to be root if all it wants is control over your browser sessions or to siphon off your data. Root-installed software is as big a deal to a single user system as a vuln in your copy of Adobe Flash.

> It's virtually impossible to create deterministic builds of common software.

Nope, it's definitely not. If projects like Tor and Mozilla (they're working on it) can do it, then the 99.99% of packages out there which are less complicated than Tor / Mozilla can do it too.

> It can still be rife with security holes.

You're just rewriting what I wrote: I didn't say it would mean the software would be secure. I wrote it would already be a huge step forward.

> "I don't know what you mean by 'functional package managers'."

I mean for example this:


> Root-installed software is as big a deal to a single user system as a vuln in your copy of Adobe Flash.

Definitely not. Especially in a system like Linux where it's easy to have multiple user accounts (including one used just for surfing). I'm a "single user" and I do have several user accounts on a single machine (including a user account which I use only for surfing the Web). No Adobe Flash here btw and no Java applets either (I'm a dev and I do typically target the JVM, but there's no way I'm allowing Java applets in a browser) ^ ^

You say: "deterministic builds cannot be done", "there's no point in having deterministic builds because there could still be security holes", "local exploit is as bad as root exploit on a single-user machine"...

And I disagree with all that. And thankfully there are people disagreeing with you too and working on tomorrow's software packaging/delivery methods.

I thank Tor, for example, for showing us the way. The mindset really needs to change from "it cannot be done, it's too complicated" to "we can do it, let's follow the lead of the software projects showing the way". There's a very real benefit for users.

The "why":


The "how":


Honestly I simply cannot understand why there are still people arguing that deterministic builds aren't a good thing or people arguing that it cannot be done.

It can be done. And it's a good thing.

"A packager downloads a random tarball off the internet, often over HTTP and/or unsigned and unverified."

Unless the packager is on the mailing list for the project, which many are as it helps them keep up to date on changes.

"The packager uploads the same tarball to the distro build system (you trust them, right?)"

Now the package is in one place, so when you say, "SSH on Fedora seems to open a connection to this random server in Ft. Meade!!!" everyone else can check and see if that is what is happening. Now you have thousands of people investigating the bug -- not so bad. Compare this to, "I downloaded something that is supposed to be PuTTY, which I found via a Google search, and it is acting funny!"

The fact that everyone who uses Fedora or Ubuntu is running the same code is pretty helpful. It is not much, but it does help.

"The packager's script for building the program or library is executed by the build server"

In a chroot jail, or an SELinux sandbox, or a VM, or any number of other environments that help to isolate the build process from the rest of the system. In theory, the build server has quite a bit of protection from malicious packagers.

Also worth noting is that packagers' actions are logged and would probably be audited if a user sounded the alarm and nobody could figure out what was happening. It would take a lot to pwn the users of a distro in any meaningful way, because keeping it secret is hard -- your victory would be short-lived.

The whole "package signing" thing can be validated against the source tarball with a bit of due diligence. The key is "source packages". As in, `apt-get source`. See:


When you do that you'll get the original tarball that was used to create the package along with any patches. You can compare that tarball to one you can find on the Internet (usually in more than one location), and upstream usually publishes md5/sha1 checksums.
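The comparison can be sketched like this (the package name is only an example; the apt-get and download steps need network access and a deb-src line, so they're shown as comments, with local files simulating the two tarball copies):

```shell
# Real steps (need deb-src enabled and network access):
#   apt-get source somepackage       # distro's .orig.tar.gz + patches
#   wget http://upstream.example.org/somepackage.tar.gz   # independent copy

# Simulated here with two local files standing in for those downloads:
printf 'upstream-tarball-bytes' > somepackage.orig.tar.gz
printf 'upstream-tarball-bytes' > somepackage-upstream.tar.gz
md5sum somepackage.orig.tar.gz somepackage-upstream.tar.gz
cmp -s somepackage.orig.tar.gz somepackage-upstream.tar.gz \
    && echo "tarballs identical" || echo "tarballs differ: investigate"
```

Byte-identical tarballs mean the distro packaged what upstream actually released; a difference is worth digging into, even if (as below) it turns out to be benign.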

The rabbit hole goes very deep and yet there are people (like me!) who actually do do that sometimes. I suspected that the tuxtyping package may have been modified (due to corrupt .wav files being included) and went through the whole rigamarole of validation. I double and triple-checked the signatures against the binaries, sources, and everything else I could find in the package repositories (and mirrors).

Turns out it was just some filesystem corruption that added some extra bytes to the tail end of those .wav files. They're harmless.

While this is all true, there are a couple of other important considerations:

1) packagers have a history with their packages. They fetch from the same source, verify with the existing GPG key, etc. They aren't fetching from a different random source for each build. As part of the package review process the upstream source should be reviewed and confirmed.

2) Everyone in the distribution is using the same code. Unlike Windows/OSX the users get the same binaries from a single source. This allows problems to be corrected quickly for everyone at the same time.

consider how distros generate their signed binaries

Do you have any actual evidence that Linux distros do packaging this way?

But I'm also a packager, and I have some awareness as to how one would go about pwning all users of my distro.

And do you actually do so? If so, please tell me which distro you package for so I can avoid using it. If not, why do you think other packagers do?

Different distros require different metadata for their packages. But in general amongst the top 10 distros, almost all metadata in a package is optional. You don't have to specify the URL where it came from or who the author was in order to get a package accepted. Some distros do require a URL, mainly because they build from source to install on a system. But other distros merely accept a source-package which bundles the source code and that's built on their build servers and released after being peer-reviewed. But the peer-review is a manual process, so it's human-fallible.

As an example, let's compare the way two distros (Fedora and Debian) package an old piece of software: aumix.

Taking a look at this spec file [1] for fedora, we see two pieces of metadata: a URL to a homepage, and a URL to the software. The URL is not used for packaging at all; it's merely a reference. The URL to the file can be used to download the software, but if the file is found locally, it is not downloaded. And guess what? That source file is provided locally along with the other source files and patches in a source package. So whatever source file we have is what we're building. This file doesn't contain a reference to any hashes of the source code, but the sources file [2] in Fedora's repo does.

With Debian we have a control file [3] that defines most of the metadata for the package. Here you'll find a homepage link, which again isn't used for builds. The path to a download is contained in a 'watch' file [4], which is again not referenced if source is provided, and generally only used to find updated versions of the software. There are no checksums anywhere of the source used.

The source to aumix actually provides its own packaging file [5], provided by the authors. Apparently the URL used here is an FTP mirror, not the HTTP mirror provided by the earlier packagers. Could that be intentional or a mistake? And could they possibly be providing different source code, especially considering the hosts themselves are different?

It's clear that there's a lack of any defined standard of securely downloading the source used in packages, much less a way of determining if the original author's source checksum is the same as the packager's source checksum. There are several points where the source could be modified and nobody would know about it, before the distro signs it as 'official'.

[1] http://pkgs.fedoraproject.org/cgit/aumix.git/tree/aumix.spec... [2] http://pkgs.fedoraproject.org/cgit/aumix.git/tree/sources?h=... [3] http://anonscm.debian.org/viewvc/collab-maint/deb-maint/aumi... [4] http://anonscm.debian.org/viewvc/collab-maint/deb-maint/aumi... [5] http://sources.debian.net/src/aumix/2.9.1-2/packaging/aumix....

These are all good details about how much information various distros give me, the user, about the sources they're using for their builds. I certainly agree that it would be nice for them to give a lot more.

But this is still secondary to the basic point: as a Linux user, I get packages from my distro, not from the upstream source, so I don't have to go searching around the Internet for packages or package updates, wondering whether I've got the right source, wondering why there isn't an https URL for it, etc., which is what Windows users have to do according to the article (and OS X users too, for the most part, though the article doesn't talk about that). The distro does all that, and either I trust them to do it or I don't (in which case I go find another distro). The fact that the distro doesn't make it easy for me, the user, to see how they verify the sources they use, does not mean they aren't verifying the sources they use.

Also, while it's true that the distro verification process is human-fallible, as you say, and it would be nice if every OSS project made it easy for distros to automate the process instead, it's still a lot less human fallible than having every single user go searching around the Internet for software. Distro packagers at least have some understanding of what they're doing, and they at least know who the authoritative source for a particular package is supposed to be without having to depend on Google's page rank algorithm.

Yes, Linux software is generally less prone to erroneous installs than Windows software, when it is done through your distribution. However, I think a parent commenter was pointing out how much easier it is to hack all of the users with this unified system of installation.

Is searching for, downloading and installing Putty actually resulting in users with malware-laden files? It would seem not, as the highest-ranking results for Putty are the official ones, and downloading/installing is a breeze once you get to the official page.

For software that's a more likely target for scams (like Firefox) you'll find a lot more user error and potential for abuse. And consider that many users may download and install Firefox by hand instead of using their distro (it's faster and less complicated). And similar to the attacks on popular Windows end-user software, Linux server software is often a higher-value target and has also resulted in users unknowingly installing insecure software, as we've seen in[1] many[2] cases[3].

Realistically the only thing keeping Linux more safe is that the user base and culture are different. But it would be naive to assume that somehow distro packagers are a more trustworthy source of files than the ones you could find on your own. It would seem to completely depend on the application and the user.

[1] http://www.darkreading.com/attacks-breaches/open-source-proj... [2] http://arstechnica.com/business/2012/02/malicious-backdoor-i... [3] https://security.stackexchange.com/questions/23334/example-o...

It looks like Windows 8.1 is whitelisting PuTTY by hash or signature: nothing to see here.

Repro steps (Windows 8.1, desktop IE 11 or Chrome 33):

1. Download putty.exe from any shady source

2. PuTTY runs without prompting

3. go to mega.co.nz (an extremely shady source), upload your copy of putty.exe

4. download it again

5. this version of putty.exe also runs without prompting

6. open your hex editor of choice, change a byte in a text string

7. upload this tampered version of putty.exe to mega.co.nz

8. download and run it

9. observe full-screen modal red banner: "Windows Protected Your Computer" requesting an Administrator password to run suspicious binaries.

If almost all binaries are treated by Windows as suspicious (in general: if there's a whitelist), then a request for an administrator password will be unconsciously and automatically given.

I think a giant red banner shown to an experienced user (someone installing SSH on Windows) will give pause to any user who needs to care about this sort of thing.

Alternative explanation: there's no central whitelist, windows just checks to make sure that the internal checksums all match up?

(I don't know which is true, but if we're speculating, I imagine the latter would be more realistic than a huge checksum database bundled with the OS. Though maybe it's a small checksum database that only includes the personal favourite tools of the windows developers? :P )

I don't know how exhaustive the database is, but it looks like it just phones home to MS: http://news.softpedia.com/news/Windows-8-Secrets-the-Built-i... (second screenshot). Might do the same "bloom filter plus server check for false positives" that browser malware filters use.

Checking with a valid but extremely uncommon downloaded binary shows the same red banner (as well as a Chrome warning).

Before reading the article, I wanted to write a rant on why TFA is wrong, based solely on the title :-) Alas, I was wrong, especially because I downloaded PuTTY myself from putty.org whenever I happened to play with Windows machines, without once considering that putty.org is not the official source. And I'm a very security-conscious user; if I can't protect myself, then normal users don't stand a chance.

Just a note - PGP signing renders HTTPS unnecessary for downloading the binaries themselves; it works by establishing a chain of trust, and the problem is with distributing the public key. It's the public key that must be distributed either over HTTPS and/or through a public key server, letting other users digitally sign your certificate and thus endorse the association of this public key - a system that works great for popular repositories of software (e.g. Debian), in which the participating developers/maintainers know each other. Once the authenticity of the public key is correctly established, there's no way for an attacker to create/forge the signed binary, unless said attacker gets hold of the private key, which is way more difficult than hacking a web server, as normally private keys don't end up on those servers (so it is more secure than HTTPS). For example, in Ubuntu, if you're willing to install packages from third-party PPAs, you first need to indicate that you trust the public key with which those packages were signed, otherwise apt-get will refuse to install said packages.
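Once the public key is trusted, the verification step itself is mechanical. A sketch using a throwaway key generated locally so it runs offline (in reality the public key comes from the developer over HTTPS or a keyserver, and the .sig file ships next to the binary; file names here are illustrative):

```shell
# Throwaway GnuPG home so nothing touches the real keyring.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Stand-in for the developer's key (normally fetched and vetted, never
# generated by the person verifying!).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.com default default never 2>/dev/null

printf 'pretend-binary' > putty.exe
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign putty.exe                   # what upstream does at release

# What the downloader does: verify the binary against the detached signature.
gpg --verify putty.exe.sig putty.exe 2>/dev/null && echo "signature OK"
```

Any change to the binary after signing makes `gpg --verify` fail, regardless of which mirror or transport delivered it - which is exactly why signing covers cases HTTPS alone doesn't.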

A reasonable alternative to PGP signing is S/MIME signing, which is more user-friendly, as it doesn't involve the users vetting scheme, but rather certificates are issued by a certificate authority, just like with HTTPS/SSL. S/MIME is weaker against the NSA, but it does work well for signing stuff and it's more user friendly, because to establish trust, you only have to trust the certificate authority (and of course the developer).

Binaries on OS X are also distributed signed with the developer's key, and OS X refuses to install unsigned binaries or binaries signed by unknown developers, unless you force it to. And while I have mixed feelings about the App Store direction in which Apple is taking OS X, I've begun to like this restriction, in spite of the money you have to pay yearly to register as a developer (as long as you can download signed binaries straight from the Internet and thus aren't completely locked into Apple's walled garden, it's all good). Signing binaries and having a user-friendly way to establish trust in the signing key should be the norm in all operating systems.

As an aside, Windows has had code signing for a LONG time:


If I still ran a bunch of Windows machines used to access the internet, I'd seriously attempt to run with group policy set to block unsigned executables and, eventually, trim the list of trusted code signing CAs as well.

As someone who occasionally builds open-source programs for Windows, code signing is a nightmare.

Now, not only do you have to use some weird kludge like Mingw or Cygwin to get an autotools-based program to build on Windows, you now have to either pay a bazillion dollars to some shadowy certificate cartel in order to produce a binary, or convince users to click through a scary-looking dialog which assures them your program is an evil virus that will steal their data, format their hard drive, and empty their bank account.

It's nothing more than a protection racket -- "Sure would be a shame if 'somebody' started telling users malicious lies about your program...Now, about our fee..."

> you now have to either pay a bazillion dollars to some shadowy certificate cartel in order to produce a binary, or convince users to click through a scary-looking dialog which assures them your program is an evil virus that will steal their data, format their hard drive, and empty their bank account

Aren't we talking about a fee on the order of one developer-hour per year? I share the annoyance that there isn't a better way to handle trustworthiness, but look at it from Microsoft's perspective: the odds are actually quite good that your users are seeing that scary dialog because someone actually is trying to scam them into installing adware, trojans, etc., which will degrade their system and almost certainly lead to complaints about how slow / buggy Windows is.

I think the long-term answer is going to look a lot like Apple's sandboxing on OS X: app-level permissions and scariness of dialogs proportionate to how much access an app wants outside of that sandbox. Until that's widely accepted, people are going to double-down on the code-signing path and that's going to lead to a lot of intentional slowness in the verification process. This sucks but I don't see a better solution with the current security models.

Never knew Windows had it, but then again I'm not a Windows user. Whenever I've interacted with Windows machines, though, it's been easy to install unsigned binaries downloaded from wherever, with warnings of various kinds - the one I distinctly remember being "this app was downloaded from the Internet, are you sure?", which is basically useless.

Is this policy not enabled by default in the various versions of Windows? I think it is in Windows 8, right? What about 7? And why don't developers use it?

> And why don't developers use it?

1. Because code signing certificates are not cheap (although with some tricks you can shrink the price a lot: http://stackoverflow.com/questions/3091938/cheap-code-signin...)

2. Because code signing is snake oil. Lots of malware is signed, even with leaked driver certificates of some Asian hardware manufacturers; thus getting some leaked code signing certificate should be easy if you are in the business of writing malware.

Well, code-signing with a certificate authority involved works only if you can trust the certificate authority AND the developer that signed the binary.

Debian/Ubuntu's main repositories work by vetting for the maintainers that package the official binaries, with the proper keys being distributed as part of those distros. OS X works by validating if the key is a developer registered with Apple.

No system is perfect, I'm sure there's plenty of malware available for Debian/Ubuntu or OS X, including repackaged popular software that looks almost legit to the unsuspecting eyes, but this process works much better if you can have a second authority (in addition to the certificate authority) that is able to say "this developer did bad things so his key is no longer considered valid" and this authority must be the maintainer/governor of the operating system as there's nobody else that really cares about malware. For Debian/Ubuntu that would be the community / Canonical, for Windows that would be Microsoft, for OS X that would be Apple.

I think what's actually missing is the ability to truly revoke a certificate and issue a new one for a piece of software. This doesn't work with Windows code signing for a number of economic and technical reasons. For example, you can't verify that a certificate hasn't been revoked when installing offline, and you can't re-sign packages you have given to other developers to distribute.

So what this means is that code signing does not work for redistributable software or for hard media very well. Ideally it would work if you have a single trusted online source for signatures but then this renders the system far less useful for many applications.

I am not sold on code signing. We sign our code and centrally publish the signatures separate from the code, so that while the code may be signed, the signatures can be updated if we need to change the key.

It's not that the developer does something bad but that if you can't revoke a compromised key effectively and safely for all parties, then you can't revoke it at all. And code signing in its current incarnations (whether with RPMs or MSIs) has serious problems there.

If you don't need Windows Logo Certification, you can use a Comodo code-signing certificate for $99/year, and apparently some resellers sell it as low as $66/year. It's maybe not "cheap" for a freeware hobbyist, but most small commercial companies/developers should be able to cover that.

Another reason not to use it is that some developers have tried A/B split tests and found that users are slightly less likely to install the signed version. It will depend on your userbase, of course.

>comodo certificate

Sorry for slightly off topic message, but do NOT buy comodo code signing certificate, especially if you are a person and not a business.

Their verification procedures are really insane and their support is terrible. That's probably why they are so cheap.

I bought mine as a single-person business, and verification was indeed a huge hassle. However, I purchased it through a reseller (KSoftware: http://codesigning.ksoftware.net/ ) who acted as support and escalated issues with Comodo on my behalf. They also have a free Windows app that makes code signing easier, though I think I still use the X2Net Code Signing tool in my build chain.

If you want a reason to be wary of Comodo, there's the Comodo security breach incident to consider: http://www.infoworld.com/t/authentication/weaknesses-in-ssl-...

Early stage startup that ships Windows software here: no, code signing certs are not that expensive.

We paid $397 for a two-year code signing cert with DigiCert. Extended validation, which we would have happily paid for, costs about 2x but requires physical access to our build server (which we don't have, using Azure / EC2).

The price we paid for our code signing cert is comparable with the SSL star cert that we use - probably actually cheaper on an annual basis.

Developers do use it.

If you don't use it, IE gives you scary warnings that you have to fight through to download the software.

Most installers for windows software are signed. I just checked my downloads folder and of the more than a dozen installers in there every one was signed.

> I was wrong, especially because I downloaded Putty myself from putty.org, whenever I happened to play with Windows machines, without thinking once that putty.org is not the official source.

If you had used duckduckgo, you would have known better:


Official site. It's one of my favourite DDG features. It's just weird how Google doesn't have it.

Here are Google's results for "putty" (first two links are the official website): http://imgur.com/SVWJhmg,UV3UT1w#1 ; Here are DuckDuck Go's results for "putty" (first link is putty.org, second is the official source): http://imgur.com/SVWJhmg,UV3UT1w#0

Granted, DDG does give better results for "windows ssh client".

I don't use DDG because it's awful for users not living in the US. I'm not talking about the interface, but about local results. It also doesn't get the context well - when I search for Ruby or Python on Google, I get different results than my wife does ;-)

> Here are DuckDuck Go's results for "putty" (first link is putty.org, second is the official source): http://imgur.com/SVWJhmg,UV3UT1w#0

That's weird; when I just tried ddg, the correct site was first and putty.org was second.

Note, just tried it, and if you put a more likely-cased "Putty" or "putty" in the URL bar, the official-site badge isn't offered (though the PuTTY official page does show up in the top 3).

It's really AOL keywords all over again - nothing bad, but just not Google's style.

But what it does give you is the option to re-search on "PuTTY" from the quick-info box, which I almost always do when looking for an authoritative info source.

I just tried 'putty download' on DDG (what I would have probably used on Google). putty.org was first, followed by chiark.greenend.org.uk/~sgtatham/putty/download.html (but lacking any indication this is the right one) and cnet, which I already know sucks. On Google, www.chiark.greenend.org.uk came first for that same query. Entering 'putty' alone on Google also led straight to chiark, even with "private results" hidden.

Then I tried 'putty' alone on DDG. I got a Wikipedia-style disambiguation (cool!), and clicking the obvious alternative led directly to the chiark result, this time bearing an "Official site" badge.

The moral is obvious. You can't trust code that you did not totally create yourself. -- Ken Thompson[^1]

[^1]: Reflections on Trusting Trust. ACM Turing Award Lecture, 1984, https://dl.acm.org/citation.cfm?id=358210

That one is a must read. It definitely opened my eyes when I first read it, and I thought of myself as someone very security conscious at the time.

Since then, I think I've actually become less security conscious though. The sheer number of ways you as a user of any of the useful parts of the internet are screwed is mind boggling. At some point, you just throw your hands up in disgust.

Dude, I don't even trust code I do write myself. I can trust myself not to be malicious. But I can't trust myself to be perfectly vigilant against cutting corners all the time (everyone does this, especially those who think they don't). And I can't trust my coding to be perfect.

That is about as helpful as telling people that they can't trust food that they didn't grow themselves. It has no relevance in the modern world, where unfortunately you can't easily grow your own food. In other words, pick your poison and your battles.

Did you bother reading it? Did you get to this part:

To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.

I didn't read it.

My reaction was to the statement which I replied to. I would take that either as a summary of the document or your conclusion on the document.

Is it necessary to only comment after a full reading and understanding of the base document that someone is summarizing?

If someone pulls out a phrase "The press must learn that misguided use of a computer is no more amazing than drunk driving of an automobile." (from the same document) I think that stands on its own as worthy of replying back to without seeing what else has been written that might clarify it. I don't think this is cherry picking and pulling things out of context either.

First, I would expect that if someone like Ken Thompson says something like this there is more to the comment than you are going to get from the blurb. You really must read it.

Secondly, if you read it, you will recognize that what he is talking about is a real issue, particularly in the post-Snowden era and is totally relevant to the article. And his point really is that you can never be sure there isn't a backdoor planted somewhere in your software or hardware. Even if you write all the code yourself, it is still operating in an environment that you did not build.

Understanding this problem at its root is the beginning of an understanding into why depth is so important to IT security and why all our current approaches at trusted binaries are inadequate at least on their own.

Reading three pages by Ken Thompson takes effort. Retweeting some Snowden meme is easy.

What can you say to someone that thinks Ken Thompson has no relevance in the modern world or thinks he does not need to read something in order to judge if the one sentence he pulled out was taken out of context?

> What can you say to someone that thinks Ken Thompson has no relevance in the modern world or thinks he does not need to read something in order to judge if the one sentence he pulled out was taken out of context?

That those who do not understand UNIX (or history!) are destined to reinvent it badly (shamelessly stealing from Henry Spencer).

> Is it necessary to only comment after a full reading and understanding of the base document that someone is summarizing?

In my opinion, yes. I don't know how to say it politely so I apologize for the abrasiveness: I think that when someone comments on something with little to no knowledge of the subject they are talking out of their ass. I try to avoid doing this because it seems like a waste of time for everyone involved and is therefore rude to other members of the community who want to engage in an intelligent discussion.

> I don't think this is cherry picking and pulling things out of context either.

How would you know? Without reading the document you have no idea what the context is.

I hate the "feudalization" direction that OSes are moving in -- requiring certificates, app stores, etc. At the same time, I get why it's happening.

It really mirrors the historical reasons for feudalism in the real world. When the Roman empire collapsed, people needed protection from marauding hordes. So they cozied up to the nearest powerful group, forming kingdoms. People tolerated the abuses of kings and nobility in exchange for protection from anarchic threats.

That's exactly what's happening to OSes: people are accepting feudalization in exchange for protection from malware.

Unless we find ways to really empower the user here, it's only going to get worse. We will end up with a fully feudal Internet.

I think the general message here, which a lot of commenters are missing, is that the Right Thing is way too flipping hard to get right. The fact that PuTTY itself is not distributed securely seems to underscore that even highly interested hobbyists have trouble getting it right. How can you expect everyone to be secure when you expect them to be security experts to get everything right?

Or in other words, despite clearly thinking they're the smartest people in the room, security programmers are dumber than shit when it comes to actually making it possible to use their software.

Downloading software safely is nearly impossible on windows. Probably because there's no demand for it - people who care about security don't use windows. PuTTY is one guy's hobbyist project.

(If you insist on using windows, what about downloading SUA from microsoft themselves? That way you get a working SSH client without trusting anyone you weren't already trusting)

>>>> Probably because there's no demand for it - people who care about security don't use windows.

I actually DO care about security and go to great lengths to secure my Windows 8.1 laptop and several physical Windows 2012 servers I maintain. If you know anything about security, it's actually pretty easy to secure a Windows machine.

One of the main reasons I'm stuck with MS is because there are no Adobe products available on Linux. The alternatives suck donkey balls and I hate Apple (too long of a story to get into here). Thus, I use Windows.

Full disclosure: I have several Linux boxes running Mint and an Ubuntu server, and I'm actually quite fond of Linux. Unfortunately, a majority of my development is done with Adobe tools, so I'm stuck. Give me a decent set of dev tools that mimic my Adobe tools (Fireworks, Illustrator (not Inkscape), Photoshop (not GimpShop), InDesign and now their Edge tools) and I'd drop MS in a heartbeat and go 100% Linux.

If you know anything about Windows security you will know that it is completely broken by default and there are no workarounds. I know, I know, "Citation Needed". Here you go:

> Neither the NT hash nor the LM hash is salted. Salting is a process that combines the password with a random numeric value (the salt) before computing the one-way function. Windows has never stored hashes in human-readable form, so there has never been a need to salt them.

Taken from Microsoft's own documentation: http://bit.ly/1juRmCT (Using Bitly because the link has parens and HN sometimes doesn't like that). Microsoft's argument boils down to this: Because they don't provide a GUI to view password hashes they're secure!

People gave LinkedIn and Adobe crap for their leaked password dumps not using a salt (or using reversible encryption!) and yet here we have MILLIONS of Windows installations all over the world doing the same damned thing. What's worse is that Active Directory also doesn't store password hashes using a salt. If your domain admin credentials ever get stolen the attacker is mere minutes away from cracking every damned password.
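To spell out why the missing salt matters, here's a toy sketch in plain Python (MD5 stands in for the unsalted NT hash, which is really MD4 over the UTF-16LE password; PBKDF2 stands in for any salted scheme):

```python
import hashlib
import os

pw = b"hunter2"

# Unsalted (the NT-hash situation): identical passwords always produce
# identical digests, so one precomputed rainbow table cracks every
# matching account in a stolen dump at once.
unsalted_a = hashlib.md5(pw).hexdigest()
unsalted_b = hashlib.md5(pw).hexdigest()
assert unsalted_a == unsalted_b

# Salted: each account gets its own random salt, so identical passwords
# yield different digests and the attacker must brute-force per account.
salt_a, salt_b = os.urandom(16), os.urandom(16)
salted_a = hashlib.pbkdf2_hmac("sha256", pw, salt_a, 100_000)
salted_b = hashlib.pbkdf2_hmac("sha256", pw, salt_b, 100_000)
assert salted_a != salted_b
```

Same password, same algorithm; the only difference is the 16 random bytes, and that's enough to turn "crack once, own everyone" into "crack each account separately."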

For giggles, here's a tool you can use to dump not only your saltless Windows password hashes but the actual passwords. This is because Microsoft also uses a reversible encryption scheme: http://blog.gentilkiwi.com/mimikatz

> For giggles, here's a tool you can use to dump not only your saltless Windows password hashes but the actual passwords. This is because Microsoft also uses a reversible encryption scheme: http://blog.gentilkiwi.com/mimikatz

The passwords it displays (using the "sekurlsa" module) are from probing the process memory of LSASS.EXE to view cached credentials. So it will only display the plaintext credentials of currently logged in users.

Consider for a moment a successful zero-day attack on a Windows workstation (say, an image vulnerability in Outlook because that actually happened):

* The attacker can immediately view the user's password or just look at the saltless hashes (if they want).

Now let's compare that to, say, a Linux desktop (where similar zero days are extremely rare):

* Attacker will have to install a keylogger (usually by messing with the user's .profile, .bash_profile, .bashrc, etc) and wait for the user to spawn a new shell process where they actually enter their password at some point. This could be a long, long time or never at all if the user doesn't use the shell much.

* Alternative: Because the shell option is unreliable (especially with more distros/businesses disabling LD_PRELOAD by default) the attacker will usually just run a program/script that prompts the user for their password. Unsophisticated users will probably just enter it but most IT professionals (the ones with privileged access) would immediately have a "WTF?" moment and that's very risky to an attacker.

I'd also like to mention that with things like SELinux and Apparmor it is very difficult for a userspace process to mess with the memory of another userspace process even when they're running as the same user. This makes it much more tricky to escalate privileges or obtain a user's password on a Linux desktop than Windows. Then there's the fact that on Linux there's a plethora of tools to detect and stop things like fiddling with LD_PRELOAD in user profiles.

...and if you don't like how it works you can actually change it! The source code for the kernel, the shells, the desktops, etc is there for you to do with as you please.

The plaintext passwords may well be sitting somewhere in memory even on Linux (e.g. see [1]) but I do agree that the organised caching of such passwords in Windows is a weakness.

I'm quite surprised that LSASS.EXE is not protected in the same manner as AUDIODG.EXE [2]. Just goes to show how DRM protection is considered more important than system security.

[1] http://philosecurity.org/pubs/davidoff-clearmem-linux.pdf

[2] http://msdn.microsoft.com/en-us/library/windows/hardware/gg4...

>>> Now let's compare that to, say, a Linux desktop (where similar zero days are extremely rare):

Two fatal flaws in your assumption.

1) Linux desktops in enterprise settings are incredibly rare. Linux servers? Way more common, but I can't remember any large corporation or enterprise using Linux desktops - it just doesn't happen.

2) Zero-day exploits DO happen to Linux. Would you be surprised if I told you:

"Vulnerabilities in the Linux kernel fixed in 2012 went unpatched for more than two years on average, more than twice as long as it took to fix unpatched flaws in current Windows OSes, according security firm Trustwave.

Zero-day flaws — software vulnerabilities for which no patch is available — in the Linux kernel that were patched last year took an average of 857 days to be closed, Trustwave found. In comparison zero-day flaws in current Windows OSes patched last year were fixed in 375 days."


>>>> .and if you don't like how it works you can actually change it! The source code for the kernel, the shells, the desktops, etc is there for you to do with as you please.

I'm actually surprised you made this point, considering it's been shown multiple times that malware and rootkits have been introduced into various Linux kernels. Just because something is open source doesn't mean everybody is going to take the time to examine the source code and make sure it's clean.

From 2009: http://www.darkreading.com/vulnerability/attack-sneaks-rootk...

"The attack exploits an oft-forgotten function in Linux versions 2.4 and above in order to quietly insert a rootkit into the operating system kernel as a way to hide malware processes, hijack system calls, and open remote backdoors into the machine, for instance"

"But Linux experts point out that the technique Lineberry is demonstrating at Black Hat has indeed been deployed before with the so-called SuckIT rootkit, and as far back as the late 1990s with direct kernel-object modification (DKOM) rootkits."

I can't speak for the guy you're quoting, but I'm sure this is relevant in the context of software you mistakenly trust executing such code. Naturally, if you're not logged in, there's less vulnerabilities to worry about. Hell, I can disconnect the power source if I'm after that kind of security.

However, this is actually an important exploit / factor to be aware of if you're considering installing software securely on a platform. Is it possible to blast through account credentials with a small method like this? I certainly wouldn't want to run any scripts on my machine if I knew that was possible on Linux, because merely being on the sudoers list would mean that my credentials are enough to damage my entire workstation. This doesn't even speak to the effect this would have if you got access to a sysadmin's account on a large network.

You're exactly right: This is a huge vulnerability in Windows and the really important part is that it's an architecture vulnerability. Meaning, to fix it would require changes to how Windows works at a fundamental level in such a way as to break backwards compatibility (which is sacrilege in Microsoft land).

Consider for a moment all the tools and mechanisms in place to synchronize Active Directory passwords across domains, realms, and even 3rd party systems. Every one of those would completely break if you were to implement simple change such as the use of a salt.

That's why I've been saying for many years now that, "if you care about security do not use Windows." There's no mechanism available to actually make it secure because you can't change how it works internally. The best you can hope for is some obfuscation/hacks/tricks in regards to hardening (e.g. rename Administrator account, use entirely different credentials for administrative tasks, disable zillions of insecure defaults, etc). Then just hope you're never targeted.

If just one workstation is compromised an attacker can elevate their privileges to that of Domain Administrator with a few simple steps:

1. Install a keylogger or password-dumping tool.

2. Force the workstation to unjoin from the domain, or cause some other problem that requires a Domain Admin to log in to correct the issue.

3. Use the credentials of the Domain Admin to access a Domain Controller.

4. Dump the entire password database of Active Directory.

5. Crack the password database using some GPU instances in minutes.

After step #2 the attacker basically "owns" your network and can do whatever they want. You can mitigate it by joining Windows workstations using credentials that only have the power to perform a join but this is usually just a minor setback for an attacker as there's a plethora of tools and tricks they can take advantage of to escalate to Domain Admin.

For more information on how easy all this is: http://pentestmonkey.net/uncategorized/from-local-admin-to-d...

Maybe you could run these Adobe tools in a virtual machine?

The very fact that you need to download Putty speaks to how ridiculously out of date the Windows default tools are.

It's no wonder so many web developers use OS X or Linux.

Microsoft's big blunder is that they assume that "developer" is synonymous with "Windows developer."

They have excellent tools available for Windows developers. But step out of that silo and man, just forget it. You have to fight your own system every step of the way.

This is an example of a strategy tax (http://scripting.com/davenet/2001/04/30/strategyTax.html): Microsoft's strategy for a loooong time was to pretend like Windows was the only thing that existed. When Windows was a majority monoculture, this worked pretty well for them. But now that it's just one ecosystem among many, it forces their developer-tools people to pull their punches so as to avoid undercutting their Windows people.

Glad someone made this point.

What I don't understand is why Microsoft doesn't just include an ssh client in their software, with some half-decent GUI key-management knick-knacks. That way your starting-out web developer would not have any need to consider installing a Linux distro.

Just by not having any Linux-friendly terminal they are inviting people to go dual-boot, which can end with them finding they don't need Windows or even Adobe as much as they thought they did. It makes as much sense as building the Berlin Wall.

Last time Microsoft started packaging tools you would think would be built in they got sued, can't blame them if they're once bitten and twice shy.

people who care about security don't use windows

Can you elaborate on what you mean by the pretty general term 'security'? Cause I've heard this before, and using both some Linux distros and some Windows versions I never felt particularly insecure on any of them. Maybe that's a false feeling though - but how to check it? E.g. last time I checked, during normal operations, none of my boxes had in- nor outbound connections to any peers that I didn't know of. And last time I ran a bunch of virus/malware scans on the Windows boxes everything was fine as well. But if I understand you correctly your claim is this is not sufficient?

Well the article's talking about having a way to safely install software - my general view is that while UAC and firewall-type mechanisms mean malware authors need to be more careful, they don't provide any firm security guarantees. So if an attacker can get you to run an executable they've provided, even as a non-admin user, they win. If you download executables over HTTP, and run them without taking steps to verify them, you are likely vulnerable; signature-based virus scanners are not to be relied upon, and any malware scan run after you run the program could at least in theory be defeated - if nothing else, by a Blue Pill-style hypervisor.

Now, how realistic a danger an HTTP MITM is depends on your threat model; I don't think anyone's doing this on a large scale (other than governments in places like Iran and North Korea). But if you're worried about attackers targeting you specifically, then I think this is a valid threat vector.

MS tried to address this in earlier windows with digital signing of downloads (you get a warning if you try to run an unsigned executable, and a different warning identifying the publisher for a signed one, assuming the executable in question is marked as having been downloaded from the internet), and by integrating an app store into windows 8 (I think?). If done correctly, this would prevent this kind of attack - if you only ever run signed executables, and you trust the signers, you're safe. But the fact that even PuTTY, supposedly a piece of security software, is an unsigned download, suggests that this approach hasn't really spread through the windows software ecosystem yet. (The other approach that might work is extending UAC into some kind of full containerization approach, and isolating applications more fully from each other).

Microsoft itself says that if you're really serious about security, you run the headless version of their servers, as 70% of security bugs are in the GUI.

Now, a desktop system is always going to run a GUI, but a server has its attack surface reduced by not having a GUI, and most Windows servers have a GUI.

I posted some details as to why Windows is inherently insecure at a fundamental, architecture level in the thread above this one:


Also consider for a moment that everything is executable by default in Windows. Meaning, you download a binary from wherever, double-click on it, and it will execute.

Here's how that works on a Linux workstation: You download something, explicitly set the execute bit on the file, then you can double-click on it to execute (assuming it is statically compiled for the correct architecture).

The average user does not know or care how to set the execute bit on a downloaded file. This alone is a huge hurdle for attackers to overcome.
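A quick sketch of that hurdle in any POSIX shell (hypothetical file name; a freshly created file never has the execute bit set):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# Simulate a downloaded "program":
printf '#!/bin/sh\necho hello from the download\n' > downloaded.sh

# Fresh files are not executable, so attempting to run it fails:
./downloaded.sh 2>/dev/null || echo "refused: execute bit not set"

# Only after the user explicitly opts in does it actually run:
chmod +x downloaded.sh
./downloaded.sh
```

That explicit `chmod +x` is the step a drive-by "double-click the attachment" attack relies on users never having to think about.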

I know/get that, so it kinda answers the first question.

But then the other one remains, basically for any OS: if signature-based scanners aren't good enough, how do you properly check you are secure? Is something like connection logging sufficient?

SUA was deprecated in windows 8, and removed from 8.1.

Downloading software safely is nearly impossible on windows.

The amount of hyperbole in this thread is amazing, but so far your comment represents the absolute peak of it.

I hope you're proud. It's quite an achievement.

Maybe install Cygwin for access to ssh on windows.

I came looking with find(). cygwin gets you ssh, git, nmap, etc.

People complain that OS X requires apps to be signed by Apple (by default). But in reality, it's the sanest solution to this problem.

When the OS enforces signature checking, you don't have to worry about whether it was downloaded over HTTP or who owned the domain name.

> But in reality, it's the sanest solution to this problem.

Absolutely not. It puts Apple in total control over users' software. You have to place all of your trust in Apple that the binary you're running is actually built from the source code it is supposed to be.

Now, over in the free digital world, this problem is being addressed sanely. For example, NixOS and GNU Guix are tackling the issues of reproducible builds and package signing that can use a distributed web of trust. This way, no one has to trust a single company/entity or build machine. Debian is also after reproducible builds.

Sanest if you already trust Apple with root, which if you are using an Apple computer, you do.

> Absolutely not. It puts Apple in total control over user's software.

Well, yes, but your OS vendor is already in total control over its users' software.

The only difference this makes is that you don't have to trust anyone else.

It doesn't care who the developer is though, which is an issue. As long as there's a valid signature, the application runs.

(btw, if you don't want to enable "allow all apps" on OSX, just rightclick (two finger click) and pick "open" from the context menu. it'll prompt for launch as opposed to just "no.")

I wouldn't care about OS X doing that if iOS didn't eliminate that "by default". It frightens me that Apple might someday eliminate it on OS X too.

These controls are a great idea as long as the user has ultimate control. Apple does not seem committed to the idea of letting the user have control.

The sanest solution to the problem is for OSes to fix their permission models and realize that app isolation is as important as -- or more important than -- user isolation.

Easier said than done, of course.

One solution I advocate for is more widespread adoption of Chocolatey (http://chocolatey.org).

I can

  cinst putty

and get what I need automatically.

Sure, I have to trust the maintainer, but you know, if more people used Chocolatey to install packages, more people might be able to ensure it's safe.

It's not bulletproof but it sure is better than searching the web for the right download.

The script at https://chocolatey.org/install.ps1 , which is fetched by the main install snippet, itself downloads Chocolatey over plain HTTP.

Which is then executed in PowerShell with -ExecutionPolicy Unrestricted.

There's an 80/20 thing here, though. Except more like 99.9/.1.

Yes, there is value in ensuring software is delivered without tampering direct from a trusted source. But the main problem people are dealing with is finding a trusted source for the install - one that actually delivers the software they wanted, without malware, without a confusing installer. Chocolatey solves the main problem pretty well. I can look at download counts, comments, and repos to verify what the installer is doing. There's an active forum that discusses problems or suggested improvements to packages.

It doesn't verify that there's no tampering along the way, but for most users that's an absolutely minuscule problem compared with the "Google / Click Link / Install Wrong Program and/or Malware" system.

So, let's fix that.

While that's a nice sentiment, it indicates a rather complete lack of understanding of security issues on Chocolatey's part in the first place.

Sure, they can move that particular download to HTTPS, but it doesn't instill any confidence that they've thought through the rest of their flow. As far as I can tell, packages don't even need to be signed.

As a result, I'd not be able to trust anything they do.

Which goes back to my original statement. If more people used it and knew about it, maybe more people could get involved.

This stuff is hard. And presumably we're doing this programming stuff because we're not afraid of hard problems.

One problem at a time - end users can't find the right links to download, so this solves that.

Now open some issues about the security stuff and let's get that patched up.

Telling people to not use Windows, or that "X is flawed so rather than fix it I'll avoid it" isn't moving anything anywhere. I'd rather get involved.

Ninite is another option with a similar purpose.

Correction for step #10: the PuTTY keys are on the MIT keyservers, just not under Tatham's name, although they're only 1024-bit keys: http://pgp.mit.edu/pks/lookup?op=vindex&search=0xEF39CCC0B41...

I challenge anyone to try and find a Minecraft mod without adware or spyware. Conduit and AdFly are everywhere.

AdFly is not adware. It's an ad site that shows you some ads for a few seconds before you can continue to a download; don't confuse it with adware.

AdFly is well known for being a malware distribution channel. Ads run on AdFly support Flash/JS/everything else and aren't vetted, nor is there any form of anti-abuse.

The Minecraft mod sites are full of drive-by crapware. I know, my 6yo watches Minecraft videos all the time and wants a new mod every day, so I download and install them for her ... her own machine is Ubuntu of course, 'cos the 2006 video card doesn't work in Windows any more but is just fine in X ;-)

I hope that sometime in the near future, when everything has been locked-down so much in the name of security that the situation becomes the exact opposite, someone will write an article titled "Downloading Software Freely Is Nearly Impossible". Don't get me wrong, I think security is a good thing, but I also think there has to be a balance between that and freedom. One of the most secure places to live in is a prison.

As the saying goes, "Those who sacrifice freedom for security deserve neither."

Ah! A trick question game. The correct answer is to wipe off Windows and install Linux off your flash drive, right?

Don't forget to install your perfectly-secure favorite text editor [1], of course!

The vast majority of 'attacks' on Windows are based on exploiting user trust: phishing, malicious binaries, and so on. Suggesting Linux as a fix for that is nonsensical, unless you expect every piece of software under the sun to be both included and properly verified (reality: it isn't).

[1] http://www.sublimetext.com/ -> http://c758482.r82.cf2.rackcdn.com/sublime_text_3_build_3059...

Where did you get that flash drive of Linux?

From my Linux laptop? Or I guess I could have downloaded it using an Android tablet and torrents and copied it to a usb stick. Is this one of those threads where we keep asking 'and where did that come from?' until we reach the first dollar earned from selling lemonade?

> Is this one of those threads where we keep asking 'and where did that come from?'

At some point you have to trust someone. But who should you trust, and how much should you trust them?

Most people do not think about that and so we live with an Internet where privacy is almost impossible and most people just don't care about that.

Either that, or you could inspect every line of OS source code before compiling, and then inspect every byte of the compiler executable's machine code to make sure the compiler is not infected.

Instead of inspecting your normal compiler's machine code, you can create a small special-purpose compiler to begin bootstrapping your main compiler from source. Most compilers (including GCC, I believe) are specifically designed so that they can be bootstrapped from a relatively small subset of the language. Additionally, you do not need to worry about producing an efficient executable, because you will only ever run the resulting program once.

However, there is also the risk that your host OS is compromised, in which case it may simply lie to you and do whatever it wants.

Even if you manage to guarantee OS and everything else safety, you still have to trust your own sanity.

Don't worry, I confirmed my sanity last week. I think.

Then you end up as abrasive as Theo de Raadt.

> Is this one of those threads where we keep asking 'and where did that come from?'

Yeah, pretty much. As soon as I say I trust X, then you know the first place to attack because I haven't secured it.

No. That recursive process can be short-circuited by verifying the secure hashes, and establishing the integrity of the installer binaries and source tarballs you've been using to install Linux.

And where are you going to get the hashes from to verify them?

If the paranoia runs that deep, and there's enough anxiety built into the scenario, then a substantial amount of responsibility must be adopted before embarking upon your journey.

This means your options are limited, but if you believe you have a real adversary, then your adversary defines the scenario.

Option 1: Obtain source code, and secure a build environment. Review the source code. Build from source, and test the behavior of the built product. This approach incorporates some cognitive dissonance, particularly when building crypto software from source. The axiom "never roll your own crypto" brushes closely against building a tool like PuTTY from source. How do you know you did it right? Well... does anyone REALLY ever know?

Option 2: Pay through the nose, and carefully identify the entities you accept assistance from. Do your accomplices carry any conflicts of interest? This includes your ISP, and the open source project you've selected as the authors of your tools. Do you need to pay for professional-class internet service, including pre-defined static TCP/IP routing across leased lines? Do you need to speak directly with the team that develops your software? Have you considered paying for a proprietary tool, with a service agreement? Is what you're doing legal? Do you carry liability insurance, in case damages result from your actions? Do you own life insurance?

If you're confronting an opponent, is the scale of your opponent real, or imaginary? The manner in which you arm yourself for the confrontation will be priced accordingly.

...but the short answer is: obtaining hashes over SSL from a source with a certificate that can be validated by a "trust-worthy" certificate authority is "probably" okay for most ordinary people, who aren't confronting state-sponsored adversaries.
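
That "probably okay" checksum step can be sketched concretely. This is a local demo only: the file name is a stand-in, and the "published" hash is taken from the file itself for illustration, whereas in practice you would copy it from the project's HTTPS download page.

```shell
# Stand-in for a real downloaded installer.
printf 'fake installer bytes\n' > putty-installer.exe

# Pretend this value was copied from the project's HTTPS download page.
published_hash=$(sha256sum putty-installer.exe | awk '{print $1}')

# Hash what actually arrived on disk and compare.
actual_hash=$(sha256sum putty-installer.exe | awk '{print $1}')
if [ "$actual_hash" = "$published_hash" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - do not run this" >&2
fi
```

Note this only moves the trust to wherever the hash was published; if the hash and the binary are served from the same compromised host, the check proves nothing.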

Nope; you can't trust SD cards either, not even if you copied things directly from a host that you trust: http://www.bunniestudios.com/blog/?p=3554

I've become acutely aware of this over the past couple days. I'm setting up a new laptop, using VMs for all work. Getting VMware is easy - it's signed. But from there? Things start sucking. I need to fix my "ThinkPad" fan and trackpad (new ThinkPads don't actually have a middle button despite the dots appearing like they are one) - gotta download unsigned blobs.

Since I want as little software installed on the host as possible, I'm going to have to start a VM on something like Azure (easiest) with Visual Studio, and build my own copies of these tools if possible. The culture of building stuff on Windows is fairly weak, so I imagine I'll run into all sorts of issues.

It's pretty embarrassing that Windows doesn't ship with a lightweight way of creating "VMs" to increase security. Something like Sandboxie would be a welcome piece of OS functionality.

The JS crypto comment is off-base. The argument about JS crypto is that it's pointless because it's only as strong as TLS - it doesn't provide anything else, and it's very easy to get it wrong and end up worse off (due to ease of XSS and whatnot). Sandboxed execution is a fantastic thing, and even MS tried that with .NET and its million code-access-security policies. And now everyone does that with Android/Windows Store style permissions (although not as fine-grained).

As far as I understand it, even HTTPS and its infrastructure have plenty of holes.

Remember when some people broke into a certificate authority and stole master keys, so that a huge number of certificates were compromised? I don't know if that has been fully repaired yet. There are also many authorities that issue certificates without even the simplest identity check. Such certificates are a security risk of their own.

I also don't know how well (or badly) certificate revocation currently works. I dimly remember (I am not current on these things) that there were problems with existing browsers, infrastructure and so on ...

And even if those things worked fine ... as far as I know, there are holes in the implementations, depending on which algorithm combination is used.

So there are so many attack vectors that even in the best case (HTTPS works fine, you have a domain that belongs to the correct author, you have checksums, and you actually heed your browser when it warns about the certificate -- who in the internet age cares when the browser says a certificate has a problem?) there seems to be no security ...

(And I am not even speaking or thinking about governments spying on us all)

So basically he does a web search for "Windows ssh client" (generic, SEO-spammed terms) when he knows he wants PuTTY (specific), and is surprised that the official PuTTY page is not the #1 hit.

I'd hardly call that a bulletproof argument.

>> Note that, suddenly, Web Crypto is starting to look damn good

OK, so we can also boot Linux in a browser; apparently you can do just about anything in JavaScript if you're willing to spend the CPU cycles to do it.

Why? Take the Chromebook as an example: why move everything into the browser so that the OS is minimized or even removed? You're still going to face the same software problems.

He addresses that. The browser has a different security model than the OS.

The OS's model is based on the user being the unit of security. If a user runs a piece of software, that software can interact with all files owned by the user. It can make web requests to anything.

The browser's model has the webpage, not the user, as the unit. Each webpage is sandboxed from the others. If one webpage is malicious, in theory it cannot modify the user's files or even other webpages.

The difference in model makes a malicious webpage significantly less scary than a malicious program.

Your example, of running your whole OS in the browser, is unrealistic; in reality you'll be running each piece of the OS in a different isolated tab.

This model can work since the web was built for each site to be independent and self-contained... We've already gone too far down the rabbit-hole of native programs being extremely powerful to easily fix that.

The OS might not be lost though. You can run scary software in a VM. You can run each program in a separate chroot. Perhaps soon you could just spin up an lxc (with docker perhaps) for each different program you want to run. These methods of running software all basically transform the OS into using the browser's model.

It's also worth mentioning that the browser model has inherent security flaws for as long as the executable lives on external servers; you have to find a trusted channel to fetch it every time, whereas a native program only has to be verified once after downloading.

>> The browser's model has the unit as the webpage, not the user.

The user loses control, the user loses.

>> This model can work since the web was built for each site to be independent and self-contained...

https://developer.chrome.com/extensions/samples "Content Script Cross-Domain XMLHttpRequest Example"

>> whereas the program only has to be verified once after downloading.

Opening the door to hackers who find ways to infiltrate that program _after_ that check has been done.

A challenge: what would the best remedies to this situation be? Should we be pressuring OSes to come with PGP ware and other basic tools by default, for instance?

Signatures all the way down :( And reliable methods to verify those signatures without MITM attacks.

Yay, someone agrees TPMs can be a good thing!


Some people do; mjg59 is working on boot attestation as an anti-boot-sector-virus mechanism. The underlying absolutely critical question is who controls the TPM hardware.

I absolutely feel that way personally; I just get frustrated by the common response from people on HN etc. TPMs are the most secure / most mature way to verify your software's integrity.

Well, in my case knowing that I went to college with Simon, Richard et al, and could most likely arrange a face to face meeting / key exchange at the next Debian barbecue (hi Steve!) ... Of course, there are still plenty of things to be paranoid about (if I were the faceless security agency I know whose parties I'd have been going to 20 years ago)

Write some new spec and prefix the name with "Web". Web stuff is awesome alright? That's the solution, so simple.

It will take a combination of things.

Sandboxing applications would be a good start. That way you don't need to trust application developers.

But surely you must trust your SSH client, and thus its developers.

Right now it is pretty much a binary choice: trust and install, or don't trust and don't install. I trust the developer to have implemented the SSH protocol correctly -- I have to trust them that much. But I don't see why I should have to trust them enough to give them full permissions on my machine. Sandboxing (with permissions) would allow the application to run and access a port, nothing else.

An official app store? If well-run, it could be a good answer.

The title is missing a word: it should be "Downloading Windows Software Safely Is Nearly Impossible". Similar remarks would apply to OS X for any software not supplied by Apple. Fortunately, Linux distros have package managers.

I prefer to search on Wikipedia; it has a link to the official website in a predictable place.

e.g. http://en.wikipedia.org/wiki/PuTTY points to PuTTY's official website: http://www.chiark.greenend.org.uk/~sgtatham/putty

PuTTY is open source (MIT), so one could build it from source and even audit the code. Nevertheless, thanks for pointing it out.

DuckDuckGo also flags results as "Official Site," which is derived from Wikipedia, as you can see here: https://duckduckgo.com/PuTTY

But not if you go to https://duckduckgo.com/putty

Do you first check the history to make sure that nobody has inserted a malicious edit right before you visited the page?

"It’s currently owned by someone named “denis bider”, who presumably just likes to domain-squat on other people’s product names and provide links. "

Another slam against squatters as usual. I really really wish people would stop with that already.

Whoever Denis Bider is, he has no obligation to even put up links to PuTTY. He could sell the domain name, maybe even to the PuTTY people, though he doesn't appear to be "using" it (by the HN and generally accepted definition of "using"). In other words: http://putty.com/

For the last time: there is no requirement to use a domain name, and there never has been. And there are many people and companies who just sit on names and don't want to sell (because they don't need the money).

Talk to google about duck.com and see if you can buy it. You won't be able to.

Anyway he could put up a webpage as his personal blog or any number of things.

Just because you happen to have a product using a particular name doesn't mean you own that name in every TLD (.com .net .org .info .us .biz and so on).

.org isn't even .com, nor as desirable, except perhaps for non-profits.

I'm feeling this severely with our build tools at the moment. I use Maven to build all of my java projects. Maven will pull down library dependencies from the Maven central repository or other independent repos that you may have configured. I noticed recently that none of my Maven clients were validating checksums on the libraries that we pull down.

This came about when the domain for codehaus.com expired and it transferred over to a parked site that responded to all requests with advertising. I ended up with a bunch of HTML files where I was expecting library jars. In this case it was merely annoying and caused some tests and builds to fail. If they had instead been providing malicious code that almost looked like legit libraries it could have gone un-noticed for a long time.
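
Maven repositories do publish a .sha1 file next to every artifact, so the check the client skipped can be done by hand. A sketch with a stand-in jar (the artifact name is illustrative; real files live under e.g. repo1.maven.org/maven2/, and here the .sha1 is generated locally rather than downloaded):

```shell
# Simulate an artifact plus the .sha1 file a repo would serve next to it.
printf 'fake jar bytes\n' > junit-4.11.jar
sha1sum junit-4.11.jar | awk '{print $1}' > junit-4.11.jar.sha1

# Compare the repo-published checksum with what actually arrived.
expected=$(cat junit-4.11.jar.sha1)
actual=$(sha1sum junit-4.11.jar | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
    echo "artifact intact"
else
    echo "artifact corrupted or tampered with" >&2
fi
```

Maven can also be told to hard-fail on a mismatch with `mvn -C` (`--strict-checksums`); but note a checksum hosted next to the artifact only catches corruption, not a malicious mirror that rewrites both files.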

These concerns are remarkably similar to my recent experience[0] with the Apple software update, which nobody on StackExchange seems interested in answering. I'm still very much interested in educated opinions on that matter, if anyone cares to take a look. I'd be particularly grateful if someone with knowledge of TCP could explain to me whether or not all those duplicate ACKs are of concern. (Note that I understand the question's assertions on code signing may not be correct)

  [0]: http://security.stackexchange.com/questions/52357/what-is-going-on-with-my-download-of-the-recent-apple-security-update

There is absolutely nothing wrong with HTTP. You are supposed to verify signing keys after you download them anyway, regardless of your source and transfer method.

Yes, that may often be hard, or nearly impossible. WOT sadly often only works for people you can personally verify anyway.

(With HTTPS, you had better hope the author chose a reputable and more expensive certificate authority that can be trusted not to give out certificates without proper proof of domain ownership. Otherwise, verifying the website certificate may be as hard as verifying personal keys.)

Anyone else think it's kind of silly that he's a Mac guy (all of his screenshots are of old OS X) and his example is downloading PuTTY? Recent OS X versions all come with an ssh client.

First sentence:

> Let’s say you have a brand-new Windows laptop and you’re just oh, so happy.

It's a hypothetical situation (but one which Windows users will likely encounter if they want to use ssh).

I've looked around, and the only free SSH tool for Windows that has a single HTTPS mirror is 'kitty': https://www.wuala.com/9bis.com/public/build/

There are binary OpenSSH releases for Windows, but they're all hosted on sites that don't do HTTPS. Free Windows software in general seems to lag behind on security best practices when releasing or mirroring software.

  Downloading *Putty* Safely Is Nearly Impossible.

It has always seemed strange to me that PuTTY, which is still probably the most-used ssh client for Windows, is available through such strange distribution methods. I wholeheartedly appreciate the time the author took to rant-ishly dissect this to a most myopic level. Even though it may reveal a most tortured and disturbed psyche.

Step 18 is probably the inevitable step that follows thinking about something too much.

You could download the installer and notice that it is signed by Simon and then feel secure. But writing a long rant works too.
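
For the curious, the mechanics of that signature check can be demonstrated locally. Everything here is a stand-in: a throwaway key and a fake installer file, whereas real verification would import Simon Tatham's published key and run `gpg --verify` against the actual installer and its .sig file.

```shell
# Use an isolated, temporary keyring so nothing touches your real one.
export GNUPGHOME="$(mktemp -d)"

# Generate a throwaway signing key (demo identity, empty passphrase).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Demo Signer <demo@example.invalid>' >/dev/null 2>&1

# Stand-in for the downloaded installer, then sign it detached.
printf 'pretend installer bytes\n' > putty-installer.msi
gpg --batch --pinentry-mode loopback --passphrase '' --detach-sign putty-installer.msi

# Verification succeeds only if the file is byte-for-byte what was signed.
gpg --verify putty-installer.msi.sig putty-installer.msi 2>&1 \
    | grep -q 'Good signature' && echo "signature OK"
```

Of course, a "Good signature" only tells you the file matches the key; you still have to establish that the key really belongs to the author, which is where the article's rabbit hole begins.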

Is the author seriously advertising Google Chrome as a safe way to download software? The very browser known to exist to increase the reach of Google's surveillance of what people do on the web hardly seems like it could be part of a solution here.

Those who give up privacy expecting to gain download safety will lose both.

I noticed this the other day as well. I was trying to download GnuPG. GnuPG.org, including the download page and checksums, is served entirely over http.

Even if it is open source, am I expected to pore over thousands of lines of code to verify that it hasn't been compromised?

Now let's talk about how we're supposed to sign web apps in a way similar to Debian package distribution, so that you can actually trust one you download online and trace the updates you might receive by revisiting the page.

I wrote a blog article covering many of these issues. I am no Bruce Schneier, but I think there are good solutions:


This reminds me of "The Ken Thompson Hack": http://c2.com/cgi/wiki?TheKenThompsonHack

Flash Player updates are offered for download over insecure HTTP. Meanwhile you can't run Flash until you install the update (I assume it was a security fix).

