But this attack is nasty - even if one follows all of the security measures I mentioned, it provides no protection. The attackers are not targeting security researchers personally but only their exploits. If you have a single "untrusted" experimental system for reverse-engineering and testing all your exploits, then once it's compromised, everything can be stolen. A better-compartmentalized environment is possible (e.g. one box for all untrusted files from the web, one for personal development, and no exchange of any non-text files between them), but it's an order of magnitude more difficult to use.
Like so many people, James is pretty confident that anything he doesn't understand (including, apparently, elliptic curve cryptography) is probably unimportant, and that the solution to his pressing problems is just to make easy something he knows isn't possible (remembering a separate strong random password for every site), so the people working on stuff James doesn't understand ought to work on that instead.
This piece was written, I think, slightly before BCP 188 ("Pervasive Monitoring Is an Attack"), but to me it feels as though that's the answer to it. Yes, the NSA (or Mossad, but realistically the NSA) could definitely win if that's what it came down to: you or them. But that's very rarely the situation. Their budget, though large, is finite, and your value, even if large, is also finite. If snooping on every word said on the telephone by an American costs 5¢ per citizen, why wouldn't the NSA do it? Worth a shot. But if it costs $5000 per citizen, that's gonna blow their budget, and for what? So that's what BCP 188 is about: the question isn't whether you're dealing with "Mossad or not-Mossad", it's whether you are the Protagonist or just another extra. We can't make it impossible for a sophisticated and resourceful adversary to succeed, but we can make it very expensive, so that they are obliged to choose their shots.
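The budget arithmetic here is worth making concrete. A back-of-the-envelope sketch (the per-citizen price points are the hypotheticals above; the population figure is my rough US estimate):

```python
# Hypothetical total cost of blanket surveillance at two per-citizen
# price points, to show why per-target cost decides feasibility.
POPULATION = 330_000_000  # rough US population, my assumption

cheap_total = POPULATION * 0.05      # 5 cents per citizen
expensive_total = POPULATION * 5000  # $5000 per citizen

print(f"at $0.05/citizen: ${cheap_total:,.0f}")      # $16,500,000 - pocket change
print(f"at $5000/citizen: ${expensive_total:,.0f}")  # $1,650,000,000,000 - no budget survives this
```

A few orders of magnitude in per-target cost is the difference between "why not snoop on everyone" and "pick your shots carefully".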
The end result is that they split surveillance into "cheap" blanket surveillance and targeted surveillance for the targets deemed valuable enough, while also striving to drive the per-target price down.
Mass surveillance offers a good opportunity for economies of scale, and gives you a very granular estimate of how valuable a particular target is.
I can't see a way to interpret this that doesn't come back to "fix passwords and stop bothering with this other stuff". In some forms (e.g. satire) you are supposed to sneak in an actual point you wanted to make (e.g. Swift's "Modest Proposal" lists the things Swift thinks would actually work, pretending to dismiss them as inferior to eating babies). But I believe in burlesque it is considered satisfactory just to point and laugh. I didn't laugh; maybe that's on me.
One of these articles proposes that the problem with smartphones is that they aren't very good phones. In this "satirical" form it proposes a pyramid-shaped "hierarchy of needs" for phones, with "make phone calls" as the most important element at the bottom.
Perhaps in 2014 that felt like an insight, to James Mickens or to his readers. I don't think so, but maybe 2014 is longer ago than I think it is, and maybe nobody had noticed back then that (and I apologise if this is an amazing insight to you now):
Calling them phones was an excuse. People aren't very good at figuring out what they actually want, so telling people we're going to offer them network-capable handheld computers wouldn't work; they don't realise they want those. So you say these are "phones" and then let them gradually figure out that actually they have never wanted to make a telephone call in their life, but they did want a handheld computer to access the network.
The form factor makes no sense for a phone. Clearly a rectangular sheet of glass isn't the right shape for a phone. But it is a good shape for a handheld computer. Which, again, is what you actually wanted anyway.
It is unclear why anyone would run that code outside a sandbox, even more so a security researcher.
Come on, it is a single click to launch a Windows Sandbox... Was the source extremely trustworthy? Fine, but that would be the same as giving your password to your spouse.
People have been using testbeds with dedicated hardware since day 1 of computing.
If you always assume your data is at risk, and that your data can become your person, you'll take care of it. In the modern era, we're almost post-goods. I couldn't care less if someone broke into my house and stole my TV/monitors/stuff - and it's happened. But the last thing I'd want stolen is data I hold near and dear.
The video will show how your data can be suddenly destroyed if all you have are digital backups.
Seeing this video led me to buy a laser printer so I can have paper backups: http://ollydbg.de/Paperbak/
I mean I guess it just comes down to priorities. I can't think of anything I want to keep so badly I'd bother with that.
All this important data (the directory where I keep my git repos) is less than 90 MB in size. At 500 KB/page I could store everything in about 180 pages (90 sheets if I print on both sides), which is not that much.
Since I store backups of these git repos in a ZFS dataset, I can do incremental backups (with zfs send -I) every 6 months or so. By my estimate, I would be adding at most 1-2 pages per year.
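The sizing above works out like so (a trivial sketch; the ~90 MB repo size and the ~500 KB/page density are the figures from these comments):

```python
# Rough paper-backup sizing using the figures from this thread:
# ~90 MB of git repos at ~500 KB of data per printed page.
repo_size_kb = 90 * 1000   # ~90 MB of git repos
page_capacity_kb = 500     # PaperBak-style data density per page

pages = repo_size_kb / page_capacity_kb
sheets_double_sided = pages / 2

print(pages)                # 180.0 pages
print(sheets_double_sided)  # 90.0 sheets when printed on both sides
```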
Also, it is not like I expect to ever need to read it back in. I also keep redundant backups of everything with multiple cloud providers and on external HDs. These papers would be a last resort if everything else fails.
In that world, would you really want this data back so badly that you're going to scan it all in from a giant pile of paper?
Again, I'm not saying that your data isn't actually that important to you, I'm just personally having a hard time imagining any data is or ever will be that important to me.
I can't answer what I would do if the world was in that state, but I'd rather have the ability to restore my important backup than not.
I might be wrong since I haven't done it yet, but creating the paper backup doesn't seem like a huge time investment.
The disastrous consequences are all in the area of losing the power grid itself.
When the power subsequently went out for two days, my parents got quite tired of me telling them it was an alien invasion.
But how much can one realistically print?
500 KB per page isn't much by today's data-hoarding standards.
I guess you don't keep images?
BTW, I still haven't backed up anything to paper. I bought a laser printer last year to follow that path, but ended up not doing it yet :/
I was reading Wikipedia today about turbo codes, Viterbi decoding, and similar things, and it would be interesting to me to read how a very smart person would solve the problem of maximizing data stored on paper with reasonable error correction.
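To get a feel for what error correction buys you, here is a minimal sketch (mine, not from any paper-backup tool) of a Hamming(7,4) code: 4 data bits become 7 stored bits, and any single flipped bit per block can be corrected. Real paper-backup schemes use much stronger codes (e.g. Reed-Solomon) to survive burst damage like smudges and scanner skew:

```python
# Minimal Hamming(7,4) sketch: encodes 4 data bits into 7 bits and
# corrects any single bit flip. Illustrative only.

def encode(d):
    """d: list of 4 bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> 4 data bits."""
    c = list(c)
    # Each syndrome bit re-checks one parity group; together the three
    # bits spell out the 1-indexed position of the error (0 = no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 * 1 + s2 * 2 + s3 * 4
    if error_pos:
        c[error_pos - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
damaged = encode(word)
damaged[5] ^= 1                 # simulate one bit of print/scan damage
assert decode(damaged) == word  # single-bit error corrected
```

The cost is 75% overhead for one correctable bit per block; the clever part of codes like turbo and Reed-Solomon is getting far better protection for far less overhead.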
Windows Defender actively interferes with security-software development. I've noticed that lots of developers either disable Windows Defender or try to whitelist their development folders. Whitelisting doesn't always work... Windows Defender will still block certain behaviors such as token stealing and in-memory attacks.
Also... software developers often have 'test-signing' enabled and all kinds of other security risks that are unique to software development.
The 1980s security paradigms no longer really work.
Try to install Android Studio (make sure to initialise it!) or Eclipse on your system and tell me where it puts its executables. Go and indulge in the horror that your local adb binary can be silently replaced without your knowledge. While most mobile operating systems are designed to isolate components from each other, we collectively hinder effective security by leaving this low-hanging fruit on our systems.
Sorry, but I often see countless developers who are not that concerned about security and then ask themselves why users are so irresponsible about it.
But anything running as your user account can also steal your SSH keys and gain persistence via cron or .bash_profile or any of a multitude of other methods. How does a writable adb binary let an attacker do anything more than they could do before?
I have no answers on Linux sadly (though you can disable user-side cron so that cron entries are not editable), but on Windows you can lock down your system by not allowing executables to run from non-whitelisted directories (AppLocker), so you can only execute programs from specific directories. It does not really prevent stupidity ("I accidentally ran NotAVirus.exe from our build directory"), but it frustrates attackers trying to run their own executables (especially since most drive-by attacks rely on the Downloads folder or a temporary folder being executable). Of course, you need to monitor your build directories for harmful executables, but you significantly reduce the attacker's footprint on the system. Additionally, SSH keys can be stored in an encrypted format that requires a passphrase to decrypt.
We need capability-based privilege separation on network-connected machines.
The question is "how"? If you are indeed targeted by state-level attackers, you need more precautions because you are being actively attacked. However, as I said, there is low-hanging fruit. I have written a more extensive reply (https://news.ycombinator.com/item?id=25914440), but the short answer is that you can actually prevent code execution in arbitrary areas on Windows. Upon further research, I now know there are analogous (and maybe competing) tools for Linux: AppArmor and SELinux. Yet I will still see someone reply "there is no solution to this mess" despite a solution existing, just unused by developers. And it is made harder by other developers whose choices make security-conscious developers' lives harder.
> We need capability-based privilege separation on network-connected machines.
Yes, you're correct on that. But abandoning reliable-but-unused security for a shiny new thing seems wrong on so many levels.
That's likely because, not unlike the rest of the software industry, a huge number of them are just in it for the money, and have the minimal knowledge required to do the job (i.e. find exploits, get paid).
Interesting. Call me naive, but couldn't someone just investigate this right now? Like get the link, act as Chrome, and debug the whole process, from following the link up to getting compromised?
I am almost willing to bet $$$ that they would be fine if they had JS disabled.
That also reminds me of something I remember reading about many years ago on an RE site: "You hear about all the bugs they fix in each update. You don't hear about all the new ones they introduced."
I am fully willing to bet $$$$ that they would be fine if they had air gapped their computer.
Unfortunately, disabling JS is only marginally more practical than completely foregoing access to the web; and it is only going to get worse as more sites and services rely on JS.
Web devs keep repeating to themselves that no one browses with JS off, but you can't see the ones that have it off if all you use to survey them is JS itself.
In the middle of startup hustle it makes sense for people to start ignoring this. But if you adopt a mindset of serving users, you'll quickly see why they went with this approach: progressive enhancement of the site based on the capabilities that work, with a fully functional experience extending from lynx upwards.
Give it a week and you’ll find yourself very rarely having to make policy changes. We spend most of our time on just a handful of sites.
Have you tried white-listing the sites that actually need it? Works pretty well.
I've done it for years, and it's fine if you are willing to put in 2 seconds of effort here and there; besides some redirects for online purchases, it's mostly painless.
Also, no one needs to be making bets here; they were seemingly hit by something that Google still can't identify. From the article:
> the researchers have followed a link on Twitter to a write-up hosted on blog.br0vvnn[.]io, and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions.
The problem is that you eventually have to enable scripts when you encounter a site that refuses to work with scripts disabled. There are enough such sites around that most people would instinctively enable scripts whenever they see a broken site, which nullifies any security benefit.
Of course, I believe Google is somewhat complicit in this, since its own intrusive tracking needs JS, so it encourages everyone to make sites that require it and users to leave it on by default.
> I am almost willing to bet $$$ that they would be fine if they had JS disabled
I would guess that's because you do, and that's what you use as a proxy to feel safe about the unsafe web. All of us even slightly aware of the security problems of browsing the web have something (for myself, I immediately thought: I should add those domain names to my PiHole, just to be safe), but that's the problem with asymmetry: it costs you a lot to defend against everything; it costs a determined attacker very little, in a marginal-cost sense, to spread their efforts across threat vectors. You're pretty sure disabling JS makes things safe. Do you allow CSS? That's another vector. What about all those cool new HTML5 APIs we have now? Heck, it's tough to get developers to think at all about accessibility; what are the odds the developers who have to support that under-appreciated field put in a ton of extra time to make sure these features are fuzz-tested? Has anyone tested what you can do with the subtitle track of an HTML5 video player?
I bought a copy of The Tangled Web a decade ago and never finished it because the whole thing was just too depressing to really come to grips with and instead I too have uBlock, a PiHole, some other Chrome extensions and a Mac as a sort of double-condom to make me feel safe without making me much safer. We are all the US military, happily stockpiling the wrong assets to fight the last war.
Did you mean “uBlock Origin”, from the original author of uBlock?
The attack vector was spearphishing followed by convincing the target to compile a project, where the makefile contained malware. No JS involved.
>In addition to targeting users via social engineering, we have also observed several cases where researchers have been compromised after visiting the actors’ blog.
In other words, they were compromised after simply visiting a malicious blog. This is why the blog post emphasizes the Chrome vulnerability program, and specifically mentions that the victims were running a fully patched Windows 10 machine and fully patched Chrome.
>At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions.
RCE exploits on web browsers are typically written in JS. I would also bet $$$ that if they had JS disabled, they would not have been compromised.