> Then we found Dropbear, a very small SSH program, that you can run via the initial ramdisk (initramfs). This means we are able to allow external connections via SSH. We don’t have to fly to Iceland to boot our server, yeah!
Or you can use Mandos to be able to sleep through a reboot: https://www.recompile.se/mandos
Disclosure: I am the co-author of Mandos.
edit: blue men*. Forgot that green police uniforms are an outdated local thing.
I’d say there’s no municipal police force in the country whose regular law enforcement is technically sophisticated enough to try to attack a running machine. So if you are suspected of having information relevant to a murder or drug deal, the seize-and-examine-one-month-later scenario is extremely likely.
A handful of states and maybe one or two municipalities may have a computer crimes division that would be sophisticated enough to know to worry about encrypted disks, but they’d have to hire expensive outside consultants to carry out the actual attack and would do so in exceptional circumstances rather than routinely.
Finally, if your threat model includes the national security apparatus then, yes, this is probably too much of a compromise.
I wouldn't assume that this is still the case, at least in Germany. I think the days of police shutting down the devices they have a search warrant for are pretty much over, unless they are confident enough that there isn't disk encryption in place. If there is, they keep the device from going to sleep and call for an expert. If they are actually targeting server infrastructure, there will be people with technical expertise involved.
If they were the first to respond to a tech crime, they could probably create a mess. But in theory, they could obtain their own search warrants, conduct searches, recommend prosecutions, etc.
1. I have yet to see any indication that this is the case.
2. If you assume such technical skill in your adversaries, you must figure that they could just as well open your servers and read the file system keys right off the memory by running wires to the memory bus. In which case Mandos is no worse than no Mandos.
>1. I have yet to see any indication that this is the case.
Taking memory dumps of running systems has been state of the art for about 10 years. There is a heap of commercial forensic software used for extracting everything from BitLocker to (back then) TrueCrypt keys. In 2014 it was published that the German police used ElcomSoft as well as Passware, both capable of extracting keys from running machines. I doubt you even need experts for that anymore. Police officers were already on alert years ago to prevent you from shutting down your devices in cases where they assume there might be evidence on them. I mean, we are living in a world where the police are cloning smartphones during routine stops, and, on the more sophisticated side, where the police are allowed to infect suspects of even low-level drug dealing with trojans. The police aren't stuck in the 90s, and they are aided by a rather large sector producing forensic software as well as spyware.
I understand that your product offers great convenience; however, it's only reasonable to point out that needing to put in your password after the device was shut off is not a problem but working as intended. Encryption will only protect you if the device is turned off. Circumventing this is not necessarily a good idea.
edit: You also don't have to assume law enforcement. The tools are affordable for private entities too, and marketed towards, for example, private detectives.
You are, in effect, arguing about whether encrypted disks are useful at all, which is a debate I’m really not getting into at this time, since it has nothing to do with Mandos.
Not at all; encrypted disks are very useful, you just can't show anyone the content.
The sole purpose of encrypted disks is to prevent somebody else from reading them once they are turned off. Your product turns them back on and makes them readable again without your interaction. Turning off those devices was, however, the only protection you had. So now you also need to turn off a second server.
And once someone has access to your running machine, they can read it. Be it by simply copying the files or by getting the key via some software with a big "HACK" button. https://www.elcomsoft.com/efdd.html
I’m not sure I follow. If I understand you correctly, you want a panic button to press in case of physical intruders; without Mandos, the power button of a server served this function, and you're worried that Mandos makes you lose this button? Not to worry. The simplest solution is to have two local servers be the Mandos server for each other, each enabling the other to reboot unattended; the panic button is then the power button on the power strip powering both servers. Once both servers are off, the system is effectively locked.
If you insist on having a remote Mandos server (which is not really the use case it was made for, but it is supported), you could always automate some button to signal the remote server to disable (or outright remove) the client, thereby denying all access to the secret password. The Mandos server process is controllable via D-Bus, so any program can be made to signal the Mandos server in this way.
With a programmable power strip you could even have a single button doing both things. So there’s your panic button back.
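The D-Bus route can be sketched roughly like this. The bus name, object path, and method name below are assumptions based on how Mandos describes its D-Bus interface; check the introspection data on your own server before relying on them.

```python
# Hypothetical panic button: tell a running Mandos server to disable one
# client over the system bus, denying that client its secret password.
# Bus name, object path, and method name are assumptions, not verified
# against a live Mandos installation.
import subprocess

MANDOS_BUS = "se.recompile.Mandos"  # assumed well-known bus name

def disable_client_cmd(client_path: str) -> list[str]:
    """Build a dbus-send invocation that disables one Mandos client."""
    return [
        "dbus-send", "--system", "--print-reply",
        f"--dest={MANDOS_BUS}",
        client_path,                           # e.g. "/clients/webserver"
        "se.recompile.Mandos.Client.Disable",  # assumed method name
    ]

def panic(client_path: str) -> None:
    """Fire the panic button: the client can no longer boot unattended."""
    subprocess.run(disable_client_cmd(client_path), check=True)
```

Wired to a physical button or a programmable power strip, this gives back the "kill it now" behaviour the power button used to provide.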
> And once someone has access to your running machine he can read it.
That has nothing to do with Mandos. Anyone with physical access to a running machine can already, in theory, access the memory and get the encryption key. Mandos introduces no additional threat from a theoretical sophisticated attacker.
One is the scenario where both machines are turned off during theft or fishing expeditions from law enforcement. In this case running encryption, any encryption, is better than no encryption, and Mandos allows the administrator to install it once and forget it.
I take it that your concern is that someone has turned off a machine but forgot the Mandos server, and some time later an attacker has the capability to break into the encrypted running machine through unrelated vectors. Mandos does not protect against attackers capable of breaking full disk encryption on running machines.
The threat model where we assume the attacker can break into a running machine that is encrypted looks very different. Devices with mitigations against this tend, in my experience, to have things like movement sensors, light sensors and GPS, with plenty of kill switches that fry the internals if anything looks at them funny. Those kinds of threat models are fun, and I enjoy talking to the engineers that work on such servers, and to them I don't recommend Mandos.
Where I do recommend Mandos is to the administrator who, for whatever reason, is not physically near the servers they maintain. Maybe they work at multiple buildings or travel. They have servers with sensitive services which won't be turned off for long periods but still need reboots once in a while, and which would otherwise go unprotected because there is no one available to type in a password at the physical location.
Another scenario is when people want to treat a location as safe and want protection in transit. In that setup the client can be treated as any unencrypted device for all purposes, except when taken outside the local network, at which point it behaves like an encrypted device whose password the person carrying it might not even have. In theory this makes for a great method to transport a work laptop between offices in different countries, booted into a throwaway operating system.
As with any security, thinking hard about the threat model is essential. What are you protecting, who is the attacker, and what are the risks.
Disclosure: I am also a co-author of Mandos.
Is there much of a plug-in ecosystem for this to support various auth backends?
I boot my machines and then ssh into them with a script that pipes a local gpg -d to them to unlock their disks, mount appropriate filesystems, and start necessary daemons. Unless the daemons are poorly-behaved, no data that is specific to me aside from the host's SSH key is ever written to the root - only the OS/distro files.
I thought this is what everyone does.
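The approach above can be sketched as a small pipeline builder. Host, key file, device, and mapping names are placeholders, and the remote side is assumed to run cryptsetup via sudo:

```python
# Sketch of the gpg-over-ssh unlock: decrypt the LUKS passphrase locally
# and pipe it straight into cryptsetup on the freshly booted host, so the
# key never touches the remote disk. All names here are placeholders.
import shlex

def unlock_pipeline(host: str, keyfile: str, device: str, name: str) -> str:
    """Return a local shell pipeline that unlocks a remote LUKS volume."""
    remote = (f"sudo cryptsetup luksOpen {shlex.quote(device)} "
              f"{shlex.quote(name)} --key-file=-")
    return f"gpg -d {shlex.quote(keyfile)} | ssh {shlex.quote(host)} {shlex.quote(remote)}"
```

`gpg -d` prompts locally (or uses the agent), and `--key-file=-` makes cryptsetup read the key from stdin; mounting filesystems and starting daemons would follow as further ssh commands.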
Not that I'm some professional security researcher or anything, but congratulations, color me impressed.
> EasyPrivacy "[...] including web bugs, tracking scripts and information collectors"
Simpleanalytics is all of the above. There is no arguing this.
On their page they say (w/r/t Google Analytics) "you maybe use the anonymization feature. This is a step in the right direction, but the sad reality is Google can still collect all the information" -- you would have to trust them [Simple Analytics] too to not collect or save, and to delete accordingly, all the information sent to them.
It is a third party information collector, period. It tracks visits, it tracks pageviews, at minimum. I am not against them as a company or product, but I believe that EasyPrivacy is 100% correct in blocking their domain.
Therefore I expect my blocker to block a 3rd party tracker which by definition has the capability to track me across websites. It makes that kind of tracking technically infeasible which is the level of protection I want.
I already provide all the necessary data in my request to the first party server. It’s all in the server logs.
End of the day, at least in my opinion, it is acceptable to be added to blocklists, and it is acceptable to not use those blocklists and hit the beacon.
Yes there is. SimpleAnalytics claim that they "don’t track visitors of our customers’ websites".
Are they lying? If not, how is it a tracking script?
Otherwise, why do they need your browser to make a request in the first place?
FWIW I agree with dylz. It is crazy to suggest that any company doing analytics has any sort of right to make my software connect to its servers.
If Easylist did not block all unnecessary requests, I would find a list that did.
No analytics company has a right to make you connect to its servers. But, that isn't the case at all: You visit some website. The operator of that website has contracted with the analytics company. You ask the website for some information and it replies with that information as well as a request to ping the analytics company that the website contracted with. It's apparently "crazy" to comply with that request, all while still consuming the information that you asked for and received.
Ad-blockers wouldn't be controversial if they worked by navigating away from any site that displayed ads or included a tracker. Of course, that would be inconvenient for the user, so, they don't do that. The whole point of an ad-blocker is to allow a user to consume a service in a manner that the entity running and paying for the service didn't intend. We could at least acknowledge that. Your position, however, seems to be that doing what the service you are using is asking you to do in exchange for its information is "crazy" - and that is straight up ridiculous.
A browser is, after all, a user-agent. The web was designed this way for a reason.
In any case, a site server logs should provide ample information for analytics, unless they are measuring mouse movements or scrolling etc.
This is a war; a never-ending war of users vs. adtech. I realize how much defense I have to run to protect myself even moderately, and I'm still losing.
Adblockers are not controversial. Stop pushing adtech narrative and passing it off as an established fact. HN is the last place on earth where people would fall for this kind of crap.
The internet was built to decentralise and make information globally available. It wasn't built for some morally bankrupt interest groups to turn a profit. The HTTP protocol is explicitly in favour of user control over content.
Ad blockers are not controversial, even if the ad industry would like them to be.
It's the other way around. The HTTP protocol and supporting web standards were designed from the ground up to allow and encourage the kind of things an ad-blocker does. The browser acts on behalf of the user (hence the term "user agent"). Links in an HTML response are information, "there is something related over there", not a command, "you must go there".
The right way to solve ad-blockers is for sites to comply with the protocol they're operating under - to refuse delivering a resource, with 402 or 403 code, until payment is provided, or an ad is displayed. AKA "the paywall". The controversy only exists because many website operators prefer to be dishonest and manipulative - they post content that they mark as free giveaway, but simultaneously demand compensation. They stir up drama of how ad-blockers are immoral, whereas in reality, blocking ads is "playing by the rules" and it's them who are in violation of human decency.
> Your position, however, seems to be that doing what the service you are using is asking you to do in exchange for its information is "crazy" - and that is straight up ridiculous.
It's not crazy. But it's also not required by any technology, law or custom. Thing is, the service is asking the wrong way. HTTP protocol was created with means for asking to do something. Like, by responding with 402 Payment Required or 403 Forbidden and some instructions on what you want the user to do, instead of responding with 200 OK + content + guilt-tripping popups and pretending to be victim in news articles.
 - Want the "command mode" web? Invent your own, DRMed one. Because otherwise what you're doing is, again, trying to have your cake and eat it too - putting your content on the public web to gain free audience, and then refusing to play by public web's rules.
It's kind of funny that that was, originally, the very definition of "hacking", and we are in a site called "hacker news". But moving on...
The adtech industry got in its head that it can just arrive in a place that already existed (the Internet) and start inventing implicit contracts. It reminds me of the "nice guy" that does some nice gesture for a woman he likes and then gets outraged when the "implicit contract" for sex is not honored.
Hosting is not even that expensive. You know what's expensive? "SEO", buying traffic, in short polluting the very environment where you exist. That is the sort of thing many companies will do with their ad money. They do what they can to place themselves between me and the stuff I want to talk/read about, and then act all outraged when I take counter-measures against their spying and manipulation attempts. That's rich.
I hope EasyList continues to do its job of acting on behalf of the users and does not get swayed by bullshit.
They often bend over, though. I don't remember the specific one, maybe amp-analytics.google.com. They also removed some anti-adblock thing after a DMCA complaint.
You have to use several lists, like uBO does.
"Tracking" has never been very well defined. It just means something like, "collecting data we don't think you should collect."
If you want something more specific, lots of conflicting definitions do exist, but nothing that people can agree on.
A “user tracker” sets cookies and follows a user around, knowing what the user has or has not done before.
A “site tracker” (bad wording I know) follows what happens to a site, what device types accesses it etc.
Sure, making direct requests to the analytics servers exposes the IP and makes it somewhat possible to track the user. Tracking based on IP is pretty inefficient though, due to shared IP addresses, IP address changes when jumping between networks, and VPNs.
Ideally SimpleAnalytics would provide a tiny proxy/washing script that one can host on one's own server and include on one's site, rather than including it directly from SimpleAnalytics' servers. That way one can guarantee one's users that no IP address is leaked to a third party (which would also make a Data Processing Agreement completely unnecessary from a GDPR perspective).
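Such a washing relay is small enough to sketch. The upstream endpoint below is an assumption, not SimpleAnalytics' documented API, and a production version would sit behind the site's own domain:

```python
# Hypothetical first-party relay: the page loads the analytics script and
# sends beacons via our own origin; we forward requests upstream with
# visitor-identifying headers stripped, so the third party only ever sees
# our server's IP. The UPSTREAM URL is an assumption.
import urllib.request

UPSTREAM = "https://queue.simpleanalytics.com"  # assumed endpoint

STRIP = {"x-forwarded-for", "x-real-ip", "cookie", "referer"}

def scrubbed(headers: dict[str, str]) -> dict[str, str]:
    """Drop headers that could identify or locate the visitor."""
    return {k: v for k, v in headers.items() if k.lower() not in STRIP}

def relay(path: str, headers: dict[str, str]) -> bytes:
    """Forward one request upstream using the scrubbed header set."""
    req = urllib.request.Request(UPSTREAM + path, headers=scrubbed(headers))
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```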
1. I would be interested to find out if being geographically between San Francisco and Amsterdam is actually good for latency.
2. I think the usual solution to having customers in the US and Europe and wanting to keep latency down, is to setup servers in the US and Europe. So, this strikes me as an odd justification of the decision.
From Palo Alto : round-trip min/avg/max/stddev = 183.514/184.035/184.898/0.466 ms
From Amsterdam : round-trip min/avg/max = 36/37/39 ms
From London : round-trip min/avg/max = 48/48/50 ms
Obviously this can vary quite a bit depending on what peering you have at your disposal.
For reference, from Palo Alto to London:
round-trip min/avg/max/stddev = 156.236/156.921/158.271/0.834 ms
For latency, I would choose NY/NJ or similar if you really want one location to serve both EU and US; Though this isn't ideal, it does lower the latency to West Coast US quite a bit.
From San Jose to Newark :
min/avg/max/stddev = 73.462/73.547/73.660/0.078 ms
From London to Newark :
min/avg/max/stddev = 72.836/74.323/75.600/1.200 ms
The submarine cable map shows one cable going west to Greenland and then Canada, one that goes to northern England, and two to Denmark. The Canadian landing isn't very close to any other transatlantic cables, so it may not be very well connected (land connectivity is much harder to map, though).
Anyway, making location decisions on the assumption that internet distance matches physical distance is kind of silly. There are many physically adjacent countries that don't have direct interconnects, and it's pretty common for traffic to be exchanged through somewhere much farther away than the ultimate destination: many South American countries exchange traffic in Miami.
Also, the FBI merely made suggestions to the Reykjavik police to infiltrate servers, and they did.
> After the initial revelation of the server's location in a data center in Reykjavik, Iceland, the filing explains that Reykjavik police accessed and secretly copied the server's data. As agents of a foreign government, the prosecution argues, they weren't required to seek a warrant from any US authority.
The owner of those servers is in a US prison right now, serving a double life sentence.
Iceland's domestic laws didn't help here, and the lack of a formal cross-sharing arrangement with the intelligence communities didn't help either.
So you are either implementing a zero-knowledge service to begin with, or wasting your time
If the server uses full-disk encryption, and if it's well locked down, it would be nontrivial to secretly access and copy the server's data.
I mean, adversaries could attach a keyboard and monitor, but they couldn't log in. And you can even delete the root password, and allow only key-based login via SSH.
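Locking down logins that way takes only a few lines of OpenSSH configuration; these are standard `sshd_config` directives, though the exact hardening set depends on your setup:

```
# /etc/ssh/sshd_config: key-based authentication only
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password   # root only with a key, never a password
```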
> If the server uses full-disk encryption, and if it's well locked down, it would be nontrivial to secretly access and copy the server's data.
OP's article mentions this; part of the reason they moved out of the US is that RAM can be trivially read even if full disk encryption is used. Reading RAM still works in Iceland.
> RAM can be trivially read even if full disk encryption is used
I wouldn't say "trivially", but yes, it can be. But if you're that paranoid, you can embed key parts of the motherboard in alumina-filled epoxy. Its thermal conductivity is good, and you can add fins and fans as needed. You can even embed trip wires in the epoxy, to trigger system shutdown if tampered with.
A lot of this seems like security theater, especially while still hosted behind Cloudflare.
Yeah, here it does seem security theater.
But still, it was a good writeup. I mean, dropbear and all.
I have no clue why they're using a VPS after all that. I mean, if they're a real business, they ought to just set up a server and ship it to Iceland. If they want the ease of a VPS, it's easy to do secure KVM on an FDE server. Even with Docker containers within KVM, if you like.
But what I mainly meant is that a "real business" can afford to build secure servers, ship them to Iceland, and send trusted staff to set up and configure them.
You probably know this, but anyway. If you're setting up FDE with dropbear on a remote server, it's best to build the installer on the machine.
I dug around. So to remotely unlock LUKS on bare metal in a datacenter, one can use a custom initramfs, like https://github.com/mtth-bfft/dracut-dropbear-unlock
But still "boot time SSH server's key is stored unencrypted, man-in-the-middle attacks can also be carried out to recover the encryption key at boot time". Ideas?
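One partial idea (my own suggestion, not a documented dracut-dropbear-unlock feature): pin the initramfs host key separately on the client side, so a swapped boot environment fails loudly. Paths and port are placeholders. Note this only detects key replacement, not key theft; since the dropbear key sits unencrypted in /boot, anyone who can copy the disk can still impersonate the server.

```
# ~/.ssh/config on the admin machine
Host server-unlock
    HostName server.example.com
    Port 2222                                  # dropbear on a distinct port
    User root
    UserKnownHostsFile ~/.ssh/known_hosts.initramfs
    StrictHostKeyChecking yes
```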
> Full-disk encryption doesn't protect you against someone with physical access to the machine: the encryption key can be recovered from RAM at runtime (e.g. cold boot attacks), and since the boot time SSH server's key is stored unencrypted, man-in-the-middle attacks can also be carried out to recover the encryption key at boot time. Consider this in your threat model.
So yes, for the "evil mastermind, supervillain level" you gotta embed the RAM, and everything that would give adversaries physical access to the RAM, in alumina-filled epoxy. Or better, alumina-and-fiberglass-filled epoxy. The alumina gives you heat conductivity, and the fiberglass strength. Plus embedded trip wires to nuke the board if they're cut.
It's also prudent to disconnect all ports from the board, except for the NIC, and embed everything that could be used to reconnect them.
With that, it'd be very difficult to get anything from RAM. Attempts at physical access would destroy the system.
But I wonder if there's a way to reproducibly generate SSH key pairs from hardware IDs. That way, there'd be no keys stored in /boot, and they'd be generated at each boot. So only your hardware would have the right keys.
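A minimal sketch of that idea, with the obvious caveat: the derivation is only as secret as the hardware identifiers feeding it, and things like DMI serials are often readable by anyone with the machine in hand. Only the seed derivation is shown (standard library); turning the 32-byte seed into an actual Ed25519 key would need an extra library.

```python
# Deterministically derive a 32-byte Ed25519-sized seed from hardware
# identifiers at each boot, so no private key needs to rest in /boot.
# Caveat: whoever can read the same identifiers can re-derive the key.
import hashlib

def key_seed(*hardware_ids: str, salt: bytes = b"boot-ssh-v1") -> bytes:
    """Derive a stable 32-byte seed from a set of hardware identifiers."""
    material = b"\0".join(h.encode() for h in hardware_ids)
    # scrypt makes brute-forcing guessable identifiers somewhat harder
    return hashlib.scrypt(material, salt=salt, n=2**14, r=8, p=1, dklen=32)
```

Because the derivation is deterministic, every boot regenerates the same key pair, so the public half can be pinned once on the admin side.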
If this thing works, there is no need for remote ssh FDE unlocking.
And this could also work via dropbear. There'd still be an exposed dropbear SSH key in /boot. But it'd be the TPM that you're unlocking. And I assume that there are ways to verify the authenticity of the TPM via initramfs, before providing a password.
Am I understanding that right? It does seem more complicated. Especially given that you don't have physical access to the server.
Yup. Nope. The key is protected by TPM magic :). TPM measures stuff, then we put the key in TPM and seal it to PCRs 0-13 measurements (whole boot environment). At boot, TPM will allow tcsd daemon (part of initrd) to read LUKS key once, and only if all measurements match, this is as far as I got.
End result is unlocked volume, without any passphrase prompts, neither on console nor via ssh.
Like some software that floods all the RAM and memory caches with erroneous data, and then the hardware solution of the epoxy trip wires.
For the trip wires thing, you can just have them apply 5VDC to the wrong RAM pins. And then, if you wanted to recover the motherboard, you'd need to chip out stuff, and replace it. But it's probably only the CPU that'd be worth recovering. And maybe you'd want to embed that too, just in case.
Which leads back to the following quote from the author:
> But what happens if we block alternatives even if those alternatives are taking the privacy of the user very serious. I care about the privacy of the individual. I don't collect any personal information (I don't even store IP's). Even if you have your Do Not Track-setting turned on in the browser I do not collect any information (see our script).
There is no country on Earth that will meet your requirements for hosting. For example, if you host in Russia, you probably can evade the US government's prying eyes, but then you have to deal with the Russian government's prying eyes.
I strongly doubt Digital Ocean would give customer data to anyone without a warrant from a judge. And there could be some scenarios I'm not thinking about, but I also doubt a judge would ever grant a warrant to collect bulk analytics data. And I think it would be unlikely that law enforcement would want to request a warrant just for some narrow analytics collected on a few specific individuals.
Also, as others have pointed out, you actually make yourself totally fair game for US intelligence agencies by being in a non-FVEY country, and even more so because Iceland's ISPs peer with ones in FVEY countries.
But way more importantly than all of that, the threat model is wrong. You're likely at far greater risk from cybercriminals and regular blackhats than you are from any government. Digital Ocean (very much unlike Linode and some other big providers) has never had a (known) breach, and probably invests way more into security than the Icelandic provider you switched to does. DO likely has many world class security engineers employed; maybe your Icelandic provider does, too, but it's less likely.
And this isn't even going into the added management and latency issues.
I feel like you're kind of handicapping yourself without any significant privacy gain or increase in customer acquisition. You're getting feedback from very suspicious people who want to block all use of your, and others', services. You should take feedback from such a group with a heavy grain of salt.
This also does nothing to actually get the domain removed from the block list - there is probably nothing you can do there, other than gray hat stuff like constantly rotating domains and IPs, or pivoting and changing your entire business model and company.
A subpoena from a grand jury or an NSL would be more than sufficient.
The email isn't actually deleted - Stripe's logs will permanently hold that data. If you don't want to retain the email address, you'll need to send receipts etc yourself.
If the service keeps growing, they will quickly realize that latency and availability matter. Uptime would now be a major concern. If something blows up in whatever facility houses these servers, they will be fucked. No way to shift traffic to another server.
And let's not even talk about computing needs. At scale, that matters. Having managed services also matters and helps reduce operational cost (a lot). Having a bare server in someone's garage in Reykjavik is the opposite of scaling. It's literally a recipe to deprive yourself of the technologies that can make you successful, and definitely a way to slow down your progress.
If SimpleAnalytics' goal is to stay small, then maybe this could make sense, but that for me makes this business pointless and it doesn't sound like the vision the founder has either.
Or maybe this is just an MVP until they can have their own on-prem infrastructure with fully encrypted hardware. Idk, maybe the founder's vision goes as far as that, and I can't see beyond the downsides of such a random move.
In theory, the hosting provider, having physical access, has both the key (in RAM) and the ciphertext on disk, so logically there is little point.
It’s worth noting that my hosted bare metal boxes have encrypted data partitions with keys I provide over ssh each boot.
Of course, if it is a VM, anyone with root on the hypervisor (i.e. the hosting company) can trivially dump the memory and encryption keys.
Someone physically attacking a running computer is a different, and much harder, threat model.
This is why companies employing FDE will also require laptops to be shut down before being moved.
I'm browsing to med.com/somedisease and now CF links my browser to the URL from Referer header.
> [...] by using the referrerpolicy attribute on <a>, <area>, <img>, <iframe>, or <link> elements
Hm, seems not to work for scripts.
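For what it's worth, a document-wide referrer policy also covers requests initiated by scripts, since fetches made from a page fall back to the document's policy when no per-element attribute applies. It can be set either as an HTTP response header or as a meta tag:

```html
<!-- Site-wide: send no Referer header on any outgoing request.
     Equivalent HTTP header: Referrer-Policy: no-referrer -->
<meta name="referrer" content="no-referrer">
```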
Thank you for existing.
Also makes it easier to hire people if you don't have to update any texts ;)
The important thing here is rightful skepticism of companies which give away valuable, complex software (a web analytics tool) for free in return for access to your customers' behavioural data, which is then sold or used in other parts of their business (i.e. Google Analytics/Google AdWords).
Making this a thing about the mass surveillance of the Five Eyes, and thus having to avoid hosting companies such as Digital Ocean and AWS, just loses a lot of us and frankly is a bit too paranoid/silly.
You guys in the USA might not hear a lot about it, but this is a business concern in no small number of countries around the world. Especially Europe (or I might think so because I'm there; selection and confirmation bias are always a possibility).