> The above environment variables were being dumped in the source of the login page on Silk Road. They contained the real IP address of the server.
Second - the FBI considering "fiddling" with "miscellaneous" input characters into the login page not illegal access is good - it means the next person charged under the CFAA for "exceeding authorization" by fuzzing will have a precedent to cite.
edit: To add a third point: if you are hosting a Tor hidden service, do it inside a virtual machine, and put it behind a gateway that acts as an isolating proxy. Clients/users should do the same, as it protects against malware attacks (even better, freeze a VM snapshot and restore it each time you need it). Whonix does this - although it is easy to set up yourself (I use OpenBSD as the gateway; much slimmer than Whonix and a whole lot less going on)
While this is good advice and will help you in the event of your server accidentally leaking debugging/config information, it won't help if someone is able to get the hidden service to make a network request. For example, if they can get it to send an email or check if a page is online, then that email will obviously go through the gateway. And obviously if someone gains arbitrary code execution, they can just Google "my IP" and see the externally facing IP.
Some other useful advice: if you're setting up a Tor hidden service, make sure the HTTP server only accepts requests from the Tor network. The FBI claims that when they put the $_SERVER['SERVER_ADDR'] IP in their browser they got the SR home page right on port 80, which is extremely poor security on SR's part. Someone running a distributed scan of all web servers on the Internet could have found it on their own within a few weeks or less. And in fact, this is becoming more common as a technique to identify origin servers hidden behind reverse proxies and CDNs.
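The usual way to enforce this is to bind the HTTP server to the loopback interface only and let the local tor daemon hand hidden-service traffic to it, so nothing ever answers on the public interface. A minimal sketch (directory path and port are illustrative):

```
# torrc on the hidden-service host
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080
```

The web server then listens only on loopback (e.g. `listen 127.0.0.1:8080;` in nginx), so a mass scan of the IPv4 space finds nothing on that box.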
> Put it behind a gateway that acts as an isolating proxy.
You don't NAT - you forward the required port through the gateway machine to the virtual machine. Either terminate Tor on the gateway and forward the web port, or terminate Tor on the web server and forward the Tor traffic on the gateway.
In that case nothing can request out from the web server. If your server needs to make requests - such as getting the latest bitcoin price - you do that on another server and run a queue that will pull the data over.
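On an OpenBSD gateway (as mentioned upthread), the "terminate Tor on the gateway" variant can be sketched roughly as below; interface names and addresses are assumptions, not a hardened ruleset:

```
# On the gateway (illustrative)

# torrc - tor terminates here and hands requests to the internal web server
HiddenServiceDir /var/lib/tor/hs/
HiddenServicePort 80 10.0.0.2:8080

# /etc/pf.conf - default deny; the web server can never initiate anything
ext_if = "em0"      # public internet
int_if = "em1"      # isolated internal network

block all
pass out on $ext_if proto tcp from ($ext_if)         # the gateway's own tor only
pass out on $int_if proto tcp to 10.0.0.2 port 8080  # tor -> web server
```

Because pf is stateful, the web server's replies ride the existing state; there is no rule that lets it open a connection of its own.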
I believe that avoiding detection online can be done if you are very strict about your policies.
I don't believe DPR had the knowledge to understand the hows and whys of a setup like the one you're describing, unlike skilled engineers who design multi-layered approaches.
> they didn't believe they would have to eventually face an enemy with so many
> resources (FBI/NSA). I believe that avoiding detection online can be done
> if you are very strict about your policies.
Are there standards for these kinds of general pseudo-devices in virtual machines? (Or would you use the cut & paste buffer, for example?)
To my knowledge, secure sites use(d) hardware methods to guarantee a one-way channel (snip half an Ethernet cable, override peer/auto-detection, and then use UDP).
So you have your gateway virtual machine which has one interface to the public web and one interface to the internal network. On that network you have two or more machines:
a) is the web server, which can't make any requests out to anywhere except other machines on the same local network. It can receive Tor or HTTP traffic in from the gateway.
b) other servers hosting local services, which can't make any requests out at all but can receive queries from the web server on their HTTP/queue port. A database server for your web application would also sit on a machine like this.
You can put b) on yet another separate network behind the web server if they don't need to route out over Tor (or even if they do).
You're slicing the tasks up and isolating them on separate machines to minimize the attack surface. Someone breaking into the web server will not be able to make any queries out to the web or to Tor, and will only be able to query the local services servers in b), such as the queue.
RabbitMQ has an HTTP interface, 0MQ uses TCP.
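The pull-only pattern for b) can be sketched with a stdlib queue standing in for RabbitMQ/0MQ. In a real deployment the two roles run on separate machines with the broker on the services network; the names and the price value here are invented for illustration:

```python
import queue
import threading

# A stdlib Queue stands in for the broker (RabbitMQ / 0MQ).
price_queue = queue.Queue()

def price_fetcher():
    """Runs on the outward-facing box: it alone makes external requests."""
    latest = ("BTCUSD", 412.50)   # pretend this came from an exchange API
    price_queue.put(latest)

def web_server_poll():
    """Runs on the isolated web server: it only ever *pulls* from the
    local queue and never initiates a connection out of its network."""
    try:
        return price_queue.get(timeout=2)
    except queue.Empty:
        return None

threading.Thread(target=price_fetcher).start()
print(web_server_poll())   # ('BTCUSD', 412.5)
```

The point of the design is directional: data flows toward the web server, so compromising it gains no channel out.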
The other way to do it is to isolate every machine from every other machine and implement a SOA over Tor.
Taking the example of your web server needing to know the latest bitcoin price, you would implement an API on another hidden service and restrict access to it from only your web server. The web server would then have to be allowed to make queries out over Tor, but this can remain on the Tor network and can again be restricted to only your hidden 'proxy' services.
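Restricting the 'proxy' API service to a single client is something Tor supports directly via hidden-service client authorization. A sketch using the v2 `HiddenServiceAuthorizeClient` mechanism current at the time (paths are illustrative; the onion address and cookie are generated by tor into the service's hostname file, so the placeholders stay placeholders):

```
# torrc on the box hosting the price API
HiddenServiceDir /var/lib/tor/price_api/
HiddenServicePort 80 127.0.0.1:8081
HiddenServiceAuthorizeClient stealth webserver

# torrc on the web server - the only client tor will let reach the service
HidServAuth <price-api-onion-address>.onion <auth-cookie>
```

With stealth auth, clients without the cookie can't even fetch the service's descriptor, let alone connect.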
Most people associate Tor and hidden services with sites being slow and down all the time, but that is more a function of the level of experience in the field. Tor hidden sites have a different threat model so the ideal architecture is different to most common network architectures.
The biggest and most obvious threat to a lot of these hidden services are application-layer attacks.
You should not be using platforms that have the ability to do vast inspection of their runtime environment or make arbitrary outbound requests.
I'm not necessarily a fan of the "SOA-over-Tor" approach for something like a Bitcoin price: an explicitly whitelisted bitcoin-price-checker service communicating over a small-surface API (0MQ, or RabbitMQ, albeit a somewhat larger internal attack surface) to another VM that has externally-terminated, Tor-only outbound internet access is probably easier to work with.
I should spin up a CoreOS distribution with all guest VM outbound access turned off and try out host-level Tor termination.
By the way, upon re-reading the FBI's full report, it looks like they may not have found the IP address through the environment variable leak, but rather that some of its resources (like its CAPTCHA image) were actually being served from its clearnet IP address.
That makes zero sense.
This is what firewalls are for
For an attack like the one used here (where they found the IP) then there is no difference.
I realize that a significant portion of the internet runs on PHP, but if you are going to do something as high-risk as SR, you should use more robust tools.
The fundamental problem is that no matter what language and server combination you choose, none of it was written for hostile environments. Exposing the IP address of an internal server is considered a very low-risk vulnerability (compared to, say, RCE), so very little work is put into auditing for such risks.
I'd rather have ample documentation on how to harden my PHP application than no documentation on how to harden my Node application. Security through obscurity is no security at all. Plus, many of the mitigation strategies are simply rules like "don't use mysql_query" or "use htmlentities with ENT_QUOTES and UTF-8 to escape your output", both of which can be built into a framework. See: Laravel.
Downvoting is much easier than formulating a response, isn't it?
Why not Python, or Go, or even Haskell? There are many languages other than the three you mention which have much better reputations for secure web programming.
You likely got downvoted because you presented a false choice to back up your argument.
* The language itself has a pretty good security track record
* The more popular application frameworks (Django, Flask, et al.) seem to be doing pretty well
* Process isolation techniques are well-known
It's an offhand observation I've made before, e.g. https://github.com/paulczar/docker-torbrowser/issues/2
Tor should run in a VM or container, and the browser should run in a container that can only talk to Tor; the only other way out is a dumb screen-scraping display for the browser. Make sense?
A separate $40 ARM box running some kind of open firewall software (OpenBSD, m0n0wall, pfSense) that blocks all outgoing traffic that isn't Tor is also preferable to trusting, say, VirtualBox NAT, which likely leaks the host IP to the guest and your VM state to the host.
A bonus to having a dedicated system for whatever secret-squirrel things you're doing is that it's a physical barrier protecting you from yourself, like when Sabu accidentally logged into IRC in the clear. It's also a good idea to build your own Tails from source and hardcode a gigantic password into the startup scripts. This is the "I'm too drunk to be doing this" protection should you ever be off your ass and think it's a great idea to go on IRC and taunt the feds, or check your sales PMs for the booters and Carberp mods you're peddling on not_legaL_hax4sale.ru forums. After 10 tries and failing you'll either pass out or hulk-smash the box; either way you will not wake up in jail.
I've considered putting pfSense on ARM.
Well, most (all) of the $40 ARM boxes I've seen run the Ethernet port off the USB bus... so a USB Ethernet dongle should suffice.
(The USB bandwidth should likely still be faster than your internet connection, but depending on which ARM chip you have, the chip may become your bottleneck in processing the packets... the BeagleBone Black seems like it would suffice, though; the RPi might be a bottleneck.)
Check out Qubes, a project that virtualizes each app:
Doing that today with Docker containers would be interesting.
This is one of the main reasons why there are efforts to move it to Chromium
The timing is right since OS X Yosemite will include a Hypervisor - so for the first time ever all major operating systems will support it and we have Docker to make it all portable. No more kernel module installation, which is what has always made these type of solutions hard to roll out to consumers and non-techs.
Is that a type 1 hypervisor that can run a linux VM image/container without vbox/parallels/fusion, as if the VM/container was an "app"?
If you look at the architecture diagrams for vbox/vmware etc. you'll see they all have their own implementations of each library to do this on OS X. This is where the performance difference comes from. With Hypervisor.framework and each of the virtualization firms porting their platforms to it, it means this performance difference mostly disappears and the vendors can focus on improving the higher layers.
When Microsoft did this in Windows, the ecosystem exploded. One difference is that Microsoft built and released tools and apps at the higher layers, such as application virtualization with App-V (where they compete with Citrix and VMWare). It doesn't look like Apple will do that, yet - but what it does mean is that someone can port or implement a common virtualization layer on OS X and bring better, cheaper, more accessible and open app virtualization to OS X (with great performance).
It is a pretty big deal with potentially a lot of interesting opportunities and applications. There isn't enough information available about it yet, though (or I simply haven't found it) - someone with more familiarity with the tech and virtualization in general will likely dump info about it and what it would mean when we get more information about it and are closer to 10.10 going gold (the virtualization co's porting their platforms to support 10.10 will likely be an issue).
edit: to add, the good thing about making Hypervisors built into the OS, having an abstraction layer on top of that, and then on the other end having Docker abstract the tech is that suddenly everything becomes portable and developers can focus on the more value add features (since core virtualization tech becomes commoditized)
Apple would need to document the virtual device interfaces so that Linux/BSD in-guest drivers can be developed by the open-source community. Until then, only OSX-on-OSX would work.
Hopefully the Apple hypervisor can be disabled, e.g. if someone wants to run OS X on VMware on Windows on Mac hardware, or OS X on KVM on Linux on Mac hardware.
Their general opinion is that this makes Tails less secure, as now you have to trust both the host and the virtualization software.
Provided that you trust Tails itself. The expected way to run it is off a live CD. That way the trusted OS is only ever in RAM, and if you use a non-rewritable disc you can also be assured that the Tails disc itself cannot be modified after its creation.
Tails handles the 'only talking to Tor' via iptables. Unless I am mistaken Tails' firewall will not allow clearnet connections.
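Correct - Tails' firewall is effectively a whitelist: only the tor process itself may create clearnet connections. The essence, in iptables-save format (a simplified sketch, not Tails' actual ruleset):

```
*filter
:OUTPUT DROP [0:0]
# only the tor daemon (running as the debian-tor user) may reach the clearnet
-A OUTPUT -m owner --uid-owner debian-tor -j ACCEPT
# local applications talk to tor over loopback (SOCKS on 9050, etc.)
-A OUTPUT -o lo -j ACCEPT
# everything else falls through to the DROP policy above
COMMIT
```

The owner match is the key trick: applications don't get to choose whether they use Tor, because non-tor packets simply never leave the box.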
The question is, inside Tails itself, should apps like the browser and your mail and chat programs be isolated from each other, i.e. placed in "jails", "containers", "zones" or whatever?
Security in depth.
On that note, it seems like his lawyers are performing well; they have challenged absolutely everything in pre-trial motions and there have been some interesting rulings (they've lost almost all of them).
Warrantless surveillance of foreign servers plus the domestic "general warrants", "bitcoin isn't money", his hacking charge under the CFAA and now a likely challenge to the server evidence that was found by "manipulating inputs" (which is interesting to contrast to Ulbricht's own hacking charge).
Both the motion to dismiss and the judges ruling are interesting reads:
I'd be interested to know what an experienced lawyer or trial specialist thinks of how it has been going so far. I have yet to find a good opinion piece on the topic.
They basically said his attorneys are doing the right thing by going after the warrant stuff. It's pretty standard in legal circles: go at the base of their argument. If you can get the original warrant thrown out, it all goes. The only problem is they suspect this is the only real strategy his defense team has right now.
Once the judge rules on this - which my attorney friends believe will be in the FBI's favor - his defense team won't have much to pivot on. All the evidence will then be compounded and will stick, which, as the article pointed out, is pretty bad for him. It will also confirm his identity as 'Dread Pirate', which a lot of the case hinges on. Once the FBI gets that point nailed down, the rest is pretty academic.
The only thing that would make this interesting is if the judge rules against the FBI and henceforth a majority of the FBI's case is destroyed. Both of my friends figured the FBI would drop that case and simply let the federal case on conspiracy to commit murder and hiring a hitman run its course.
Also, keep in mind there are still three other admins that have been indicted. If they get any of those guys to give them more information about Ulbricht, it's all downhill from there. They might not even need most of the stuff they entered as evidence if they can get one of these three to roll over on Ulbricht.
Pretty much any way you slice it - he's screwed.
This was ultimately to prevent stuff like burglars falling through skylights, then suing the owner of the house they were burgling in civil court while facing criminal prosecution.
I'd be staggered if it were possible to make a relatively weak "authorization" claim stick, seeing as the police do have the right to investigate things left "in plain view" - which, you could argue, a web server page you accidentally sent compromising fuzz data to could fall under.
A court would also have little trouble allowing such a thing, since it's a narrow interpretation that doesn't legalize it for the ordinary citizen (though IMO I think it probably should be).
That doesn't make sense to me. You haven't "committed a crime" until you've been convicted.
You seem to be saying that you lose the rights merely by being suspected of a crime.
If that is the case, those rights don't actually exist in the first place.
In the US, they have successfully prosecuted people for "fiddling" just like the FBI did here.
The point here is somewhat similar: trying to sue the FBI for unauthorized access to a server would hinge on the relative standing of that law compared to much more serious offences (i.e. conspiracy to murder being the big one) - since the case would have to come from DPR against the FBI, and would thus be subject I suspect to similar tests of standing.
Other people have made the wider point more thoroughly as well - you'd really struggle to prove wrongdoing when all that was acquired was an IP address.
To answer my own question, I guess you could say weev, but I think his troubles really began when he made the pivot from fiddling to mass scraping. I think it's harder to argue the FBI's access was unauthorized when what they were looking at was the "access is denied" page.
The CFAA talks about "unauthorized access", not about any "unwanted" interaction with a computer. Though I think the FBI will simply claim that the steps used at that stage of the investigation were implicitly permitted by the fact that it was conducted for a lawful government purpose (there's some fancy legal term for this, but I forget what it is), and wasn't otherwise forbidden to the government since it didn't involve a search or seizure.
If accepted, that would mean that J. Random Hacker doesn't get to mess around with websites just because the FBI got to, even in situations short of CFAA violations.
edit: I guess they could have been providing incorrect passwords for an account that they created. I agree that is easier to call unwanted.
Basically all the analyst said he did was to load the SR website far enough until the captcha popped up, and notice in the wireshark (or equivalent) logs that a non-Tor IP address was reached from the SR website.
If the analyst tooled around on the website after that, then even if the court were to call foul on that subsequent access, the public IP address wasn't derived from tooling around so it (and the evidence derived from that) wouldn't be at risk either.
If I remember correctly, an error page leaked an IP in 2012/2013. Someone had to realize the captcha was leaking at some point? Consider me confused ;)
The word "miscellaneous" to me implies things like quotes, backticks, and similar non-alphanumeric characters. "Fiddling" with them would be attempting to find somewhere that didn't quote an input value correctly: they were attempting to find something like an SQL injection.
It wasn't much of a "brute force" attack, it wasn't SQL injection (though it's possible they were poking at that too), but just the simple question, what happens if we try to login five times with "miscellaneous" passwords? Hey, look, a captcha! I wonder what server the image comes from...
Deciding which way the decision should go must be causing quite a few hours of concentrated legal consideration - there are downsides in both directions for the government.
The captchas, on the other hand, might have been generated using existing software. Remember: these captcha images will have to be autogenerated by a script which, as a convenience to the user, might have used some kind of mechanism to determine "fully qualified" URLs. And this had slipped below the radar, as it's a feature used much less often, and hence likely to receive much less scrutiny.
I think it's pretty likely that these kinds of information leaks can happen when you deal with a larger codebase or system. Hence following the advice of some other HN users, who recommend a strictly firewalled system for this kind of use-case, looks like a prudent thing to do.
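A hypothetical sketch of that failure mode - a convenience helper that "fully qualifies" URLs by falling back to the server's own address. The function and field names are invented for illustration, and 203.0.113.7 is a documentation IP, not the real one:

```python
def captcha_url(server_env: dict) -> str:
    """Build an absolute URL for the captcha image the way a convenience
    helper in a larger codebase might: fall back to the server's own
    address when no canonical hostname is configured."""
    host = server_env.get("HTTP_HOST") or server_env["SERVER_ADDR"]
    return f"http://{host}/captcha.png"

# Behind a hidden service the onion hostname should appear, but with the
# fallback the clearnet IP leaks straight into the page source:
print(captcha_url({"SERVER_ADDR": "203.0.113.7"}))
# -> http://203.0.113.7/captcha.png
```

Exactly because the fallback only fires in an unusual configuration, it survives review until someone fuzzes the page and reads the HTML.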
People who get caught trying to brute force servers (do people even get caught for this???) are the lowest hanging fruit and are the ones least harmful to society.
My point is precedent doesn't really matter, because realistically, you won't have anyone to actually prosecute except for the 13 year old "hacker" who had no idea what they were doing.
I was wondering about this as well. If this kind of thing is not done pursuant to a search warrant and is in violation of the site's TOS, is it legal for them to do? What happens if I put a blanket ban in the TOS on use of the site by anyone acting as an agent of any law enforcement agency? Certainly with a warrant it would hold up, but without one, it may not.
Interesting legal question, I guess only time will tell.
Actually, I'm not so sure you don't have that backwards. If a police officer shows up at your door without a warrant, you can tell him "hey, no cops allowed in my house!" and make him leave. But if he or she has a warrant, you can deny them all you want - they're legally entitled to come in and look for stuff.
With a login page, it would work roughly the same way. "No cops allowed" might work, but not once a judge gives the police a right to be where you don't particularly want them.
Unless Ulbricht's lawyers successfully argue that fuzzing constitutes an illegal search. The law respects precedent, and there is more precedent finding that fuzzing constitutes illegal access.
To make it worse, I never install the VM tools on any machine because I've always felt they are all crappy and insecure (and make VM detection easy). Most of the VM drivers try to fix this problem, but it would be best fixed with virtualization hooks where commands are run in the client by the host whenever you freeze/unfreeze.
It's about understanding the pros/cons of each approach and weighing them up. I'd rather have a known-good snapshot to start off with each time I use it (and deal with side effects such as time sync, RNG seed, handling updates, etc.) than have a long-running machine.
So this is legal when the FBI does it, but when someone else does essentially the same thing on an AT&T server it's identity fraud and conspiracy to access a computer without authorization?
I think weev's sentence was utter bullshit, but you can't equate the two things. SR was a drug marketplace, AT&T is not. This is like arguing police shouldn't be able to pick the lock of your drug safehouse door because you aren't allowed to pick other people's locks.
Of course, who watches the watchers? The US government already imprisons people for life without trial or charge, interrogates using the same torture techniques the Soviets used on captured Japanese scientists, and surveils the diplomats of its own allies. It might be, and you know, probably is committing industrial espionage.
At some point you have to raise your head up, see the forest amongst the trees, and just admit that the rule of law is dead in the world. The power asymmetries are too vast, and the people who rule your life know it.
Sure you can equate the two cases. Someone with power decided weev and Ross would be removed from the board and so they were. Let that be a lesson to anyone who could piss off the truly powerful.
Not all warrants are the same, but I've seen them do a lot of damage (broken-down doors, torn up mattresses, general mayhem) in the course of executing a warrant. This seems in spirit with that.
On a related note, why the hell didn't they get a warrant? I doubt it would have taken long.
However, to play devil's advocate a bit: the FBI essentially saw the output of the PHP call `print_r($_SERVER)`. The only thing that's actually sensitive in there is the server's IP address and hostname. This is not usually considered sensitive information. If it is to be believed that is as far as they went before getting a warrant (and I don't know if that's the case or not), then obtaining the IP address would allow them to actually serve a court order to the hosting provider. In that sense it could be seen as non-invasive and purely conducive to their investigation.
But I agree there should not be a double standard. I think what weev did was not illegal, and what the FBI did here was not illegal, personally.
Could that have been the source of the IP leak?
For instance, the DEA can buy drugs in order to trace and later apprehend drug suppliers. They don't need a warrant to do this (though information gained from such a sting operation may later lead to a judge issuing a search warrant).
apparently "no-knock" warrants that get issued like mints at a dance allow SWAT teams to flashbang a baby and no one faces criminal negligence charges.
I guess regular warrants will let law enforcement really make us grab our ankles and do whatever they want to us.
SR being a drug marketplace and AT&T "not being a drug marketplace" are irrelevant. Law enforcement is subject to the EXACT SAME LAWS AS EVERYONE ELSE.
Apparently you like to confuse law and justice with "who's got which legislatures on their payroll and can put people down for accessing their stuff without permission"
1) Run the HTTP server in a guest VM to reduce likelihood of hardware identifier leakage (hardware MACs, HD serials, DMI data, CPUID, etc.)
2) Physically separate HTTP server and Tor client, and restrict communication between them to a simple packetized high-speed serial interface.
2a) Consider an inline filter on this link that watches for private keys, etc. and kills communication upon detection, and adds random latency to packets to reduce bandwidth of timing channels.
2b) Physically isolate these systems as much as possible: power line filters, electro-optical couplers for the comm link, etc.
3) Stub out all Tor crypto operations to an HSM; keep the onion key and do all operations on the onion key on that HSM.
4) Make friends in the criminal underground, because you're probably going to prison eventually, anyway ;).
There's actually a "better" method. Hash every packet, and add latency not only randomly but also, for each field in the packet, based on the hash of that field. (In actuality you want a random salt, so HMAC or similar.)
That way, assuming they cannot figure out the salt, you actually actively prevent a large chunk of timing attacks. You're adding deterministic variation that hopefully swamps the server-side variation, deterministic being the key word. You cannot get around it by repeating measurements, as the delay stays the same.
(Alternatively, keep each packet so that exactly 100ms or whatever has passed between getting the packet in and sending the response, silently discarding any packets that take >100ms to process.)
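The keyed deterministic delay described above can be sketched in a few lines. The 50 ms ceiling and the per-packet (rather than per-field) granularity are illustrative assumptions; a real implementation would sit inline on the serial link:

```python
import hashlib
import hmac
import os
import time

SALT = os.urandom(32)   # secret key; without it the jitter looks random
MAX_JITTER_MS = 50      # must swamp the server-side timing variation

def jitter_ms(packet: bytes) -> float:
    """Deterministic per-packet delay: an HMAC of the packet contents,
    mapped onto [0, MAX_JITTER_MS). Identical packets always get the
    identical delay, so an attacker can't average the jitter away."""
    mac = hmac.new(SALT, packet, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big") / 2**32 * MAX_JITTER_MS

def send_with_jitter(packet: bytes) -> None:
    time.sleep(jitter_ms(packet) / 1000.0)
    # ... hand the packet to the actual transport here ...

# repeated measurements of the same request see the same delay
assert jitter_ms(b"GET /login") == jitter_ms(b"GET /login")
```

The deterministic part is the whole point: random jitter averages out over repeated probes, while HMAC-derived jitter is constant per input and unpredictable across inputs.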
But yes, it reminds me of the hilarity of FBI's characterization of PLA Unit 61398 as some super-scary 'master hackers': real 'master hackers' don't get caught.
Associating with other criminals is a great way to get ratted out. It is also a great way to put yourself on the radar of law enforcement in the first place, so even if you don't hint to your new 'friends' that you are up to something it is still risky.
Here is a better idea: keep it as white collar as possible (no hitmen) and pray for minimum security.
Historically, someone who murders a stranger (i.e., serial killers) is nearly always caught because one victim gets away and tells the police. http://www.wired.com/2011/04/mf_billjames/all/ Otherwise, the cops are generally stuck with their habits of sniffing around friends and acquaintances of the victim (which by definition are doing the non-stranger murders).
"Petty criminals break the law. Bigger criminals skirt the law. The big bosses ignore the law. And the biggest criminals of all - they write the law."
A system can only reveal information it knows, so to defend against this "attack vector" when building a secure system, you must ensure that the software knows as little about the system as possible.
There are various ways that don't require NAT, it's more a matter of ensuring the software doesn't need the information it shouldn't know. And both FreeBSD and Linux have tools to lock down a process so it can't find these out (Capsicum and/or SELinux).
What do you folks think?
There is absolutely no reason to think the FBI did their work the difficult-but-legal way when they already get rich feeds from NSA. This is parallel construction all the way. I hope the defense figures out how to make a giant stink about that.
Given that the NSA is able to collect most internet traffic, and they've been sharing info with other agencies, I would think that most evidence against any defendant could be thrown out. Yes, that would be ludicrous, and that's exactly why the NSA needs to be reformed.
I doubt whether a judge will throw out evidence unless you can prove that it is a parallel construction, and that this was illegal.
You can't prove a negative.
It would surprise me if the FBI isn't already doing this for all known onion sites, just to have the info around.
At any rate, it sounds like a real basic Opsec 101 failure on The Silk Road's part. Not that hidden services are that safe in the first place: the Tor folks have written about how hidden services are quite vulnerable to some attacks. Not something I'd want to rely on to save me from a 30-year sentence.
Edit: If someone had to run a hidden service, it might make sense to set up another onion site (on another Tor instance) and expose that on port 80, as part of a misinformation tactic. Reading several incidents, it seems attackers will use circumstantial evidence to help narrow down the possibilities. By intentionally leaking things (like offhand comments about the weather, using reports from another city) one might be able to gain a few more bits of anonymity back. Of course, someone doing such a thing wouldn't set up their illegal site on a public IP in the first place.
The FBI went the other way: they had the SR login page with an unknown non-Tor IP address and asked "what is at this IP address?" Since it was a captcha image, and captcha images have to be served by the web site that is doing the captcha verification, it strongly implied the IP address was the SR server. As it turned out, it was.
As a co-worker said, "When you live in an egg shell, small cracks are deadly."
I know this may sound silly, but there are tons of USB-stick-sized Linux machines that you can plug in virtually anywhere. I'm thinking of public spaces such as public Wi-Fi. The Tor address can be relayed to other locations if it ever gets breached. And Tor hidden services work great even if you're behind a firewall.
This seems a lot more sensible than opting for a system where your name is somehow attached to a server.
> yeah it was about 6-9 months before it shut down if I remember correctly, it was pretty highly upvoted and a lot of chatter about it on the SR forums. though I think there were at least a few bugs that they had to shut down for once they realized that their asses were on the line.
So yeah, apparently this was an actual bug.
I know this is in hindsight, but you had to be pretty deep in the reality distortion field not to see this coming.
SR was up for a long time... and I'm sure made a lot of enemies... not to mention bots trying to pound on it... So it seems very unlikely that something as basic as not escaping user input strings would bring them down...
Somehow "fiddling with inputs" and "noticing an IP", while innocently phrased, sounds unlikely. It's just so hard to know whether this is parallel construction. The FBI's history with false testimony (i.e. COINTELPRO) is more than damning.
The 'not in the US, not subject to US law' line is a common trick. It's also how the NSA skirts laws about monitoring US citizens (Snowden's leaked documents repeatedly showed NSA partners trading bulk information on US communications for their own), how the CIA and FBI are able to stop Americans and journalists abroad, and how intelligence agencies can propagandize their own citizens while retaining plausible deniability.
The CIA is authorized to - and in the past decade and a half has heavily chosen to - plant stories in the international media wire drops that US journalists subscribe to and use as a source of information. The CIA also works with partners overseas and private enterprises to engineer foreign stories that the US media will then follow and use for reports (Lincoln Group, the Zarqawi PsyOp, etc.). Inevitably some of the material released to the Voice of America and other international propaganda outlets makes it back to the US media.
Furthermore, information can now be targeted in very sophisticated ways (e.g. MINERVA, the USAID Cuba Twitter program) to spread ideas and support (or dissidence) via 'social contagions', all without ever having to directly propagandize any citizens. These sorts of programs are the ultimate and extreme irony of the Soft Power philosophy.
I find it unfortunate that the Constitution grants rights restricting search and seizure (among others), but that our representatives do not broadly, 'de facto', recognize these rights as worth enforcing outside a strictly domestic interpretation. Are these "rights" or "Rights" if our representatives allow and even encourage others to infringe on them?